WorldWideScience

Sample records for normal auditory system

  1. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results, suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
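
    A rough sense of the LTAS comparison described above can be given in a few lines of code. The sketch below is a minimal illustration with an assumed sampling rate, band edges, and synthetic "context" signals; it is not the authors' analysis.

```python
# Minimal sketch: compare the long-term average spectrum (LTAS) of two context
# signals within a frequency band assumed to carry the target phonemic cue.
# Sampling rate, band edges, and signals are illustrative, not from the study.
import numpy as np
from scipy.signal import welch

def ltas_band_level(signal, fs, band=(1000.0, 3000.0)):
    """Mean spectral level (dB) of `signal` inside `band` (Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=1024)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 10.0 * np.log10(np.mean(psd[in_band]) + 1e-12)

fs = 16000
rng = np.random.default_rng(0)
n = 2 * fs  # two seconds of signal
# Two synthetic "talker contexts": noise shaped to emphasize low vs. high frequencies.
context_a = np.convolve(rng.standard_normal(n), np.ones(32) / 32, mode="same")
context_b = rng.standard_normal(n)

diff_db = ltas_band_level(context_b, fs) - ltas_band_level(context_a, fs)
print(f"LTAS difference in the cue-relevant band: {diff_db:.1f} dB")
```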

  2. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    -speech modulation is not directly related to limited auditory-motor adaptation; and in AWS, DAF paradoxically tends to normalize their otherwise limited pre-speech auditory modulation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Measurement of normal auditory ossicles by high-resolution CT with application of normal criteria to disease cases

    International Nuclear Information System (INIS)

    Hara, Jyoko

    1988-01-01

    The purposes of this study were to define criteria for the normal position of ossicles and to apply them in patients with rhinolaryngologically or pathologically confirmed diseases. Ossicles were measured on high-resolution CT images of 300 middle ears, including 241 normal ears and 59 diseased ears, in a total of 203 subjects. Angles A, B, and C to the baseline between the most lateral margins of bilateral internal auditory canals, and distance ratio b/a were defined as measurement items. Normal angles A, B, and C and distance ratio b/a ranged from 19 deg to 59 deg, 101 deg to 145 deg, 51 deg to 89 deg, and 0.49 to 0.51, respectively. Based on these criteria, all of these items were within the normal range in 30/34 (88.2%) ears with otitis media and mastoiditis. One or more items deviated from the normal mean by more than 3 standard deviations in 5/7 (71.4%) ears with cholesteatoma and in 4/4 (100%) ears with external ear anomaly. These normal measurements may aid in evaluating the position of the auditory ossicles, especially in cases of cholesteatoma and auditory ossicle abnormality. (Namekawa, K.)
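
    The reported normal ranges lend themselves to a simple screening rule. The sketch below flags measurements lying more than 3 standard deviations from an assumed normal mean; the means and SDs are rough values derived here by treating the published ranges as mean plus or minus 3 SD, and are not the study's normative statistics.

```python
# Minimal sketch: flag ossicular measurements lying more than 3 SD from assumed
# normal means. Means/SDs below are rough illustrations derived from the ranges
# quoted in the abstract, not the study's actual normative statistics.
NORMAL_STATS = {                   # item: (assumed mean, assumed SD)
    "angle_A_deg": (39.0, 6.7),    # range 19-59 deg treated as mean +/- 3 SD
    "angle_B_deg": (123.0, 7.3),   # range 101-145 deg
    "angle_C_deg": (70.0, 6.3),    # range 51-89 deg
    "ratio_b_a": (0.50, 0.0033),   # range 0.49-0.51
}

def flag_abnormal(measurements, threshold=3.0):
    """Return items whose z-score magnitude exceeds `threshold`."""
    flags = {}
    for item, value in measurements.items():
        mean, sd = NORMAL_STATS[item]
        z = (value - mean) / sd
        if abs(z) > threshold:
            flags[item] = round(z, 1)
    return flags

# Hypothetical ear with a displaced ossicular chain.
case = {"angle_A_deg": 72.0, "angle_B_deg": 120.0, "angle_C_deg": 68.0, "ratio_b_a": 0.47}
print(flag_abnormal(case))  # {'angle_A_deg': 4.9, 'ratio_b_a': -9.1}
```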

  4. Measurement of normal auditory ossicles by high-resolution CT with application of normal criteria to disease cases

    Energy Technology Data Exchange (ETDEWEB)

    Hara, Jyoko

    1988-09-01

    The purposes of this study were to define criteria for the normal position of ossicles and to apply them in patients with rhinolaryngologically or pathologically confirmed diseases. Ossicles were measured on high-resolution CT images of 300 middle ears, including 241 normal ears and 59 diseased ears, in a total of 203 subjects. Angles A, B, and C to the baseline between the most lateral margins of bilateral internal auditory canals, and distance ratio b/a were defined as measurement items. Normal angles A, B, and C and distance ratio b/a ranged from 19 deg to 59 deg, 101 deg to 145 deg, 51 deg to 89 deg, and 0.49 to 0.51, respectively. Based on these criteria, all of these items were within the normal range in 30/34 (88.2%) ears with otitis media and mastoiditis. One or more items deviated from the normal mean by more than 3 standard deviations in 5/7 (71.4%) ears with cholesteatoma and in 4/4 (100%) ears with external ear anomaly. These normal measurements may aid in evaluating the position of the auditory ossicles, especially in cases of cholesteatoma and auditory ossicle abnormality. (Namekawa, K.)

  5. Motor-related signals in the auditory system for listening and learning.

    Science.gov (United States)

    Schneider, David M; Mooney, Richard

    2015-08-01

    In the auditory system, corollary discharge signals are theorized to facilitate normal hearing and the learning of acoustic behaviors, including speech and music. Despite clear evidence of corollary discharge signals in the auditory cortex and their presumed importance for hearing and auditory-guided motor learning, the circuitry and function of corollary discharge signals in the auditory cortex are not well described. In this review, we focus on recent developments in the mouse and songbird that provide insights into the circuitry that transmits corollary discharge signals to the auditory system and the function of these signals in the context of hearing and vocal learning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. A loudspeaker-based room auralization system for auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    2009-01-01

    Most research on basic auditory function has been conducted in anechoic or almost anechoic environments. The knowledge derived from these experiments cannot directly be transferred to reverberant environments. In order to investigate the auditory signal processing of reverberant sounds....... This system provides a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. This includes measures of basic auditory function (e.g., signal detection, distance perception......) and measures of speech intelligibility. A battery of objective tests (e.g., reverberation time, clarity, interaural correlation coefficient) and subjective tests (e.g., speech reception thresholds) is presented that demonstrates the applicability of the LoRA system....

  7. A loudspeaker-based room auralisation (LoRA) system for auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    Most research on understanding the signal processing of the auditory system has been realized in anechoic or almost anechoic environments. The knowledge derived from these experiments cannot be directly transferred to reverberant environments. In order to investigate the auditory signal processing...... are utilized to realise highly authentic room reverberation. This system aims at providing a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. An overall description...

  8. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  9. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  10. Central Auditory Nervous System Dysfunction in Echolalic Autistic Individuals.

    Science.gov (United States)

    Wetherby, Amy Miller; And Others

    1981-01-01

    The results showed that all the Ss had normal hearing on the monaural speech tests; however, there was indication of central auditory nervous system dysfunction in the language dominant hemisphere, inferred from the dichotic tests, for those Ss displaying echolalia. (Author)

  11. Impaired precision, but normal retention, of auditory sensory ("echoic") memory information in schizophrenia.

    Science.gov (United States)

    Javitt, D C; Strous, R D; Grochowski, S; Ritter, W; Cowan, N

    1997-05-01

    Working memory is the type of memory that allows one to hold information in mind while working on a task or problem. The present study investigated attention-independent auditory sensory ("echoic") memory in 18 schizophrenic participants and 17 controls. Schizophrenic participants showed impaired delayed tone matching performance in comparison with controls. However, when groups were matched for performance at 1 s by varying the difficulty of the task across groups, schizophrenic participants showed normal retention of information as reflected in normal tone matching performance. These findings demonstrate that the schizophrenic deficit may lie in the sensitivity of the echoic memory system rather than in the duration for which memory traces are retained.

  12. Grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents.

    Science.gov (United States)

    Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang

    2015-01-01

    Previous studies have shown brain reorganization after early auditory deprivation. However, changes of grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted the grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within the auditory and visual systems, as well as reduced connectivity between the language and visual systems. Notably, significantly increased brain connectivity was found between the auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of the auditory input in prelingually deaf adolescents, especially between the auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within the language and visual systems in prelingually deaf adolescents.
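
    Sparse inverse covariance estimation of the kind described can be approximated with off-the-shelf tools. The sketch below applies scikit-learn's graphical lasso to simulated grey matter volumes for 14 regions of interest; the data and regularization strength are placeholders, not values from the study.

```python
# Minimal sketch of sparse inverse covariance estimation (SICE) for grey matter
# connectivity across 14 regions of interest. Data and regularization strength
# are simulated placeholders, not values from the study.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
n_subjects, n_rois = 16, 14
# Simulated grey matter volumes (subjects x ROIs), with one correlated ROI pair.
volumes = rng.standard_normal((n_subjects, n_rois))
volumes[:, 1] = 0.8 * volumes[:, 0] + 0.6 * rng.standard_normal(n_subjects)
volumes = (volumes - volumes.mean(axis=0)) / volumes.std(axis=0)

model = GraphicalLasso(alpha=0.2, max_iter=200).fit(volumes)
precision = model.precision_  # sparse inverse covariance matrix

# Non-zero off-diagonal entries are treated as grey matter "connections".
connections = np.abs(precision) > 1e-6
np.fill_diagonal(connections, False)
print(f"Estimated connections: {connections.sum() // 2}")
```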

  13. DESCRIPTION OF BRAINSTEM AUDITORY EVOKED RESPONSES (AIR AND BONE CONDUCTION) IN CHILDREN WITH NORMAL HEARING

    Directory of Open Access Journals (Sweden)

    A. V. Pashkov

    2014-01-01

    Diagnosis of hearing level in small children with conductive hearing loss associated with congenital craniofacial abnormalities, particularly with agenesis of the external ear and external auditory meatus, is a pressing issue. Conventional methods of assessing hearing in the first years of life, i.e. registration of brainstem auditory evoked responses to acoustic stimuli in the event of air conduction, do not give an indication of the auditory analyzer’s condition due to potential conductive hearing loss in these patients. This study aimed to assess the potential for diagnosing the auditory analyzer’s function by registering brainstem auditory evoked responses (BAERs) to acoustic stimuli transmitted by means of a bone vibrator. The study involved 17 children aged 3–10 years with normal hearing. We compared parameters of the registered brainstem auditory evoked responses (peak V) depending on the type of stimulus transmission (air/bone) in children with normal hearing. The data on thresholds of the BAERs registered to acoustic stimuli in the event of air and bone conduction obtained in this study are comparable; hearing thresholds in the event of acoustic stimulation by means of a bone vibrator correlate with the results of the BAERs registered to the stimuli transmitted by means of air conduction earphones (r = 0.9). The high correlation of thresholds of BAERs to the stimuli transmitted by means of a bone vibrator with thresholds of BAERs registered when air conduction earphones were used helps to assess the auditory analyzer’s condition in patients with any form of conductive hearing loss.
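
    The reported agreement between air- and bone-conduction BAER thresholds (r = 0.9) is a plain Pearson correlation; a sketch of that computation on hypothetical threshold data for 17 children is shown below.

```python
# Minimal sketch: Pearson correlation between BAER thresholds obtained with
# air-conduction earphones and with a bone vibrator. Values are hypothetical.
import numpy as np
from scipy.stats import pearsonr

air_db = np.array([20, 25, 20, 30, 25, 20, 35, 25, 30, 20, 25, 30, 20, 25, 30, 35, 25])
bone_db = np.array([25, 25, 20, 35, 25, 25, 30, 25, 30, 20, 30, 30, 25, 25, 35, 35, 25])

r, p_value = pearsonr(air_db, bone_db)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```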

  14. A loudspeaker-based room auralization system for auditory research

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel

    to systematically study the signal processing of realistic sounds by normal-hearing and hearing-impaired listeners, a flexible, reproducible and fully controllable auditory environment is needed. A loudspeaker-based room auralization (LoRA) system was developed in this thesis to provide virtual auditory...... in reverberant environments. Each part of the early incoming sound to the listener was auralized with either higher-order Ambisonic (HOA) or using a single loudspeaker. The late incoming sound was auralized with a specific algorithm in order to provide a diffuse reverberation with minimal coloration artifacts...... assessed the impact of the auralization technique used for the early incoming sound (HOA or single loudspeaker) on speech intelligibility. A listening test showed that speech intelligibility experiments can be reliably conducted with the LoRA system with both techniques. The second evaluation investigated...

  15. Normal time course of auditory recognition in schizophrenia, despite impaired precision of the auditory sensory ("echoic") memory code.

    Science.gov (United States)

    March, L; Cienfuegos, A; Goldbloom, L; Ritter, W; Cowan, N; Javitt, D C

    1999-02-01

    Prior studies have demonstrated impaired precision of processing within the auditory sensory memory (ASM) system in schizophrenia. This study used auditory backward masking to evaluate the degree to which such deficits resulted from impaired overall precision versus premature decay of information within the short-term auditory store. ASM performance was evaluated in 14 schizophrenic participants and 16 controls. Schizophrenic participants were severely impaired in their ability to match tones following delay. However, when no-mask performance was equated across participants, schizophrenic participants were no more susceptible to the effects of backward maskers than were controls. Thus, despite impaired precision of ASM performance, schizophrenic participants showed no deficits in the time course over which short-term representations could be used within the ASM system.

  16. Auditory interfaces: The human perceiver

    Science.gov (United States)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  17. A Persian version of the sustained auditory attention capacity test and its results in normal children

    Directory of Open Access Journals (Sweden)

    Sanaz Soltanparast

    2013-03-01

    Background and Aim: Sustained attention refers to the ability to maintain attention on target stimuli over a sustained period of time. This study was conducted to develop a Persian version of the sustained auditory attention capacity test and to study its results in normal children. Methods: To develop the Persian version of the sustained auditory attention capacity test, speech stimuli were used, as in the original version. The speech stimuli consisted of one hundred monosyllabic words, formed by the random repetition of the words of a 21-word monosyllabic list, in which the target word was presented 20 times. The test was carried out at a comfortable hearing level using binaural, diotic presentation on 46 normal children of 7 to 11 years of age of both genders. Results: A significant association with age was found for the average impulsiveness error score (p=0.004) and the total score of the sustained auditory attention capacity test (p=0.005). No significant association with age was found for the average inattention error score or the attention reduction span index. Gender did not have a significant impact on the various indicators of the test. Conclusion: The results of this test in a group of normal-hearing children confirmed its ability to measure sustained auditory attention capacity through speech stimuli.
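
    Scoring for a sustained auditory attention task of this kind typically separates inattention errors (missed targets) from impulsiveness errors (responses to non-targets). The sketch below shows one way such scores could be computed; the target word and scoring rules are illustrative assumptions, not the published test's scoring manual.

```python
# Minimal sketch of scoring a sustained auditory attention task:
# inattention errors = missed target words, impulsiveness errors = responses
# to non-target words. Target word and rules are illustrative assumptions.
def score_attention_test(words, responses, target="no"):
    inattention = sum(1 for w, r in zip(words, responses) if w == target and not r)
    impulsiveness = sum(1 for w, r in zip(words, responses) if w != target and r)
    return {"inattention": inattention,
            "impulsiveness": impulsiveness,
            "total_score": len(words) - inattention - impulsiveness}

# Six-item toy sequence; True = the child responded to that word.
words = ["sun", "no", "cat", "no", "dog", "no"]
responses = [False, True, True, False, False, True]
print(score_attention_test(words, responses))
# {'inattention': 1, 'impulsiveness': 1, 'total_score': 4}
```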

  18. Time course of auditory streaming: Do CI users differ from normal-hearing listeners?

    Directory of Open Access Journals (Sweden)

    Martin Böckmann-Barthel

    2014-07-01

    In a complex acoustical environment with multiple sound sources the auditory system uses streaming as a tool to organize the incoming sounds in one or more streams depending on the stimulus parameters. Streaming is commonly studied by alternating sequences of signals. These are often tones with different frequencies. The present study investigates stream segregation in cochlear implant (CI) users, where hearing is restored by electrical stimulation of the auditory nerve. CI users listened to 30-s long sequences of alternating A and B harmonic complexes at four different fundamental frequency separations, ranging from 2 to 14 semitones. They had to indicate, as promptly as possible after sequence onset, whether they perceived one stream or two streams and, in addition, any changes of the percept throughout the rest of the sequence. The conventional view is that the initial percept is always that of a single stream which may after some time change to a percept of two streams. This general build-up hypothesis has recently been challenged on the basis of a new analysis of data of normal-hearing listeners which showed a build-up response only for an intermediate frequency separation. Using the same experimental paradigm and analysis, the present study found that the results of CI users agree with those of the normal-hearing listeners: (i) the probability of the first decision to be a one-stream percept decreased and that of a two-stream percept increased as Δf increased, and (ii) a build-up was only found for 6 semitones. Only the time elapsed before the listeners made their first decision about the percept was prolonged as compared to normal-hearing listeners. The similarity in the data of the CI users and the normal-hearing listeners indicates that the quality of stream formation is similar in these groups of listeners.
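
    The analysis described above reduces to tabulating, for each fundamental-frequency separation, how often the first decision was a two-stream percept and whether the percept later switched. The sketch below illustrates the first of these tabulations on made-up trial data; it is a generic illustration, not the authors' analysis.

```python
# Minimal sketch: per fundamental-frequency separation, tabulate the proportion
# of trials whose first decision was "two streams". Trial data are illustrative.
from collections import defaultdict

# (delta_f_in_semitones, first_decision) for a handful of hypothetical trials.
trials = [(2, "one"), (2, "one"), (2, "two"),
          (6, "one"), (6, "two"), (6, "two"),
          (10, "two"), (10, "two"), (10, "one"),
          (14, "two"), (14, "two"), (14, "two")]

counts = defaultdict(lambda: {"one": 0, "two": 0})
for delta_f, decision in trials:
    counts[delta_f][decision] += 1

for delta_f in sorted(counts):
    n = sum(counts[delta_f].values())
    p_two = counts[delta_f]["two"] / n
    print(f"{delta_f:>2} semitones: P(first decision = two streams) = {p_two:.2f}")
```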

  19. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    .... This thesis presents an integrated auditory system for a humanoid robot, currently under development, that will, among other things, learn to localize normal, everyday sounds in a realistic environment...

  20. Functional mapping of the primate auditory system.

    Science.gov (United States)

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  1. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
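
    The compensation measure described, the produced change in first-formant frequency relative to the imposed perturbation, reduces to a simple ratio. The sketch below shows that computation on hypothetical formant values; the numbers and averaging window are assumptions, not data from the study.

```python
# Minimal sketch: express the compensatory F1 response as a percentage of the
# imposed F1 perturbation. All formant values here are hypothetical.
import numpy as np

baseline_f1_hz = 550.0   # speaker's unperturbed F1 for the vowel
perturbation_hz = 80.0   # upward shift applied to the auditory feedback
# Produced F1 averaged over a late window of perturbed trials:
produced_f1_hz = np.mean([535.0, 538.0, 532.0, 540.0])

# Compensation opposes the perturbation, so it appears as a downward F1 change.
response_hz = produced_f1_hz - baseline_f1_hz
compensation_pct = 100.0 * (-response_hz) / perturbation_hz
print(f"Compensation: {compensation_pct:.0f}% of the perturbation")
```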

  2. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David Pérez-González

    2014-02-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons show adaptation to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  3. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

    Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems. Numerous studies have investigated their effects, and the auditory system is affected as well. The aim of this article was to review the research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library and SID search engines with the "auditory" and "omega-3" keywords and read textbooks on this subject, covering the period from 1970 to 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can cause harmful effects on fetal and infant growth and on the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.

  4. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
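
    The multivoxel pattern analysis mentioned above is commonly implemented as cross-validated classification of voxel activity patterns. The sketch below is a generic illustration of that idea on simulated data; it is not the authors' pipeline, and all sizes and parameters are assumptions.

```python
# Minimal sketch of multivoxel pattern analysis (MVPA): cross-validated
# classification of which tone was maintained, from simulated voxel patterns.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials_per_tone, n_voxels = 40, 200
# Two maintained tones, each adding a slightly different mean pattern to noise.
pattern_a = rng.standard_normal(n_voxels) * 0.3
pattern_b = rng.standard_normal(n_voxels) * 0.3
X = np.vstack([rng.standard_normal((n_trials_per_tone, n_voxels)) + pattern_a,
               rng.standard_normal((n_trials_per_tone, n_voxels)) + pattern_b])
y = np.array([0] * n_trials_per_tone + [1] * n_trials_per_tone)

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```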

  5. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  6. Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.

    Directory of Open Access Journals (Sweden)

    Francisco Jose Alvarez

    Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study is to study the effect of perinatal asphyxia on the auditory pathway by recording auditory brain responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1.3 day-old piglets by clamping both carotid arteries with vascular occluders for 30 minutes and lowering the fraction of inspired oxygen. We compared the auditory brain responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during the 6 h after the HI injury. Auditory brain responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.

  7. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical

  8. Normalization of auditory evoked potential and visual evoked potential in patients with idiot savant.

    Science.gov (United States)

    Chen, X; Zhang, M; Wang, J; Lou, F; Liang, J

    1999-03-01

    To investigate the variations of auditory evoked potentials (AEP) and visual evoked potentials (VEP) in patients with idiot savant (IS) syndrome. Both AEP and VEP were recorded from 7 patients with IS syndrome, 21 mentally retarded (MR) children without the syndrome, and 21 normal age-matched controls, using a Dantec Concerto SEEG-16 BEAM instrument. Both AEP and VEP of the MR group showed significantly longer latencies (including P1 and P2 latencies of AEP) than those of the normal controls, whereas the patients with idiot savant syndrome presented normalized AEP and VEP.

  9. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    The basic structures involved in the development of auditory function, and consequently in language acquisition, are directed by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors: if favorable, these orient development in the proper direction, leading it towards normality; if unfavorable, they deviate it from its physiological course. Early sensorial experience during the foetal period (i.e. the intrauterine noise floor, sounds coming from the outside and attenuated by the uterine filter, particularly the mother’s voice, and the modifications induced by it at the cochlear level) represents the first example of programming in one of the earliest critical periods in development of the auditory system. This review will examine the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular it focuses on the following points: the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system, from the basilar membrane to the cortex; the involvement of these phenomena in language acquisition and in the perception of language communicative intention after birth; and the consequences of auditory deprivation in critical periods of auditory development (i.e. premature interruption of foetal life).

  10. Rapid Auditory System Adaptation Using a Virtual Auditory Environment

    Directory of Open Access Journals (Sweden)

    Gaëtan Parseihian

    2011-10-01

    Various studies have highlighted plasticity of the auditory system induced by visual stimuli, which limits the trained field of perception. The aim of the present study is to investigate auditory system adaptation using an audio-kinesthetic platform. Participants were placed in a Virtual Auditory Environment allowing the association of the physical position of a virtual sound source with an alternate set of acoustic spectral cues, or Head-Related Transfer Function (HRTF), through the use of a tracked ball manipulated by the subject. This set-up has the advantage of not being limited to the visual field while also offering a natural perception-action coupling through the constant awareness of one's hand position. Adaptation to non-individualized HRTFs was realized through a spatial search game application. A total of 25 subjects participated, consisting of subjects presented with modified cues using non-individualized HRTFs and a control group using individually measured HRTFs to account for any learning effect due to the game itself. The training game lasted 12 minutes and was repeated over 3 consecutive days. Adaptation effects were measured with repeated localization tests. Results showed a significant performance improvement for vertical localization and a significant reduction in the front/back confusion rate after 3 sessions.

  11. Abnormal Auditory Gain in Hyperacusis: Investigation with a Computational Model

    Directory of Open Access Journals (Sweden)

    Peter U. Diehl

    2015-07-01

    Hyperacusis is a frequent auditory disorder that is characterized by abnormal loudness perception, in which sounds of relatively normal volume are perceived as too loud or even painfully loud. As hyperacusis patients show decreased loudness discomfort levels (LDLs) and steeper loudness growth functions, it has been hypothesized that hyperacusis might be caused by an increase in neuronal response gain in the auditory system. Moreover, since about 85% of hyperacusis patients also experience tinnitus, the conditions might be caused by a common mechanism. However, the mechanisms that give rise to hyperacusis have remained unclear. Here we have used a computational model of the auditory system to investigate candidate mechanisms for hyperacusis. Assuming that perceived loudness is proportional to the summed activity of all auditory nerve fibers, the model was tuned to reproduce normal loudness perception. We then evaluated a variety of potential hyperacusis gain mechanisms by determining their effects on model equal-loudness contours and comparing the results to the LDLs of hyperacusis patients with normal hearing thresholds. Hyperacusis was best accounted for by an increase in nonlinear gain in the central auditory system. Good fits to the average patient LDLs were obtained for a general increase in gain that affected all frequency channels to the same degree, and also for a frequency-specific gain increase in the high-frequency range. Moreover, the gain needed to be applied after subtraction of spontaneous activity of the auditory nerve, which is in contrast to current theories of tinnitus generation based on amplification of spontaneous activity. Hyperacusis and tinnitus might therefore be caused by different changes in neuronal processing in the central auditory system.
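
    The modeling assumption spelled out above, loudness proportional to summed auditory nerve activity with a central gain applied after subtraction of spontaneous activity, can be written in a few lines. The sketch below is a toy version with made-up firing rates and gain settings, not the published model.

```python
# Toy version of the central-gain idea: loudness is the summed auditory-nerve
# activity, with a nonlinear gain applied *after* subtracting spontaneous
# firing. Rates, gain, and exponent are illustrative, not the model's values.
import numpy as np

def perceived_loudness(driven_rates, spont_rates, gain=1.0, exponent=1.0):
    """Sum over fibers of gain * (driven - spontaneous)**exponent."""
    evoked = np.clip(driven_rates - spont_rates, 0.0, None)
    return np.sum(gain * evoked ** exponent)

rng = np.random.default_rng(3)
n_fibers = 1000
spont = rng.uniform(0.0, 60.0, n_fibers)            # spikes/s at rest
driven = spont + rng.uniform(0.0, 150.0, n_fibers)  # response to a moderate sound

normal = perceived_loudness(driven, spont)
hyperacusis = perceived_loudness(driven, spont, exponent=1.3)
print(f"Loudness ratio (hyperacusis / normal): {hyperacusis / normal:.2f}")
```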

  12. Comparison of Social Interaction between Cochlear-Implanted Children with Normal Intelligence Undergoing Auditory Verbal Therapy and Normal-Hearing Children: A Pilot Study.

    Science.gov (United States)

    Monshizadeh, Leila; Vameghi, Roshanak; Sajedi, Firoozeh; Yadegari, Fariba; Hashemi, Seyed Basir; Kirchem, Petra; Kasbi, Fatemeh

    2018-04-01

    A cochlear implant is a device that helps hearing-impaired children by transmitting sound signals to the brain and helping them improve their speech, language, and social interaction. Although various studies have investigated the different aspects of speech perception and language acquisition in cochlear-implanted children, little is known about their social skills, particularly among Persian-speaking cochlear-implanted children. Considering the growing number of cochlear implants being performed in Iran and the increasing importance of developing near-normal social skills as one of the ultimate goals of cochlear implantation, this study was performed to compare social interaction between Iranian cochlear-implanted children who have undergone rehabilitation (auditory verbal therapy) after surgery and normal-hearing children. This descriptive-analytical study compared the social interaction level of 30 children with normal hearing and 30 with cochlear implants, selected by convenience sampling. The Raven test was administered to both groups to ensure a normal intelligence quotient. The social interaction status of both groups was evaluated using the Vineland Adaptive Behavior Scale, and statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) version 21. After controlling for age as a covariate, no significant difference was observed between the social interaction scores of the two groups (p > 0.05). In addition, social interaction had no correlation with sex in either group. Cochlear implantation followed by auditory verbal rehabilitation helps children with sensorineural hearing loss to have normal social interactions, regardless of their sex.

  13. N-Back auditory test performance in normal individuals

    Directory of Open Access Journals (Sweden)

    Vanessa Tomé Gonçalves

    The working memory construct refers to the capacity to maintain information for a limited time. Objectives: To devise stimuli and adapt the 5-back test and to verify the effect of age in normal Brazilian individuals. Methods: 31 healthy adults (15 young adults and 16 older adults) were evaluated with batteries of auditory stimuli to verify the inter-group differences (age effect) in working memory span, total correct answers and intrusions, and the intra-group effect of type of stimulus. Results: There was no intra-group stimulus effect. Individuals from both groups processed di- and tri-syllables similarly. No difference between groups (no age effect) was observed for any N-Back parameter (total score, span, number of intrusions), in either di- or tri-syllable presentation. Conclusion: The processing capacity of 5 elements in phonological working memory was not affected by age.

  14. Congenital Deafness Reduces, But Does Not Eliminate Auditory Responsiveness in Cat Extrastriate Visual Cortex.

    Science.gov (United States)

    Land, Rüdiger; Radecke, Jan-Ole; Kral, Andrej

    2018-04-01

    Congenital deafness not only affects the development of the auditory cortex, but also the interrelation between the visual and auditory system. For example, congenital deafness leads to visual modulation of the deaf auditory cortex in the form of cross-modal plasticity. Here we asked whether congenital deafness additionally affects auditory modulation in the visual cortex. We demonstrate that auditory activity, which is normally present in the lateral suprasylvian visual areas in normal hearing cats, can also be elicited by electrical activation of the auditory system with cochlear implants. We then show that in adult congenitally deaf cats auditory activity in this region was reduced when tested with cochlear implant stimulation. However, the change in this area was small and auditory activity was not completely abolished despite years of congenital deafness. The results document that congenital deafness leads not only to changes in the auditory cortex but also affects auditory modulation of visual areas. However, the results further show a persistence of fundamental cortical sensory functional organization despite congenital deafness. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Multislice CT of the auditory ossicles and ossicular ligaments. Delineation of normal anatomy and diagnosis of congenital anomaly

    International Nuclear Information System (INIS)

    Matsumoto, Shigeru; Tozaki, Hiromitu; Miyazaki, Hidemi

    2001-01-01

    By using four detector rows with 0.5 mm collimation, high resolution isotropic voxel data throughout the middle ear can be obtained with Multislice Helical CT (MSCT). The purpose of this study is to evaluate the usefulness of MSCT in demonstrating the auditory ossicles and ossicular ligaments and in the diagnosis of congenital ossicular anomalies. Thirty normal middle ears and 23 ears of 20 patients with suspected congenital ossicular anomalies were examined. Axial images and multiplanar images were reconstructed. In the normal group, the images were evaluated based on scores for the visualization of the anatomical structure of the auditory ossicles and ossicular ligaments. In the group with anomalies, the findings suggestive of ossicular anomalies were reviewed and their prevalence was estimated. Visualization of the auditory ossicles and ossicular ligaments was 98.3%-100% and 78.3%-100%, respectively. Congenital ossicular anomalies were detected in 20 ears (87.0%). MSCT is an accurate method for demonstrating minute and complicated 3D structures of the middle ear, and is found to be a technique of choice for diagnosis of ossicular anomalies. (author)

  16. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors 0270-6474/14/3415135-04$15.00/0.

  17. Maturation of the auditory system in clinically normal puppies as reflected by the brain stem auditory-evoked potential wave V latency-intensity curve and rarefaction-condensation differential potentials.

    Science.gov (United States)

    Poncelet, L C; Coppens, A G; Meuris, S I; Deltenre, P F

    2000-11-01

    To evaluate auditory maturation in puppies. Ten clinically normal Beagle puppies. Puppies were examined repeatedly from days 11 to 36 after birth (8 measurements). Click-evoked brain stem auditory-evoked potentials (BAEP) were obtained in response to rarefaction and condensation click stimuli from 90 dB normal hearing level to wave V threshold, using steps of 10 dB. Responses were added, providing an equivalent to alternate polarity clicks, and subtracted, providing the rarefaction-condensation differential potential (RCDP). Steps of 5 dB were used to determine thresholds of RCDP and wave V. Slope of the low-intensity segment of the wave V latency-intensity curve was calculated. The intensity range at which RCDP could not be recorded (i.e., pre-RCDP range) was calculated by subtracting the threshold of wave V from the threshold of RCDP. Slope of the wave V latency-intensity curve low-intensity segment evolved with age, changing from (mean +/- SD) -90.8 +/- 41.6 to -27.8 +/- 4.1 micros/dB. Similar results were obtained from days 23 through 36. The pre-RCDP range diminished as puppies became older, decreasing from 40.0 +/- 7.5 to 20.5 +/- 6.4 dB. Changes in slope of the latency-intensity curve with age suggest enlargement of the audible range of frequencies toward high frequencies up to the third week after birth. Decrease in the pre-RCDP range may indicate an increase of the audible range of frequencies toward low frequencies. Age-related reference values will assist clinicians in detecting hearing loss in puppies.
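
    Both derived measures in this report are simple computations: the slope of the low-intensity segment of the wave V latency-intensity curve is a linear fit (in microseconds per dB), and the pre-RCDP range is a difference of two thresholds. The sketch below illustrates them on hypothetical values, not data from the study.

```python
# Minimal sketch of the two derived BAEP measures: the slope (µs/dB) of the
# low-intensity segment of the wave V latency-intensity curve, and the
# pre-RCDP range (RCDP threshold minus wave V threshold). Values are hypothetical.
import numpy as np

# Low-intensity segment: stimulus level (dB normal hearing level) vs. wave V latency (µs).
levels_db = np.array([20, 30, 40, 50])
latencies_us = np.array([4600, 4310, 4020, 3750])

slope_us_per_db, _intercept = np.polyfit(levels_db, latencies_us, 1)
print(f"Wave V latency-intensity slope: {slope_us_per_db:.1f} µs/dB")

rcdp_threshold_db = 45
wave_v_threshold_db = 20
print(f"Pre-RCDP range: {rcdp_threshold_db - wave_v_threshold_db} dB")
```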

  18. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    Goycoolea, Marcos; Mena, Ismael; Neubauer, Sonia

    2004-01-01

    Objectives. 1. To determine which areas of the cerebral cortex are activated when stimulating the left ear with pure tones, and what type of stimulation occurs (e.g. excitatory or inhibitory) in these different areas. 2. To use this information as an initial step to develop a normal functional database for future studies. 3. To try to determine if there is a biological substrate to the process of recalling previous auditory perceptions and, if possible, suggest a locus for auditory memory. Method. Brain perfusion single photon emission computerized tomography (SPECT) evaluation was conducted: 1-2) using auditory stimulation with pure tones in 4 volunteers with normal hearing; 3) in a patient with bilateral profound hearing loss who had auditory perception of previous musical experiences, injected with Tc99m HMPAO while she was having the sensation of hearing a well-known melody. Results. Both in the patient with auditory hallucinations and in the normal controls stimulated with pure tones, there was a statistically significant increase in perfusion in Brodmann's area 39, more intense on the right side (right to left p < 0.05). With a lesser intensity there was activation in the adjacent area 40, and there was intense activation also in the executive frontal cortex areas 6, 8, 9, and 10 of Brodmann. There was also activation of area 7 of Brodmann, an audio-visual association area, more marked on the right side in the patient and the normal stimulated controls. In the subcortical structures there was also marked activation in the patient with hallucinations in both lentiform nuclei, thalami and caudate nuclei, more intense in the right hemisphere (5, 4.7 and 4.2 S.D. above the normal mean, respectively) than in the left hemisphere (5, 3.3, and 3 S.D. above the normal mean, respectively). Similar findings were observed in normal controls. Conclusions. After auditory stimulation with pure tones in the left ear of normal female volunteers, there is bilateral activation of area 39

  19. Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners.

    Science.gov (United States)

    Ruggles, Dorea; Shinn-Cunningham, Barbara

    2011-06-01

    Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.

  20. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults who showed a strong visual preference to unfamiliar stimuli only. The similar degree of auditory responses in children with hearing loss to those from children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) Describe the pattern of modality preferences reported in young children without hearing loss; (2) Recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) Understand the role of familiarity in modality preferences in children with and without hearing loss.

  2. Listening to another sense: somatosensory integration in the auditory system.

    Science.gov (United States)

    Wu, Calvin; Stefanescu, Roxana A; Martel, David T; Shore, Susan E

    2015-07-01

    Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders.

  3. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

    … investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifically, listeners without binaural pitch sensation showed signs of retrocochlear disorders. Despite adverse effects of reduced frequency selectivity on binaural pitch perception, the ability to accurately process the temporal fine structure (TFS) of sounds at the output of the cochlear filters was found to be essential for perceiving binaural pitch. Monaural TFS processing also played a major and independent role … that the use of spectral cues remained plausible. Simulations of auditory-nerve representations of the complex tones further suggested that a spectrotemporal mechanism combining precise timing information across auditory channels might best account for the behavioral data. Overall, this work provides insights …

  4. Fitting and verification of frequency modulation systems on children with normal hearing.

    Science.gov (United States)

    Schafer, Erin C; Bryant, Danielle; Sanders, Katie; Baldus, Nicole; Algier, Katherine; Lewis, Audrey; Traber, Jordan; Layden, Paige; Amin, Aneeqa

    2014-06-01

    Several recent investigations support the use of frequency modulation (FM) systems in children with normal hearing and auditory processing or listening disorders such as those diagnosed with auditory processing disorders, autism spectrum disorders, attention-deficit hyperactivity disorder, Friedreich ataxia, and dyslexia. The American Academy of Audiology (AAA) published suggested procedures, but these guidelines do not cite research evidence to support the validity of the recommended procedures for fitting and verifying nonoccluding open-ear FM systems on children with normal hearing. Documenting the validity of these fitting procedures is critical to maximize the potential FM-system benefit in the above-mentioned populations of children with normal hearing and those with auditory-listening problems. The primary goal of this investigation was to determine the validity of the AAA real-ear approach to fitting FM systems on children with normal hearing. The secondary goal of this study was to examine speech-recognition performance in noise and loudness ratings without and with FM systems in children with normal hearing sensitivity. A two-group, cross-sectional design was used in the present study. Twenty-six typically functioning children, ages 5-12 yr, with normal hearing sensitivity participated in the study. Participants used a nonoccluding open-ear FM receiver during laboratory-based testing. Participants completed three laboratory tests: (1) real-ear measures, (2) speech recognition performance in noise, and (3) loudness ratings. Four real-ear measures were conducted to (1) verify that measured output met prescribed-gain targets across the 1000-4000 Hz frequency range for speech stimuli, (2) confirm that the FM-receiver volume did not exceed predicted uncomfortable loudness levels, and (3 and 4) measure changes to the real-ear unaided response when placing the FM receiver in the child's ear. After completion of the fitting, speech recognition in noise at a -5

  5. Reduced auditory efferent activity in childhood selective mutism.

    Science.gov (United States)

    Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava

    2004-06-01

    Selective mutism is a psychiatric disorder of childhood characterized by consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emission, suppression of transient evoked otoacoustic emission, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.

  6. A Telehealth System for Remote Auditory Evoked Potential Monitoring

    OpenAIRE

    Millan, Jorge; Yunda, Leonardo

    2013-01-01

    A portable, Internet-based EEG/Auditory Evoked Potential (AEP) monitoring system was developed for remote electrophysiological studies during sleep. The system records EEG/AEP simultaneously at the subject's home for increased comfort and flexibility. The system provides simultaneous recording and remote viewing of EEG, EMG and EOG waves and allows on-line averaging of auditory evoked potentials. The design allows the recording of all major AEP components (brainstem, middle and late latency E...

  7. Dichotic auditory-verbal memory in adults with cerebro-vascular accident

    Directory of Open Access Journals (Sweden)

    Samaneh Yekta

    2014-01-01

    Background and Aim: Cerebrovascular accident is a neurological disorder that involves the central nervous system. Studies have shown that it affects the outputs of behavioral auditory tests such as the dichotic auditory-verbal memory test. The purpose of this study was to compare the results of this memory test between patients with cerebrovascular accident and normal subjects. Methods: This cross-sectional study was conducted on 20 patients with cerebrovascular accident aged 50-70 years and 20 controls matched for age and gender in Emam Khomeini Hospital, Tehran, Iran. The dichotic auditory-verbal memory test was performed on each subject. Results: The mean score in the two groups was significantly different (p<0.0001). The results indicated that the right-ear score was significantly greater than the left-ear score in normal subjects (p<0.0001) and in patients with right hemisphere lesion (p<0.0001). The right-ear and left-ear scores were not significantly different in patients with left hemisphere lesion (p=0.0860). Conclusion: Among other methods, the dichotic auditory-verbal memory test is a useful test for assessing the central auditory nervous system of patients with cerebrovascular accident. It appears to be sensitive to the damage that occurs following temporal lobe strokes.

  8. Auditory signal design for automatic number plate recognition system

    NARCIS (Netherlands)

    Heydra, C.G.; Jansen, R.J.; Van Egmond, R.

    2014-01-01

    This paper focuses on the design of an auditory signal for the Automatic Number Plate Recognition system of Dutch national police. The auditory signal is designed to alert police officers of suspicious cars in their proximity, communicating priority level and location of the suspicious car and

  9. Thresholds of Tone Burst Auditory Brainstem Responses for Infants and Young Children with Normal Hearing in Taiwan

    Directory of Open Access Journals (Sweden)

    Chung-Yi Lee

    2007-10-01

    Conclusion: Based on the published research and our study, we suggest setting the normal criterion levels of the tone burst auditory brainstem response to air-conducted tones for infants and young children in Taiwan at 30 dB nHL for 500 and 1000 Hz, and 25 dB nHL for 2000 and 4000 Hz.

  10. Empathy and the somatotopic auditory mirror system in humans

    NARCIS (Netherlands)

    Gazzola, Valeria; Aziz-Zadeh, Lisa; Keysers, Christian

    2006-01-01

    How do we understand the actions of other individuals if we can only hear them? Auditory mirror neurons respond both while monkeys perform hand or mouth actions and while they listen to sounds of similar actions [1, 2]. This system might be critical for auditory action understanding and language

  11. Analysis of the Auditory Feedback and Phonation in Normal Voices.

    Science.gov (United States)

    Arbeiter, Mareike; Petermann, Simon; Hoppe, Ulrich; Bohr, Christopher; Doellinger, Michael; Ziethe, Anke

    2018-02-01

    The aim of this study was to investigate the auditory feedback mechanisms and voice quality during phonation in response to a spontaneous pitch change in the auditory feedback: does the pitch shift reflex (PSR) change voice pitch and voice quality? Quantitative and qualitative voice characteristics were analyzed during the PSR. Twenty-eight healthy subjects underwent transnasal high-speed videoendoscopy (HSV) at 8000 fps during sustained phonation of [a]. While phonating, the subjects heard their own voice pitched up by 700 cents (the interval of a fifth), lasting 300 milliseconds, in their auditory feedback. The electroencephalography (EEG), acoustic voice signal, electroglottography (EGG), and HSV recordings were analyzed to statistically compare feedback mechanisms between the pitched and unpitched conditions of the phonation paradigm. Furthermore, quantitative and qualitative voice characteristics were analyzed. The PSR was successfully detected in the signals of all experimental tools (EEG, EGG, acoustic voice signal, HSV). A significant increase of the perturbation measures and an increase of the values of the acoustic parameters during the PSR were observed, especially for the audio signal. The auditory feedback mechanism therefore appears to control not only voice pitch but also aspects of voice quality.

  12. Brainstem auditory evoked response characteristics in normal-hearing subjects with chronic tinnitus and in non-tinnitus group

    Directory of Open Access Journals (Sweden)

    Shadman Nemati

    2014-06-01

    Background and Aim: While most people with tinnitus have some degree of hearing impairment, a small percentage of patients admitted to ear, nose and throat clinics or hearing evaluation centers complain of tinnitus despite having normal hearing thresholds. This study was performed to better understand the probable causes of tinnitus and to investigate possible changes in auditory brainstem function in normal-hearing patients with chronic tinnitus. Methods: In this comparative cross-sectional, descriptive and analytic study, 52 ears (26 with and 26 without tinnitus) were examined. Components of the auditory brainstem response (ABR), including wave latencies and wave amplitudes, were determined in the two groups and analyzed using appropriate statistical methods. Results: The mean differences between the absolute latencies of waves I, III and V were less than 0.1 ms between the two groups and were not statistically significant. Also, the interpeak latency values of waves I-III, III-V and I-V in the two groups showed no significant difference. Only the V/I amplitude ratio in the tinnitus group was significantly higher (p=0.04). Conclusion: The changes observed in the amplitude of the waves, especially the later ones, can be considered an indication of plastic changes in neuronal activity and of its possible role in the generation of tinnitus in normal-hearing patients.

  13. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal-processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing … in a diagnostic rhyme test. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated. The front-end was fitted to individual listeners with cochlear hearing loss according to non-speech data, and speech data were obtained in the same listeners …
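
    The front-end/back-end split described in this record can be illustrated schematically: a nonlinear front-end turns the sound waveform into an internal representation, and a back-end decision stage compares that representation against stored templates. The sketch below is a generic illustration of that division, not the model developed in the thesis; the filter bank, envelope smoothing, compression stage, and correlation-based decision rule are simplified placeholders chosen only to make the structure concrete.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def front_end(signal, fs, centre_freqs=(250, 500, 1000, 2000, 4000)):
        """Toy auditory front-end: a bank of band-pass filters followed by
        half-wave rectification, low-pass envelope extraction, and compression.
        A stand-in for the nonlinear cochlear/envelope stages of a real model."""
        channels = []
        for fc in centre_freqs:
            sos = butter(2, [fc / np.sqrt(2), fc * np.sqrt(2)], btype="band", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)
            env_sos = butter(2, 150, btype="low", fs=fs, output="sos")
            env = sosfiltfilt(env_sos, np.maximum(band, 0.0))        # half-wave rectify + smooth
            channels.append(np.log1p(100.0 * np.maximum(env, 0.0)))  # crude compression
        return np.array(channels)                                    # shape: (channels, samples)

    def back_end(internal_rep, templates):
        """Toy back-end decision device: pick the stored template whose internal
        representation correlates best with the observed one."""
        scores = [np.corrcoef(internal_rep.ravel(), t.ravel())[0, 1] for t in templates]
        return int(np.argmax(scores))

    if __name__ == "__main__":
        fs = 16000
        noise = np.random.default_rng(0).standard_normal(fs)  # 1 s of placeholder input
        rep = front_end(noise, fs)
        print(rep.shape)                                       # (5, 16000)
    ```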

  14. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency.

  15. Insult-induced adaptive plasticity of the auditory system

    Directory of Open Access Journals (Sweden)

    Joshua R Gold

    2014-05-01

    The brain displays a remarkable capacity for both widespread and region-specific modifications in response to environmental challenges, with adaptive processes bringing about the reweighting of connections in neural networks putatively required for optimising performance and behaviour. As an avenue for investigation, studies centred around changes in the mammalian auditory system, extending from the brainstem to the cortex, have revealed a plethora of mechanisms that operate in the context of sensory disruption after insult, be it lesion-, noise trauma-, drug-, or age-related. Of particular interest in recent work are those aspects of auditory processing which, after sensory disruption, change at multiple – if not all – levels of the auditory hierarchy. These include changes in excitatory, inhibitory and neuromodulatory networks, consistent with theories of homeostatic plasticity; functional alterations in gene expression and in protein levels; as well as broader network processing effects with cognitive and behavioural implications. Nevertheless, substantial debate remains regarding which of these processes may only be sequelae of the original insult, and which may, in fact, be maladaptive, compelling further degradation of the organism's competence to cope with its disrupted sensory context. In this review, we aim to examine how the mammalian auditory system responds in the wake of particular insults, and to disambiguate how the changes that develop might underlie a correlated class of phantom disorders, including tinnitus and hyperacusis, which putatively are brought about through maladaptive neuroplastic disruptions to auditory networks governing the spatial and temporal processing of acoustic sensory information.

  16. The function of BDNF in the adult auditory system.

    Science.gov (United States)

    Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies

    2014-01-01

    The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low frequency sound to central auditory nuclei and higher-auditory centers. The role of BDNF in the mature inner ear is less understood. This is mainly due to the fact that constitutive BDNF mutant mice are postnatally lethal. Only in the last few years has the improved technology of performing conditional cell specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus will be put on the differential mechanisms in which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. The Effect of Learning Modality and Auditory Feedback on Word Memory: Cochlear-Implanted versus Normal-Hearing Adults.

    Science.gov (United States)

    Taitelbaum-Swead, Riki; Icht, Michal; Mama, Yaniv

    2017-03-01

    In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific, and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users, implanted between ages 1.7 and 4.5 yr, who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Following this, paired-sample t tests were used to evaluate the PE size (differences between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions. With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed comparable memory performance (and a similar PE) to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The

  18. Acquired auditory agnosia in childhood and normal sleep electroencephalography subsequently diagnosed as Landau-Kleffner syndrome: a report of three cases.

    Science.gov (United States)

    van Bogaert, Patrick; King, Mary D; Paquier, Philippe; Wetzburger, Catherine; Labasse, Catherine; Dubru, Jean-Marie; Deonna, Thierry

    2013-06-01

    We report three cases of Landau-Kleffner syndrome (LKS) in children (two females, one male) in whom diagnosis was delayed because the sleep electroencephalography (EEG) was initially normal. Case histories, including EEG and positron emission tomography findings and long-term outcome, were reviewed. Auditory agnosia occurred between the ages of 2 years and 3 years 6 months, after a period of normal language development. Initial awake and sleep EEG, recorded weeks to months after the onset of language regression, during a nap period in two cases and during a full night of sleep in the third case, was normal. Repeat EEG between 2 months and 2 years later showed epileptiform discharges during wakefulness that were strongly activated by sleep, with a pattern of continuous spike-waves during slow-wave sleep in two patients. Patients were diagnosed with LKS and treated with various antiepileptic regimens, including corticosteroids. One patient, in whom the EEG became normal on hydrocortisone, is making a significant recovery. The other two patients did not exhibit a sustained response to treatment and remained severely impaired. Sleep EEG may be normal in the early phase of acquired auditory agnosia. EEG should be repeated frequently in individuals in whom a firm clinical diagnosis is made to facilitate early treatment. © The Authors. Developmental Medicine & Child Neurology © 2012 Mac Keith Press.

  19. Reduced auditory processing capacity during vocalization in children with Selective Mutism.

    Science.gov (United States)

    Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair

    2007-02-01

    Because abnormal auditory efferent activity (AEA) is associated with auditory distortions during vocalization, we tested whether auditory processing is impaired during vocalization in children with Selective Mutism (SM). Participants were children with SM and abnormal AEA, children with SM and normal AEA, and normally speaking controls, who had to detect aurally presented target words embedded within word lists under two conditions: silence (single task), and while vocalizing (dual task). To ascertain the specificity of the auditory-vocal deficit, the effects of concurrent vocalizing were also examined during a visual task. Children with SM and abnormal AEA showed impaired auditory processing during vocalization relative to children with SM and normal AEA, and relative to control children. This impairment is specific to the auditory modality and does not reflect difficulties with dual tasks per se. The data extend previous findings suggesting that deficient auditory processing is involved in speech selectivity in SM.

  20. Anatomy, Physiology and Function of the Auditory System

    Science.gov (United States)

    Kollmeier, Birger

    The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles malleus, incus and stapes) and the inner ear (cochlea, which is connected to the three semicircular canals by the vestibule, which provides the sense of balance). The cochlea is connected to the brain stem via the eighth cranial nerve, i.e. the vestibulocochlear nerve or nervus statoacusticus. Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview of the anatomy of the auditory system is provided in Figure 1.

  1. Web-based auditory self-training system for adult and elderly users of hearing aids.

    Science.gov (United States)

    Vitti, Simone Virginia; Blasca, Wanderléia Quinhoneiro; Sigulem, Daniel; Torres Pisa, Ivan

    2015-01-01

    Adults and elderly users of hearing aids suffer psychosocial reactions as a result of hearing loss. Auditory rehabilitation is typically carried out with support from a speech therapist, usually in a clinical center. For these cases, there is a lack of computer-based self-training tools for minimizing the psychosocial impact of hearing deficiency. To develop and evaluate a web-based auditory self-training system for adult and elderly users of hearing aids. Two modules were developed for the web system: an information module based on guidelines for using hearing aids; and an auditory training module presenting a sequence of training exercises for auditory abilities along the lines of the auditory skill steps within auditory processing. We built a web system using the PHP programming language and a MySQL database, based on requirements surveyed through focus groups that were conducted by healthcare information technology experts. The web system was evaluated by speech therapists and hearing aid users. An initial sample of 150 patients at DSA/HRAC/USP was defined to apply the system, with the inclusion criteria that individuals should be over the age of 25 years, presently have hearing impairment, be a hearing aid user, have a computer and have internet experience. They were divided into two groups: a control group (G1) and an experimental group (G2). These patients were evaluated clinically using the HHIA for adults and the HHIE for elderly people, before and after system implementation. A third group (G3) was formed with users who were invited through social networks to give their opinions on using the system. A questionnaire evaluating hearing complaints was given to all three groups. The study hypothesis was that G2 would present greater auditory perception, higher satisfaction and fewer complaints than G1 after the auditory training. It was expected that G3 would have fewer complaints regarding use and acceptance of the system. The web system, which was named Sis

  2. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and firing rates in response to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single- and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), with greater suppression at 20 ms pairing intervals for single-unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience.

  3. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration-selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appear to contribute to selective responses.

  4. Engagement with the auditory processing system during targeted auditory cognitive training mediates changes in cognitive outcomes in individuals with schizophrenia.

    Science.gov (United States)

    Biagianti, Bruno; Fisher, Melissa; Neilands, Torsten B; Loewy, Rachel; Vinogradov, Sophia

    2016-11-01

    Individuals with schizophrenia who engage in targeted cognitive training (TCT) of the auditory system show generalized cognitive improvements. The high degree of variability in cognitive gains may be due to individual differences in the level of engagement of the underlying neural system target. 131 individuals with schizophrenia underwent 40 hours of TCT. We identified target engagement of auditory system processing efficiency by modeling subject-specific trajectories of auditory processing speed (APS) over time. Lowess analysis, mixed models repeated measures analysis, and latent growth curve modeling were used to examine whether APS trajectories were moderated by age and illness duration, and mediated improvements in cognitive outcome measures. We observed significant improvements in APS from baseline to 20 hours of training (initial change), followed by a flat APS trajectory (plateau) at subsequent time-points. Participants showed interindividual variability in the steepness of the initial APS change and in the APS plateau achieved and sustained between 20 and 40 hours. We found that participants who achieved the fastest APS plateau showed the greatest transfer effects to untrained cognitive domains. There is a significant association between an individual's ability to generate and sustain auditory processing efficiency and their degree of cognitive improvement after TCT, independent of baseline neurocognition. The APS plateau may therefore represent a behavioral measure of target engagement mediating treatment response. Future studies should examine the optimal plateau of auditory processing efficiency required to induce significant cognitive improvements, in the context of interindividual differences in neural plasticity and sensory system efficiency that characterize schizophrenia. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
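
    The trajectory shape described here (a rapid initial gain followed by a plateau) can be illustrated with a per-subject saturating-curve fit. The sketch below fits a simple exponential-saturation function to hypothetical APS measurements for one participant; it is only an illustration of the idea of subject-specific trajectories, not the lowess, mixed-model, or latent-growth-curve analyses actually used in the study, and all numbers are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def saturating_curve(hours, baseline, plateau, tau):
        """Processing speed improving from a baseline toward a plateau,
        with tau (in hours of training) setting how fast the plateau is reached."""
        return plateau + (baseline - plateau) * np.exp(-hours / tau)

    def fit_subject_trajectory(hours, aps):
        """Fit one subject's APS measurements; returns (baseline, plateau, tau)."""
        p0 = (aps[0], aps[-1], 10.0)  # crude starting values
        params, _ = curve_fit(saturating_curve, hours, aps, p0=p0, maxfev=10000)
        return params

    if __name__ == "__main__":
        hours = np.array([0, 10, 20, 30, 40], dtype=float)
        # Hypothetical APS values (e.g., ms per trial, lower = faster) for one participant.
        aps = np.array([900.0, 720.0, 640.0, 635.0, 630.0])
        baseline, plateau, tau = fit_subject_trajectory(hours, aps)
        print(f"baseline={baseline:.0f}, plateau={plateau:.0f}, tau={tau:.1f} h")
    ```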

  5. Validation of the Emotiv EPOC® EEG gaming system for measuring research quality auditory ERPs

    Science.gov (United States)

    Mousikou, Petroula; Mahajan, Yatin; de Lissa, Peter; Thie, Johnson; McArthur, Genevieve

    2013-01-01

    Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants – particularly children. Recently, a commercial gaming electroencephalography (EEG) system has been developed that is portable, inexpensive, and easy to set up. In this study we tested if auditory ERPs measured using a gaming EEG system (Emotiv EPOC®, www.emotiv.com) were equivalent to those measured by a widely-used, laboratory-based, research EEG system (Neuroscan). Methods. We simultaneously recorded EEGs with the research and gaming EEG systems, whilst presenting 21 adults with 566 standard (1000 Hz) and 100 deviant (1200 Hz) tones under passive (non-attended) and active (attended) conditions. The onset of each tone was marked in the EEGs using a parallel port pulse (Neuroscan) or a stimulus-generated electrical pulse injected into the O1 and O2 channels (Emotiv EPOC®). These markers were used to calculate research and gaming EEG system late auditory ERPs (P1, N1, P2, N2, and P3 peaks) and the mismatch negativity (MMN) in active and passive listening conditions for each participant. Results. Analyses were restricted to frontal sites as these are most commonly reported in auditory ERP research. Intra-class correlations (ICCs) indicated that the morphology of the research and gaming EEG system late auditory ERP waveforms were similar across all participants, but that the research and gaming EEG system MMN waveforms were only similar for participants with non-noisy MMN waveforms (N = 11 out of 21). Peak amplitude and latency measures revealed no significant differences between the size or the timing of the auditory P1, N1, P2, N2, P3, and MMN peaks. Conclusions
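
    The core ERP computation implied here (cutting the EEG into fixed windows around each tone-onset marker, baseline-correcting, averaging per condition, and subtracting the standard average from the deviant average to obtain the MMN) can be sketched in a few lines of numpy. It is a generic illustration only: filtering, referencing, and artifact rejection are omitted, and the array shapes and sampling rate are assumptions rather than details taken from the study.

    ```python
    import numpy as np

    def epoch_average(eeg, fs, event_samples, tmin=-0.1, tmax=0.5):
        """Average event-locked epochs from a continuous EEG record.

        eeg           : array, shape (n_channels, n_samples)
        event_samples : sample indices of tone onsets (the marker pulses)
        Returns the baseline-corrected average ERP, shape (n_channels, n_epoch_samples).
        """
        pre, post = int(-tmin * fs), int(tmax * fs)
        epochs = []
        for s in event_samples:
            if s - pre < 0 or s + post > eeg.shape[1]:
                continue                                 # skip events too close to the edges
            epoch = eeg[:, s - pre:s + post].astype(float)
            epoch -= epoch[:, :pre].mean(axis=1, keepdims=True)  # baseline-correct on pre-stimulus interval
            epochs.append(epoch)
        return np.mean(epochs, axis=0)

    def mismatch_negativity(eeg, fs, standard_onsets, deviant_onsets):
        """MMN difference wave: deviant ERP minus standard ERP."""
        return (epoch_average(eeg, fs, deviant_onsets)
                - epoch_average(eeg, fs, standard_onsets))
    ```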

  6. Validation of the Emotiv EPOC® EEG gaming system for measuring research quality auditory ERPs

    Directory of Open Access Journals (Sweden)

    Nicholas A. Badcock

    2013-02-01

    Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants – particularly children. Recently, a commercial gaming electroencephalography (EEG) system has been developed that is portable, inexpensive, and easy to set up. In this study we tested if auditory ERPs measured using a gaming EEG system (Emotiv EPOC®, www.emotiv.com) were equivalent to those measured by a widely-used, laboratory-based, research EEG system (Neuroscan). Methods. We simultaneously recorded EEGs with the research and gaming EEG systems, whilst presenting 21 adults with 566 standard (1000 Hz) and 100 deviant (1200 Hz) tones under passive (non-attended) and active (attended) conditions. The onset of each tone was marked in the EEGs using a parallel port pulse (Neuroscan) or a stimulus-generated electrical pulse injected into the O1 and O2 channels (Emotiv EPOC®). These markers were used to calculate research and gaming EEG system late auditory ERPs (P1, N1, P2, N2, and P3 peaks) and the mismatch negativity (MMN) in active and passive listening conditions for each participant. Results. Analyses were restricted to frontal sites as these are most commonly reported in auditory ERP research. Intra-class correlations (ICCs) indicated that the morphology of the research and gaming EEG system late auditory ERP waveforms were similar across all participants, but that the research and gaming EEG system MMN waveforms were only similar for participants with non-noisy MMN waveforms (N = 11 out of 21). Peak amplitude and latency measures revealed no significant differences between the size or the timing of the auditory P1, N1, P2, N2, P3, and MMN peaks

  7. Central auditory neurons have composite receptive fields.

    Science.gov (United States)

    Kozlov, Andrei S; Gentner, Timothy Q

    2016-02-02

    High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons in mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.
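
    Divisive normalization and sparseness, the two ingredients named in the learning rule above, each have a standard textbook form that can be written down directly: a unit's (rectified) response is divided by pooled population activity, and small responses are soft-thresholded toward zero. The sketch below shows these canonical operations on a toy response vector; it is not the authors' network, learning rule, or parameter settings.

    ```python
    import numpy as np

    def divisive_normalization(responses, sigma=0.1, exponent=2.0):
        """Each unit's response is divided by a pooled measure of the whole
        population's activity, so units compete and the output range is stabilized."""
        r = np.power(np.maximum(responses, 0.0), exponent)
        return r / (sigma ** exponent + r.sum())

    def sparsify(responses, threshold=0.05):
        """Soft-threshold small responses toward zero, encouraging a sparse code."""
        return np.sign(responses) * np.maximum(np.abs(responses) - threshold, 0.0)

    if __name__ == "__main__":
        # Hypothetical linear filter outputs of a small population to one song snippet.
        x = np.array([0.9, 0.2, 0.05, 0.6, 0.01])
        print(sparsify(divisive_normalization(x)))
    ```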

  8. Diffusion tractography of the subcortical auditory system in a postmortem human brain

    OpenAIRE

    Sitek, Kevin

    2017-01-01

    The subcortical auditory system is challenging to identify with standard human brain imaging techniques: MRI signal decreases toward the center of the brain as well as at higher resolution, both of which are necessary for imaging small brainstem auditory structures. Using high-resolution diffusion-weighted MRI, we asked: Can we identify auditory structures and connections in high-resolution ex vivo images? Which structures and connections can be mapped in vivo?

  9. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    Science.gov (United States)

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
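
    Parameter-mapping sonification, one of the two auditory-display techniques mentioned above, can be illustrated very simply: a continuous value from the interaction (for example, how far the current gaze estimate is from the selected target) is mapped onto an acoustic parameter such as pitch. The sketch below generates a short sine tone whose frequency encodes a value in [0, 1]; the mapping range, tone duration, and the gaze-distance interpretation are illustrative assumptions, not the design used in the paper.

    ```python
    import numpy as np

    def parameter_to_tone(value, fs=44100, dur=0.15, f_low=300.0, f_high=1200.0):
        """Parameter-mapping sonification in its simplest form: a value in [0, 1]
        is mapped to the frequency of a short sine tone (with a raised-cosine
        envelope to avoid clicks).  Values, ranges, and mapping are illustrative."""
        value = float(np.clip(value, 0.0, 1.0))
        freq = f_low * (f_high / f_low) ** value          # logarithmic pitch mapping
        t = np.arange(int(dur * fs)) / fs
        envelope = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / dur))
        return envelope * np.sin(2.0 * np.pi * freq * t)

    if __name__ == "__main__":
        # E.g., sonify how far the current gaze estimate is from the selected target.
        samples = parameter_to_tone(0.25)
        print(samples.shape, samples.dtype)
    ```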

  10. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can lead to an increase in the level of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, it seems that caffeine can change conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2 and 3 mg/kg body weight of caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with the Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared with the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg body weight of caffeine. Wave I latency decreased significantly after consumption of 3 mg/kg body weight of caffeine (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blocking brings about changes in conduction in the central auditory pathway.

  11. Comparative Evaluation of Auditory Attention in 7 to 9 Year Old Learning Disabled Students

    Directory of Open Access Journals (Sweden)

    Fereshteh Amiriani

    2011-06-01

    Background and Aim: Learning disability is a term that refers to a group of disorders manifesting as listening, reading, writing, or mathematical problems. These children mostly have attention difficulties in the classroom, which lead to many learning problems. In this study we aimed to compare the auditory attention of 7 to 9 year old children with learning disability to that of age-matched normal children without learning disability. Methods: Twenty-seven male 7 to 9 year old students with learning disability and 27 age- and sex-matched normal controls were selected by non-probability simple sampling. In order to evaluate auditory selective and divided attention, Farsi versions of the speech-in-noise test and the dichotic digits test were used, respectively. Results: Comparison of the mean speech-in-noise scores in both ears of the 7 and 8 year-old students in the two groups indicated no significant difference (p>0.05). Mean scores of the 9 year-old controls were significantly higher than those of the cases only in the right ear (p=0.033). However, no significant difference was observed between the mean dichotic digits scores for the right ear of 9 year-old students with and without learning disability (p>0.05). Moreover, mean scores of the 7 and 8 year-old students with learning disability were lower than those of their normal peers in the left ear (p>0.05). Conclusion: Selective auditory attention was not affected at the optimal signal-to-noise ratio, while divided attention seems to be affected by delayed maturation of the auditory system or by central auditory system disorders.

  12. Molecular approach of auditory neuropathy.

    Science.gov (United States)

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

    Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene sites were amplified and analyzed by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases, and nine (56%) had the wild-type allele (CC). Of these mutants, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss and none of the normal-hearing individuals had these mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  13. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant
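
    Difference limens of the kind measured here (DLF, DLT, formant discrimination) are commonly estimated with an adaptive procedure; the abstract does not say which procedure was used, so the sketch below shows only a generic 2-down/1-up staircase (which tracks roughly 70.7% correct) run against a simulated listener. The psychometric function, step size, and the simulated DLF value are all invented for illustration.

    ```python
    import numpy as np

    def two_down_one_up(simulated_dlf_hz, start_delta=50.0, step=1.5, n_reversals=8, rng=None):
        """Generic 2-down/1-up adaptive staircase for a frequency-difference limen.
        The 'listener' is simulated: each trial is correct with a probability
        given by a toy psychometric function of the current frequency difference."""
        rng = np.random.default_rng(0) if rng is None else rng
        delta, correct_streak, direction = start_delta, 0, 0
        reversals = []
        while len(reversals) < n_reversals:
            p_correct = 0.5 + 0.5 / (1.0 + (simulated_dlf_hz / delta) ** 2)  # toy psychometric function
            if rng.random() < p_correct:
                correct_streak += 1
                if correct_streak == 2:                 # two correct in a row -> make it harder
                    correct_streak = 0
                    if direction == +1:
                        reversals.append(delta)
                    direction = -1
                    delta /= step
            else:                                       # one error -> make it easier
                correct_streak = 0
                if direction == -1:
                    reversals.append(delta)
                direction = +1
                delta *= step
        return float(np.mean(reversals[-6:]))           # threshold estimate from late reversals

    if __name__ == "__main__":
        print(f"estimated DLF ~ {two_down_one_up(simulated_dlf_hz=5.0):.1f} Hz")
    ```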

  14. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data was obtained from 44 typically developing (TD) children (mean age 10.4 ± 2.4 years) and 95 children with ASD (mean age 10.2 ± 2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  15. Validation of the Emotiv EPOC(®) EEG gaming system for measuring research quality auditory ERPs.

    Science.gov (United States)

    Badcock, Nicholas A; Mousikou, Petroula; Mahajan, Yatin; de Lissa, Peter; Thie, Johnson; McArthur, Genevieve

    2013-01-01

    Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants - particularly children. Recently, a commercial gaming electroencephalography (EEG) system has been developed that is portable, inexpensive, and easy to set up. In this study we tested if auditory ERPs measured using a gaming EEG system (Emotiv EPOC(®), www.emotiv.com) were equivalent to those measured by a widely-used, laboratory-based, research EEG system (Neuroscan). Methods. We simultaneously recorded EEGs with the research and gaming EEG systems, whilst presenting 21 adults with 566 standard (1000 Hz) and 100 deviant (1200 Hz) tones under passive (non-attended) and active (attended) conditions. The onset of each tone was marked in the EEGs using a parallel port pulse (Neuroscan) or a stimulus-generated electrical pulse injected into the O1 and O2 channels (Emotiv EPOC(®)). These markers were used to calculate research and gaming EEG system late auditory ERPs (P1, N1, P2, N2, and P3 peaks) and the mismatch negativity (MMN) in active and passive listening conditions for each participant. Results. Analyses were restricted to frontal sites as these are most commonly reported in auditory ERP research. Intra-class correlations (ICCs) indicated that the morphology of the research and gaming EEG system late auditory ERP waveforms were similar across all participants, but that the research and gaming EEG system MMN waveforms were only similar for participants with non-noisy MMN waveforms (N = 11 out of 21). Peak amplitude and latency measures revealed no significant differences between the size or the timing of the auditory P1, N1, P2, N2, P3, and MMN peaks

  16. Hearing after congenital deafness: central auditory plasticity and sensory deprivation.

    Science.gov (United States)

    Kral, A; Hartmann, R; Tillein, J; Heid, S; Klinke, R

    2002-08-01

    The congenitally deaf cat suffers from a degeneration of the inner ear. The organ of Corti bears no hair cells, yet the auditory afferents are preserved. Since these animals have no auditory experience, they were used as a model for congenital deafness. Kittens were equipped with a cochlear implant at different ages and electro-stimulated over a period of 2.0-5.5 months using a monopolar single-channel compressed analogue stimulation strategy (VIENNA-type signal processor). Following a period of auditory experience, we investigated cortical field potentials in response to electrical biphasic pulses applied by means of the cochlear implant. In comparison to naive unstimulated deaf cats and normal hearing cats, the chronically stimulated animals showed larger cortical regions producing middle-latency responses at or above 300 microV amplitude at the contralateral as well as the ipsilateral auditory cortex. The cortex ipsilateral to the chronically stimulated ear did not show any signs of reduced responsiveness when stimulating the 'untrained' ear through a second cochlear implant inserted in the final experiment. With comparable duration of auditory training, the activated cortical area was substantially smaller if implantation had been performed at an older age of 5-6 months. The data emphasize that young sensory systems in cats have a higher capacity for plasticity than older ones and that there is a sensitive period for the cat's auditory system.

  17. Changes in auditory memory performance following the use of frequency-modulated system in children with suspected auditory processing disorders.

    Science.gov (United States)

    Umat, Cila; Mukari, Siti Z; Ezan, Nurul F; Din, Normah C

    2011-08-01

    To examine the changes in short-term auditory memory following the use of a frequency-modulated (FM) system in children with suspected auditory processing disorders (APDs), and also to compare the advantages of bilateral over unilateral FM fitting. This longitudinal study involved 53 children from Sekolah Kebangsaan Jalan Kuantan 2, Kuala Lumpur, Malaysia who fulfilled the inclusion criteria. The study was conducted from September 2007 to October 2008 in the Department of Audiology and Speech Sciences, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia. The children were aged 7-10 years and were assigned to 3 groups: 15 in the control group (not fitted with FM); 19 in the unilateral; and 19 in the bilateral FM-fitting group. Subjects wore the FM system during school time for 12 weeks. Their working memory (WM), best learning (BL), and retention of information (ROI) were measured using the Rey Auditory Verbal Learning Test at pre-fitting, post-fitting (after 12 weeks of FM usage), and at long term (one year after the usage of the FM system ended). There were significant differences in the mean WM (p=0.001), BL (p=0.019), and ROI (p=0.005) scores at the different measurement times, in which the mean scores at long term were consistently higher than at pre-fitting, despite similar performances at baseline (p>0.05). There was no significant difference in performance between the unilateral- and bilateral-fitting groups. The use of an FM system might have a long-term effect on improving selected short-term auditory memory abilities in some children with suspected APDs, and two FM receivers may not be needed to obtain this benefit on auditory memory performance.

  18. Reversible induction of phantom auditory sensations through simulated unilateral hearing loss.

    Directory of Open Access Journals (Sweden)

    Roland Schaette

    Full Text Available Tinnitus, a phantom auditory sensation, is associated with hearing loss in most cases, but it is unclear if hearing loss causes tinnitus. Phantom auditory sensations can be induced in normal hearing listeners when they experience severe auditory deprivation such as confinement in an anechoic chamber, which can be regarded as somewhat analogous to a profound bilateral hearing loss. As this condition is relatively uncommon among tinnitus patients, induction of phantom sounds by a lesser degree of auditory deprivation could advance our understanding of the mechanisms of tinnitus. In this study, we therefore investigated the reporting of phantom sounds after continuous use of an earplug. Eighteen healthy volunteers with normal hearing wore a silicone earplug continuously in one ear for 7 days. The attenuation provided by the earplugs simulated a mild high-frequency hearing loss; mean attenuation increased with frequency, reaching approximately 30 dB at 3 and 4 kHz. Fourteen out of 18 participants reported phantom sounds during earplug use. Eleven participants presented with stable phantom sounds on day 7 and underwent tinnitus spectrum characterization with the earplug still in place. The spectra showed that the phantom sounds were perceived predominantly as high-pitched, corresponding to the frequency range most affected by the earplug. In all cases, the auditory phantom disappeared when the earplug was removed, indicating a causal relation between auditory deprivation and phantom sounds. This relation matches the predictions of our computational model of tinnitus development, which proposes a possible mechanism by which a stabilization of neuronal activity through homeostatic plasticity in the central auditory system could lead to the development of a neuronal correlate of tinnitus when auditory nerve activity is reduced due to the earplug.

  19. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Maojin Liang

    2017-10-01

    Full Text Available Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CIs were recruited. Ten children were good CI performers (GCP) and ten were poor (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the largest decrease in brain activity occurred in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We suggest that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential for cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  20. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants.

    Science.gov (United States)

    Liang, Maojin; Zhang, Junpeng; Liu, Jiahao; Chen, Yuebo; Cai, Yuexin; Wang, Xianjun; Wang, Junbo; Zhang, Xueyuan; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing

    2017-01-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CIs were recruited. Ten children were good CI performers (GCP) and ten were poor (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the largest decrease in brain activity occurred in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We suggest that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential for cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  1. The memory systems of children with (central) auditory disorder.

    Science.gov (United States)

    Pires, Mayra Monteiro; Mota, Mailce Borges; Pinheiro, Maria Madalena Canina

    2015-01-01

    This study aims to investigate working, declarative, and procedural memory in children with (central) auditory processing disorder who showed poor phonological awareness. Thirty 9- and 10-year-old children participated in the study and were divided into two groups: a control group consisting of 15 children with typical development, and an experimental group consisting of 15 children with (central) auditory processing disorder who were classified according to three behavioral tests and who showed poor phonological awareness on the CONFIAS test battery. The memory systems were assessed using tests adapted in the E-Prime 2.0 software. Working memory was assessed with the Working Memory Test Battery for Children (WMTB-C), declarative memory with a picture-naming test, and procedural memory with a morphosyntactic processing test. The results showed that, compared to the control group, children with poor phonological awareness scored lower on the working, declarative, and procedural memory tasks. These results suggest that in children with (central) auditory processing disorder, phonological awareness is associated with the memory systems analyzed.

  2. Head-Up Auditory Displays for Traffic Collision Avoidance System Advisories: A Preliminary Investigation

    Science.gov (United States)

    Begault, Durand R.

    1993-01-01

    The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.

  3. Multichannel auditory search: toward understanding control processes in polychotic auditory listening.

    Science.gov (United States)

    Lee, M D

    2001-01-01

    Two experiments are presented that serve as a framework for exploring auditory information processing. The framework is referred to as polychotic listening or auditory search, and it requires a listener to scan multiple simultaneous auditory streams for the appearance of a target word (the name of a letter such as A or M). Participants' ability to scan between two and six simultaneous auditory streams of letter and digit names for the name of a target letter was examined using six loudspeakers. The main independent variable was auditory load, or the number of active audio streams on a given trial. The primary dependent variables were target localization accuracy and reaction time. Results showed that as load increased, performance decreased. The performance decrease was evident in reaction time, accuracy, and sensitivity measures. The second study required participants to practice the same task for 10 sessions, for a total of 1800 trials. Results indicated that even with extensive practice, performance was still affected by auditory load. The present results are compared with findings in the visual search literature. The implications for the use of multiple auditory displays are discussed. Potential applications include cockpit and automobile warning displays, virtual reality systems, and training systems.

  4. Automatic detection of frequency changes depends on auditory stimulus intensity.

    Science.gov (United States)

    Salo, S; Lang, A H; Aaltonen, O; Lertola, K; Kärki, T

    1999-06-01

    A cortical cognitive auditory evoked potential, mismatch negativity (MMN), reflects automatic discrimination and echoic memory functions of the auditory system. For this study, we examined whether this potential is dependent on the stimulus intensity. The MMN potentials were recorded from 10 subjects with normal hearing using a sine tone of 1000 Hz as the standard stimulus and a sine tone of 1141 Hz as the deviant stimulus, with probabilities of 90% and 10%, respectively. The intensities were 40, 50, 60, 70, and 80 dB HL for both standard and deviant stimuli in separate blocks. Stimulus intensity had a statistically significant effect on the mean amplitude, rise time parameter, and onset latency of the MMN. Automatic auditory discrimination seems to be dependent on the sound pressure level of the stimuli.

  5. Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.

    Science.gov (United States)

    Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi

    2015-08-01

    To investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD). Three patients who had hearing difficulty as the first clinical sign and/or symptom of ALD were studied. The clinical characteristics of hearing and auditory function were examined in detail, including pure tone audiometry, verbal sound discrimination, otoacoustic emissions (OAE), and auditory brainstem responses (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty and had initially been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients. All patients showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal sounds, discriminating environmental sounds, and lateralizing sounds, and showed strong left-ear suppression in the dichotic listening test. However, once they discriminated verbal sounds, they correctly understood their meaning. Two patients showed prolongation of the I-V and III-V interwave intervals in the ABR, but one showed no abnormality. MRI of these three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing deficit in these subjects was diagnosed as auditory agnosia, not aphasia. It should be emphasized that when patients are suspected of having hearing impairment but show no abnormalities on pure tone audiometry and/or ABR, this should not be diagnosed immediately as a psychogenic response or pathomimesis; auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  6. Changes in Properties of Auditory Nerve Synapses following Conductive Hearing Loss.

    Science.gov (United States)

    Zhuang, Xiaowen; Sun, Wei; Xu-Friedman, Matthew A

    2017-01-11

    Auditory activity plays an important role in the development of the auditory system. Decreased activity can result from conductive hearing loss (CHL) associated with otitis media, which may lead to long-term perceptual deficits. The effects of CHL have been mainly studied at later stages of the auditory pathway, but early stages remain less examined. However, changes in early stages could be important because they would affect how information about sounds is conveyed to higher-order areas for further processing and localization. We examined the effects of CHL at auditory nerve synapses onto bushy cells in the mouse anteroventral cochlear nucleus following occlusion of the ear canal. These synapses, called endbulbs of Held, normally show strong depression in voltage-clamp recordings in brain slices. After 1 week of CHL, endbulbs showed even greater depression, reflecting higher release probability. We observed no differences in quantal size between control and occluded mice. We confirmed these observations using mean-variance analysis and the integration method, which also revealed that the number of release sites decreased after occlusion. Consistent with this, synaptic puncta immunopositive for VGLUT1 decreased in area after occlusion. The level of depression and number of release sites both showed recovery after returning to normal conditions. Finally, bushy cells fired fewer action potentials in response to evoked synaptic activity after occlusion, likely because of increased depression and decreased input resistance. These effects appear to reflect a homeostatic, adaptive response of auditory nerve synapses to reduced activity. These effects may have important implications for perceptual changes following CHL. Normal hearing is important to everyday life, but abnormal auditory experience during development can lead to processing disorders. For example, otitis media reduces sound to the ear, which can cause long-lasting deficits in language skills and verbal
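
    The mean-variance analysis mentioned in this record lends itself to a short illustration. The sketch below implements the standard parabolic mean-variance fit (variance = q·mean − mean²/N, ignoring quantal variability) on hypothetical numbers; it is a generic version of the technique, not necessarily the authors' exact procedure, and the amplitudes, function names, and parameter values are illustrative.

```python
import numpy as np

def mean_variance_fit(means, variances):
    """Fit the standard synaptic mean-variance parabola,
        var = q * mean - mean**2 / N   (quantal variability ignored),
    and return the estimated quantal size q and number of release sites N."""
    X = np.column_stack([means, means ** 2])
    b1, b2 = np.linalg.lstsq(X, variances, rcond=None)[0]
    return b1, -1.0 / b2

# Hypothetical group data: mean EPSC amplitude (nA) and variance (nA^2)
# measured at three different release probabilities.
means = np.array([2.0, 5.0, 8.0])
variances = np.array([0.80, 1.25, 0.80])

q, n_sites = mean_variance_fit(means, variances)
print(f"estimated quantal size q ~ {q:.2f} nA, release sites N ~ {n_sites:.0f}")
```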

  7. Abnormalities in auditory efferent activities in children with selective mutism.

    Science.gov (United States)

    Muchnik, Chava; Ari-Even Roth, Daphne; Hildesheimer, Minka; Arie, Miri; Bar-Haim, Yair; Henkin, Yael

    2013-01-01

    Two efferent feedback pathways to the auditory periphery may play a role in monitoring self-vocalization: the middle-ear acoustic reflex (MEAR) and the medial olivocochlear bundle (MOCB) reflex. Since most studies regarding the role of auditory efferent activity during self-vocalization were conducted in animals, human data are scarce. The working premise of the current study was that selective mutism (SM), a rare psychiatric disorder characterized by consistent failure to speak in specific social situations despite the ability to speak normally in other situations, may serve as a human model for studying the potential involvement of auditory efferent activity during self-vocalization. For this purpose, auditory efferent function was assessed in a group of 31 children with SM and compared to that of a group of 31 normally developing control children (mean age 8.9 and 8.8 years, respectively). All children exhibited normal hearing thresholds and type A tympanograms. MEAR and MOCB functions were evaluated by means of acoustic reflex thresholds and decay functions and the suppression of transient-evoked otoacoustic emissions, respectively. Auditory afferent function was tested by means of auditory brainstem responses (ABR). Results indicated a significantly higher proportion of children with abnormal MEAR and MOCB function in the SM group (58.6 and 38%, respectively) compared to controls (9.7 and 8%, respectively). The prevalence of abnormal MEAR and/or MOCB function was significantly higher in the SM group (71%) compared to controls (16%). Intact afferent function manifested in normal absolute and interpeak latencies of ABR components in all children. The finding of aberrant efferent auditory function in a large proportion of children with SM provides further support for the notion that MEAR and MOCB may play a significant role in the process of self-vocalization. © 2013 S. Karger AG, Basel.

  8. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  9. Magnetic resonance imaging of the internal auditory canal

    International Nuclear Information System (INIS)

    Daniels, D.L.; Herfkins, R.; Koehler, P.R.; Millen, S.J.; Shaffer, K.A.; Williams, A.L.; Haughton, V.M.

    1984-01-01

    Three patients with exclusively or predominantly intracanalicular neuromas and 5 with presumably normal internal auditory canals were examined with prototype 1.4- or 1.5-tesla magnetic resonance (MR) scanners. MR images showed the 7th and 8th cranial nerves in the internal auditory canal. The intracanalicular neuromas had a larger diameter and slightly greater signal strength than the nerves. Early results suggest that minimal enlargement of the nerves can be detected even in the internal auditory canal

  10. The use of auditory and visual context in speech perception by listeners with normal hearing and listeners with cochlear implants

    Directory of Open Access Journals (Sweden)

    Matthew eWinn

    2013-11-01

    Full Text Available There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants can do the same. In this study, listeners with normal hearing (NH) and listeners with cochlear implants (CIs) were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /ʃ/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but showed considerable success. Listeners with cochlear implants not only showed sensitivity to auditory cues to gender, they were also able to use visual cues to gender (i.e., faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with normal hearing. Spectrally degraded stimuli heard by listeners with normal hearing generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with cochlear implants are able to accommodate various sources of acoustic variability either by attending to appropriate acoustic cues or by inferring them via the visual signal.
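
    The boundary-shift analysis described in this record can be illustrated with a simplified sketch. The study fitted mixed-effects logistic regression; the version below is a plain fixed-effects logistic fit on hypothetical trial data, with the category boundary taken as the continuum step where the fitted probability of an /s/ response crosses 0.5. The variable names, context coding, and simulated numbers are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: `step` indexes the /s/-/sh/ acoustic continuum,
# `context` codes the preceding context (0 or 1, e.g., two talker conditions),
# `resp_s` is 1 when the listener reported "s".
rng = np.random.default_rng(1)
step = rng.integers(1, 10, size=2000)
context = rng.integers(0, 2, size=2000)
true_boundary = np.where(context == 1, 6.0, 4.5)   # context shifts the boundary
p_s = 1.0 / (1.0 + np.exp(-(step - true_boundary)))
resp_s = rng.binomial(1, p_s)
df = pd.DataFrame({"step": step, "context": context, "resp_s": resp_s})

# Logistic regression with a step-by-context interaction.
model = smf.logit("resp_s ~ step * context", data=df).fit(disp=False)

def boundary(params, context_value):
    """Continuum step where P(respond 's') = 0.5 for a given context."""
    intercept = params["Intercept"] + context_value * params["context"]
    slope = params["step"] + context_value * params["step:context"]
    return -intercept / slope

print("boundary, context 0:", round(boundary(model.params, 0), 2))
print("boundary, context 1:", round(boundary(model.params, 1), 2))
```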

  11. Brainstem auditory evoked potentials in horses

    Directory of Open Access Journals (Sweden)

    Juliana Almeida Nogueira da Gama

    2016-04-01

    Full Text Available ABSTRACT: The brainstem auditory evoked potential (BAEP) evaluates the integrity of the auditory pathways to the brainstem. The aim of this study was to evoke BAEPs in 21 clinically normal horses. The animals were sedated with detomidine hydrochloride (0.013 mg.kg-1 BW). Earphones were inserted, and rarefaction clicks at 90 dB and noise masking at 40 dB were used. After the test, the latencies of the waves (I, II, III, IV, and V) and interpeaks (I-III, III-V, and I-V) were identified. The mean latencies of the waves were as follows: wave I, 2.24 ms; wave II, 2.4 ms; wave III, 3.61 ms; wave IV, 4.61 ms; and wave V, 5.49 ms. The mean latencies of the interpeaks were as follows: I-III, 1.37 ms; III-V, 1.88 ms; and I-V, 3.26 ms. This is the first study using BAEPs in horses in Brazil, and the observed latencies will be used as normative data for the interpretation of tests performed on horses with changes related to the auditory system or neurologic abnormalities.

  12. Method for Dissecting the Auditory Epithelium (Basilar Papilla) in Developing Chick Embryos.

    Science.gov (United States)

    Levic, Snezana; Yamoah, Ebenezer N

    2016-01-01

    Chickens are an invaluable model for exploring auditory physiology. Similar to humans, the chicken inner ear is morphologically and functionally close to maturity at the time of hatching. In contrast, chicks can regenerate hearing, an ability lost in all mammals, including humans. The extensive morphological, physiological, behavioral, and pharmacological data available, regarding normal development in the chicken auditory system, has driven the progress of the field. The basilar papilla is an attractive model system to study the developmental mechanisms of hearing. Here, we describe the dissection technique for isolating the basilar papilla in developing chick inner ear. We also provide detailed examples of physiological (patch clamping) experiments using this preparation.

  13. From ear to body: the auditory-motor loop in spatial cognition.

    Science.gov (United States)

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relation to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the searching paths allowed us to observe how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space.

  14. From ear to body: the auditory-motor loop in spatial cognition

    Directory of Open Access Journals (Sweden)

    Isabelle eViaud-Delmon

    2014-09-01

    Full Text Available Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorised on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e. a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorise the localisation of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the searching paths allowed us to observe how auditory information was coded to memorise the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favour of the hypothesis that the brain has access to a modality-invariant representation of external space.
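
    As a side note on the HRTF-based rendering described in this record, the sketch below shows the core of binaural synthesis: convolving a mono source with left- and right-ear head-related impulse responses (HRIRs). The HRIRs here are toy placeholders encoding only an interaural time and level difference; a real system like the one described would use measured HRIRs selected for the listener's current head position plus room simulation.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolution with per-ear HRIRs."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)   # (samples, 2) stereo buffer

# Placeholder example: a 1 kHz tone and toy HRIRs that only model an
# interaural time and level difference for a source on the listener's left.
fs = 44100
t = np.arange(0, 0.5, 1 / fs)
tone = 0.2 * np.sin(2 * np.pi * 1000 * t)

itd_samples = int(0.0006 * fs)      # ~0.6 ms interaural delay
hrir_left = np.zeros(256)
hrir_left[0] = 1.0                  # near ear: earlier and louder
hrir_right = np.zeros(256)
hrir_right[itd_samples] = 0.5       # far ear: later and attenuated

stereo = render_binaural(tone, hrir_left, hrir_right)
print(stereo.shape)
```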

  15. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of the pitch-relevant cues that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners... In listeners with SNHL, it is likely that HI listeners rely on the enhanced envelope cues to retrieve the pitch of unresolved harmonics. Hence, the relative importance of pitch cues may be altered in HI listeners, whereby envelope cues may be used instead of TFS cues to obtain a similar performance in pitch...

  16. The cerebral functional location in normal subjects with Chinese classical national music auditory stimulus

    International Nuclear Information System (INIS)

    Sun Da; Xu Wei; Zhan Hongwei; Liu Hongbiao

    2004-01-01

    Purpose: To detect the cerebral functional localization in normal subjects during auditory stimulation with Chinese classical national music. Methods: Ten normal young students of the medical college of Zhejiang University, 22-24 years old, 5 male and 5 female, were studied. First, they underwent 99mTc-ECD brain imaging in the resting state using a dual-detector gamma camera with fan beam collimators. After 2-4 days they were asked to listen to a piece of Chinese classical national music played on the erhu and guzheng for 20 minutes. They were also asked to pay special attention to the name of the piece, to which instruments were playing, and to the imagery evoked by the music. 99mTc-ECD was administered in the first 3 minutes while they listened to the music. Brain imaging was performed 30-60 minutes after the tracer was administered. Results: Compared with the resting state, during listening to the Chinese classical national music and attending to its imagery, the right mid-temporal region was activated in 6 cases, the left mid-temporal in 2 cases, the right superior temporal in 2 cases, the left superior temporal in 6 cases, and the right inferior temporal in 2 cases. Among them, both temporal lobes were activated in 6 cases, the right temporal lobe in 3 cases, and the left temporal lobe in 1 case. Interestingly, the inferior frontal and/or medial frontal lobes were activated in all 10 subjects, and the activity was markedly higher in the frontal than in the temporal regions. Both frontal lobes were activated in 9 subjects, and only the right frontal lobe in 1 case. The right superior frontal lobe was activated in 2 cases. The occipital lobes were activated in 4 subjects: both occipital lobes in 3 cases and the right occipital lobe in 1 case. These 4 subjects stated after listening that, following the music, they had imagined the natural landscape and imagery the music evoked. Other activated regions included the parietal lobes (right and left in 1 case each), the cingulate gyrus (in 2 cases), and left

  17. Diminished auditory sensory gating during active auditory verbal hallucinations.

    Science.gov (United States)

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream; resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
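
    Because the gating ratio defined in this record is a simple amplitude ratio, a short sketch can make the computation explicit. The component search windows, sampling rate, and synthetic waveforms below are assumptions for illustration, not the study's recording or peak-picking protocol.

```python
import numpy as np

# Assumed component search windows (ms after click onset); these are common
# choices, not necessarily the windows used in the study.
WINDOWS_MS = {"P50": (40, 80), "N100": (80, 150), "P200": (150, 250)}

def peak_amplitude(erp, fs, window_ms, component):
    """Peak amplitude of a component within its latency window.
    P50/P200 are positive peaks, N100 a negative peak."""
    lo = int(window_ms[0] / 1000 * fs)
    hi = int(window_ms[1] / 1000 * fs)
    segment = erp[lo:hi]
    return segment.min() if component == "N100" else segment.max()

def gating_ratios(erp_s1, erp_s2, fs):
    """S2/S1 amplitude ratio for each component; higher ratios indicate
    weaker suppression of the response to the second click."""
    return {c: peak_amplitude(erp_s2, fs, w, c) / peak_amplitude(erp_s1, fs, w, c)
            for c, w in WINDOWS_MS.items()}

# Hypothetical averaged ERPs (microvolts) sampled at 1 kHz.
fs = 1000
t = np.arange(0, 0.4, 1 / fs)
erp_s1 = (3.0 * np.exp(-((t - 0.06) ** 2) / 2e-4)
          - 5.0 * np.exp(-((t - 0.11) ** 2) / 4e-4)
          + 4.0 * np.exp(-((t - 0.19) ** 2) / 6e-4))
erp_s2 = 0.5 * erp_s1   # toy case: uniform 50% suppression
print(gating_ratios(erp_s1, erp_s2, fs))
```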

  18. Widespread auditory deficits in tune deafness.

    Science.gov (United States)

    Jones, Jennifer L; Zalewski, Christopher; Brewer, Carmen; Lucker, Jay; Drayna, Dennis

    2009-02-01

    The goal of this study was to investigate auditory function in individuals with deficits in musical pitch perception. We hypothesized that such individuals have deficits in nonspeech areas of auditory processing. We screened 865 randomly selected individuals to identify those who scored poorly on the Distorted Tunes test (DTT), a measure of musical pitch recognition ability. Those who scored poorly were given a comprehensive audiologic examination, and those with hearing loss or other confounding audiologic factors were excluded from further testing. Thirty-five individuals with tune deafness constituted the experimental group. Thirty-four individuals with normal hearing and normal DTT scores, matched for age, gender, handedness, and education, and without overt or reported psychiatric disorders made up the normal control group. Individual and group performance for pure-tone frequency discrimination at 1000 Hz was determined by measuring the difference limen for frequency (DLF). Auditory processing abilities were assessed using tests of pitch pattern recognition, duration pattern recognition, and auditory gap detection. In addition, we evaluated both attention and short- and long-term memory as variables that might influence performance on our experimental measures. Differences between groups were evaluated statistically using Wilcoxon nonparametric tests and t-tests as appropriate. The DLF at 1000 Hz in the group with tune deafness was significantly larger than that of the normal control group. However, approximately one-third of participants with tune deafness had DLFs within the range of performance observed in the control group. Many individuals with tune deafness also displayed a high degree of variability in their intertrial frequency discrimination performance that could not be explained by deficits in memory or attention. Pitch and duration pattern discrimination and auditory gap-detection ability were significantly poorer in the group with tune deafness

  19. Selective attention in normal and impaired hearing.

    Science.gov (United States)

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  20. A neural network model of normal and abnormal auditory information processing.

    Science.gov (United States)

    Du, X; Jansen, B H

    2011-08-01

    The ability of the brain to attenuate the response to irrelevant sensory stimulation is referred to as sensory gating. A gating deficiency has been reported in schizophrenia. To study the neural mechanisms underlying sensory gating, a neuroanatomically inspired model of auditory information processing has been developed. The mathematical model consists of lumped parameter modules representing the thalamus (TH), the thalamic reticular nucleus (TRN), auditory cortex (AC), and prefrontal cortex (PC). It was found that the membrane potential of the pyramidal cells in the PC module replicated auditory evoked potentials, recorded from the scalp of healthy individuals, in response to pure tones. Also, the model produced substantial attenuation of the response to the second of a pair of identical stimuli, just as seen in actual human experiments. We also tested the viewpoint that schizophrenia is associated with a deficit in prefrontal dopamine (DA) activity, which would lower the excitatory and inhibitory feedback gains in the AC and PC modules. Lowering these gains by less than 10% resulted in model behavior resembling the brain activity seen in schizophrenia patients, and replicated the reported gating deficits. The model suggests that the TRN plays a critical role in sensory gating, with the smaller response to a second tone arising from a reduction in inhibition of TH by the TRN. Copyright © 2011 Elsevier Ltd. All rights reserved.
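
    To make the notion of a lumped-parameter module concrete, here is a minimal sketch of a single generic neural-mass population of the kind the abstract describes: a second-order average-PSP kernel plus a sigmoidal potential-to-rate conversion (Jansen-Rit style). The parameter values, the single-population structure, and the toy input are illustrative assumptions; this is not the authors' TH/TRN/AC/PC model.

```python
import numpy as np

def sigmoid(v, e0=2.5, v0=6.0, r=0.56):
    """Average membrane potential (mV) -> population firing rate (1/s)."""
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

def psp_step(y, ydot, rate_in, A=3.25, a=100.0, dt=1e-3):
    """One Euler step of the second-order PSP dynamics
    y'' = A*a*rate_in - 2*a*y' - a^2*y, whose impulse response is
    h(t) = A*a*t*exp(-a*t)."""
    ydd = A * a * rate_in - 2 * a * ydot - a * a * y
    return y + dt * ydot, ydot + dt * ydd

# Drive one excitatory population with a brief "tone" input and record its
# average membrane potential, a stand-in for a module's evoked response.
dt, T = 1e-3, 0.5
y = ydot = 0.0
trace = []
for k in range(int(T / dt)):
    t = k * dt
    pulse = 150.0 if 0.05 <= t < 0.08 else 0.0             # input rate (1/s)
    y, ydot = psp_step(y, ydot, pulse + sigmoid(y), dt=dt)  # with self-feedback
    trace.append(y)
print(f"peak simulated potential: {max(trace):.2f} mV")
```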

  1. Neural plasticity expressed in central auditory structures with and without tinnitus

    Directory of Open Access Journals (Sweden)

    Larry E Roberts

    2012-05-01

    Full Text Available Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To address this question, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by EEG are similar to those induced in age- and hearing-loss-matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz 40-Hz amplitude-modulated sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and P2 transient response, known to localize to primary and nonprimary auditory cortex, respectively. P2 amplitude increased with training equally in participants with tinnitus and in control subjects, suggesting normal remodeling of nonprimary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls ASSR phase advanced toward the stimulus waveform by about ten degrees over training, in agreement with previous results obtained in young normal hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not nonprimary auditory cortex. Auditory training did not reduce tinnitus loudness although a small effect on the tinnitus spectrum was detected.
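
    As background on how a 40-Hz ASSR amplitude and phase can be read out from an averaged response, the sketch below inspects the 40 Hz bin of the discrete Fourier transform of one epoch. The sampling rate, epoch length, and synthetic signal are assumptions for illustration; the study's EEG analysis is more involved than this.

```python
import numpy as np

def assr_amplitude_phase(avg_epoch, fs, f_target=40.0):
    """Amplitude and phase (degrees) of the steady-state response at f_target,
    read from the DFT bin of an averaged epoch whose length spans an integer
    number of cycles of f_target."""
    n = len(avg_epoch)
    spectrum = np.fft.rfft(avg_epoch)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    k = np.argmin(np.abs(freqs - f_target))
    amplitude = 2 * np.abs(spectrum[k]) / n
    phase_deg = np.degrees(np.angle(spectrum[k]))
    return amplitude, phase_deg

# Synthetic averaged epoch: a 40-Hz response of amplitude 0.8 (arbitrary units)
# with a 30-degree phase lag, plus a little residual noise.
fs, dur = 1000, 1.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)
epoch = 0.8 * np.cos(2 * np.pi * 40 * t - np.radians(30)) + 0.05 * rng.standard_normal(len(t))
print(assr_amplitude_phase(epoch, fs))
```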

  2. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    Science.gov (United States)

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

    Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediate postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test that assesses sentence perception in

  3. CT virtual endoscopy of the auditory ossicular chain and its preliminary clinical application

    International Nuclear Information System (INIS)

    Wang Dong; Zhang Wanshi; Xiong Minghui; Xu Jiaxing; Yu Min; Xu Changyu

    2000-01-01

    Objective: To evaluate the ability of CT virtual endoscopy (CTVE) to visualize the auditory ossicular chain and its clinical application. Methods: CTVE of the auditory ossicular chain was performed on a GE HiSpeed CT/i with 1.0 mm slice thickness at pitch 1.0, bone algorithm, 9.6 cm FOV, and 0.1 mm reconstruction interval in 10 normal subjects and 21 patients with middle ear diseases, 14 of whom were confirmed at operation. The threshold values for the normal and abnormal auditory ossicular chain were -600 to -200 HU and 50 to 300 HU, respectively. Results: CTVE clearly demonstrated the shape, size, and relations of the normal auditory ossicular chain. The visualization rate of the malleus, incus, and incudomalleal articulation was 100%, and 32% for the stapedial footplate; the anterior and posterior crura of the stapes could be distinguished in only 21%. Cholesteatoma was found in 12 cases with chronic otitis media, in which CTVE demonstrated varying degrees of destruction of the auditory ossicles. In 1 case with congenital anomaly, ossicular dysplasia was seen. Conclusion: CTVE, a new, non-invasive method for displaying three-dimensional images of the auditory ossicular chain, is useful in evaluating diseases of the ear, especially of the auditory ossicles

  4. Auditory and language outcomes in children with unilateral hearing loss.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Gaboury, Isabelle; Durieux-Smith, Andrée; Coyle, Doug; Whittingham, JoAnne; Nassrallah, Flora

    2018-03-13

    Children with unilateral hearing loss (UHL) are being diagnosed at younger ages because of newborn hearing screening. Historically, they have been considered at risk for difficulties in listening and language development. Little information is available on contemporary cohorts of children identified in the early months of life. We examined auditory and language acquisition outcomes in a contemporary cohort of early-identified children with UHL and compared their outcomes at preschool age with peers with mild bilateral loss and with normal hearing. As part of the Mild and Unilateral Hearing Loss in Children Study, we collected auditory and spoken language outcomes on children with unilateral, bilateral hearing loss and with normal hearing over a four-year period. This report provides a cross-sectional analysis of results at age 48 months. A total of 120 children (38 unilateral and 31 bilateral mild, 51 normal hearing) were enrolled in the study from 2010 to 2015. Children started the study at varying ages between 12 and 36 months of age and were followed until age 36-48 months. The median age of identification of hearing loss was 3.4 months (IQR: 2.0, 5.5) for unilateral and 3.6 months (IQR: 2.7, 5.9) for the mild bilateral group. Families completed an intake form at enrolment to provide baseline child and family-related characteristics. Data on amplification fitting and use were collected via parent questionnaires at each annual assessment interval. This study involved a range of auditory development and language measures. For this report, we focus on the end of follow-up results from two auditory development questionnaires and three standardized speech-language assessments. Assessments included in this report were completed at a median age of 47.8 months (IQR: 38.8, 48.5). Using ANOVA, we examined auditory and language outcomes in children with UHL and compared their scores to children with mild bilateral hearing loss and those with normal hearing. On most

  5. Effect of conductive hearing loss on central auditory function.

    Science.gov (United States)

    Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher

    It has been demonstrated that long-term conductive hearing loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or auditory temporal processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. In this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (controls), aged between 18 and 45 years, were recruited. The Gaps-in-Noise (GIN) test was used to evaluate auditory temporal processing. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The mean GIN threshold was significantly smaller (better) for the control group than for the CHL group in both ears (right: p=0.004; the left-ear difference was also significant), GIN performance was poorer in the CHL group than in the normal-hearing group on both sides, and GIN results did not correlate with the degree of hearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal-hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  6. The Role of Age and Executive Function in Auditory Category Learning

    Science.gov (United States)

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987

  7. Dissociable influences of auditory object vs. spatial attention on visual system oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Jyrki Ahveninen

    Full Text Available Given that both auditory and visual systems have anatomically separate object identification ("what") and spatial ("where") pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory "what" vs. "where" attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic ("what") vs. spatial ("where") aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7-13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Location centered at the alpha range 400-600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity ("what") vs. sound location ("where"). The alpha modulations could be interpreted to reflect enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during "what" vs. "where" auditory attention.
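
    For readers unfamiliar with how band-limited oscillatory power is typically summarized, the sketch below estimates 7-13 Hz alpha power from a single time series with Welch's method. It is a generic, sensor-level illustration on a synthetic signal, not the cortically constrained MEG source estimation used in this study.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(7.0, 13.0)):
    """Mean power spectral density in the alpha band, via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic 10-s trace: a 10-Hz rhythm embedded in broadband noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
trace = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(len(t))
print(f"alpha-band power: {alpha_power(trace, fs):.3f}")
```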

  8. ERP evaluation of auditory sensory memory systems in adults with intellectual disability.

    Science.gov (United States)

    Ikeda, Kazunari; Hashimoto, Souichi; Hayashi, Akiko; Kanno, Atsushi

    2009-01-01

    The auditory sensory memory stage can be functionally divided into two subsystems: a transient-detector system and a permanent feature-detector system (Naatanen, 1992). We assessed these systems in persons with intellectual disability by measuring the event-related potentials (ERPs) N1 and mismatch negativity (MMN), which reflect the two auditory subsystems, respectively. In addition, P3a (an ERP reflecting a stage after sensory memory) was evaluated. Either synthesized vowels or simple tones were delivered during a passive oddball paradigm to adults with and without intellectual disability. ERPs were recorded from midline scalp sites (Fz, Cz, and Pz). Relative to the control group, participants with intellectual disability exhibited longer N1 latency and smaller MMN amplitude. The results for N1 amplitude and MMN latency were broadly comparable between the groups. IQ scores in participants with intellectual disability showed no significant relation to the N1 and MMN measures, whereas IQ scores tended to increase significantly as P3a latency decreased. These outcomes suggest that persons with intellectual disability may have distinct dysfunctions of the two detector systems at the auditory sensory-memory stage. Moreover, the processes following sensory memory might be partly related to a determinant of mental development.

  9. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  10. Auditory system dysfunction in Alzheimer disease and its prodromal states: A review.

    Science.gov (United States)

    Swords, Gabriel M; Nguyen, Lydia T; Mudar, Raksha A; Llano, Daniel A

    2018-04-06

    Recent findings suggest that both peripheral and central auditory system dysfunction occur in the prodromal stages of Alzheimer Disease (AD), and therefore may represent early indicators of the disease. In addition, loss of auditory function itself leads to communication difficulties, social isolation and poor quality of life for both patients with AD and their caregivers. Developing a greater understanding of auditory dysfunction in early AD may shed light on the mechanisms of disease progression and carry diagnostic and therapeutic importance. Herein, we review the literature on hearing abilities in AD and its prodromal stages investigated through methods such as pure-tone audiometry, dichotic listening tasks, and evoked response potentials. We propose that screening for peripheral and central auditory dysfunction in at-risk populations is a low-cost and effective means to identify early AD pathology and provides an entry point for therapeutic interventions that enhance the quality of life of AD patients. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Estimating individual listeners’ auditory-filter bandwidth in simultaneous and non-simultaneous masking

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Caminade, Sabine; Strelcyk, Olaf

    2010-01-01

    Frequency selectivity in the human auditory system is often measured using simultaneous masking of tones presented in notched noise. Based on such masking data, the equivalent rectangular bandwidth (ERB) of the auditory filters can be derived by applying the power spectrum model of masking… In previous studies of bandwidth estimates based on forward masking, only average data across a number of subjects have been considered. The present study is concerned with bandwidth estimates in simultaneous and forward masking in individual normal-hearing subjects. In order to investigate… the reliability of the individual estimates, a statistical resampling method is applied. It is demonstrated that a rather large set of experimental data is required to reliably estimate auditory filter bandwidth, particularly in the case of simultaneous masking. The poor overall reliability of the filter…
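
    The power spectrum model mentioned above can be made concrete with a small sketch: masked thresholds measured at several notch widths are fitted with a symmetric, single-parameter roex(p) filter, and the ERB follows as 4*f0/p. The data values, grid range, and simplifying assumptions (symmetric notch, fixed efficiency K, no dynamic-range limits) are illustrative, not those of the study.

        import numpy as np

        def roex_noise_dB(g0, p):
            """Relative noise power (dB) passing a symmetric roex(p) filter for a
            notched noise whose edges lie at normalized distance g0 from the
            signal frequency (both sides of the notch summed)."""
            lin = 2.0 * (1.0 / p) * (2.0 + p * g0) * np.exp(-p * g0)
            return 10.0 * np.log10(lin)

        def fit_roex(notch_widths, thresholds_dB, p_grid=np.linspace(5, 60, 2000)):
            """Grid-search fit of the filter slope p (the offset K is absorbed by a
            least-squares constant); returns p, K and the ERB as a fraction of f0."""
            best = None
            for p in p_grid:
                pred = roex_noise_dB(notch_widths, p)
                k = np.mean(thresholds_dB - pred)
                err = np.sum((thresholds_dB - (pred + k)) ** 2)
                if best is None or err < best[0]:
                    best = (err, p, k)
            _, p, k = best
            return p, k, 4.0 / p            # ERB = 4 * f0 / p

        # Hypothetical masked thresholds (dB) at notch widths g0 = delta_f / f0
        g0 = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
        thr = np.array([55.0, 50.0, 43.0, 36.0, 30.0])
        p, k, erb_rel = fit_roex(g0, thr)
        print(f"p = {p:.1f}, ERB = {erb_rel:.2f} * f0")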

  12. A stroke patient with impairment of auditory sensory (echoic) memory.

    Science.gov (United States)

    Kojima, T; Karino, S; Yumoto, M; Funayama, M

    2014-04-01

    A 42-year-old man suffered damage to the left supra-sylvian areas due to a stroke and presented with verbal short-term memory (STM) deficits. He occasionally could not recall even a single syllable that he had heard one second before. A study of mismatch negativity using magnetoencephalography suggested that the duration of auditory sensory (echoic) memory traces was reduced on the affected side of the brain. His maximum digit span was four with auditory presentation (equivalent to the 1st percentile for normal subjects), whereas it was up to six with visual presentation (almost within the normal range). He showed only partial recall in the digit span task, with no self-correction or incorrect reproduction. From these findings, reduced echoic memory was thought to have affected his verbal short-term retention. Thus, the impairment of verbal short-term memory observed in this patient was "pure auditory," unlike previously reported patients with deficits of the phonological short-term store (STS), which is the next higher-order memory system. We report this case to present physiological and behavioral data suggesting impaired short-term storage of verbal information, and to demonstrate the influence of deterioration of echoic memory on verbal STM.

  13. Auditory brainstem response latency in forward masking, a marker of sensory deficits in listeners with normal hearing thresholds

    DEFF Research Database (Denmark)

    Mehraei, Golbarg; Paredes Gallardo, Andreu; Shinn-Cunningham, Barbara G.

    2017-01-01

    …low-spontaneous rate fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater effect of a preceding masker… wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low…

  14. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    OpenAIRE

    Scott, Gregory D.; Karns, Christina M.; Dow, Mark W.; Stevens, Courtney; Neville, Helen J.

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants wer...

  15. Modularity in Sensory Auditory Memory

    OpenAIRE

    Clement, Sylvain; Moroni, Christine; Samson, Séverine

    2004-01-01

    The goal of this paper was to review various experimental and neuropsychological studies that support the modular conception of auditory sensory memory or auditory short-term memory. Based on initial findings demonstrating that the verbal sensory memory system can be dissociated from a general auditory memory store at the functional and anatomical levels, we reported a series of studies that provided evidence in favor of multiple auditory sensory stores specialized in retaining eit...

  16. Glycinergic Pathways of the Central Auditory System and Adjacent Reticular Formation of the Rat.

    Science.gov (United States)

    Hunter, Chyren

    The development of techniques to visualize and identify specific transmitters of neuronal circuits has stimulated work on the characterization of pathways in the rat central nervous system that utilize the inhibitory amino acid glycine as their neurotransmitter. Glycine is a major inhibitory transmitter in the spinal cord and brainstem of vertebrates, where it satisfies the major criteria for neurotransmitter action. Some of these characteristics are: uneven distribution in the brain, high-affinity reuptake mechanisms, inhibitory neurophysiological actions on certain neuronal populations, uneven receptor distribution and the specific antagonism of its actions by the convulsant alkaloid strychnine. Behaviorally, antagonism of glycinergic neurotransmission in the medullary reticular formation is linked to the development of myoclonus and seizures, which may be initiated by auditory as well as other stimuli. In the present study, age-related decreases in the concentration of glycine, as well as in the density of glycine receptors, were found in the medulla and may be responsible for the lowered threshold for strychnine seizures observed in older rats. Neuroanatomical pathways in the central auditory system and medullary and pontine reticular formation (RF) were investigated using retrograde transport of tritiated glycine to identify glycinergic pathways; immunohistochemical techniques were used to corroborate the location of glycine neurons. Within the central auditory system, retrograde transport studies using tritiated glycine demonstrated an ipsilateral glycinergic pathway linking nuclei of the ascending auditory system. This pathway has its cell bodies in the medial nucleus of the trapezoid body (MNTB) and projects to the ventrocaudal division of the ventral nucleus of the lateral lemniscus (VLL). Collaterals of this glycinergic projection terminate in the ipsilateral lateral superior olive (LSO). Other glycinergic pathways found were afferent to the VLL and have their origin

  17. The multi-level impact of chronic intermittent hypoxia on central auditory processing.

    Science.gov (United States)

    Wong, Eddie; Yang, Bin; Du, Lida; Ho, Wai Hong; Lau, Condon; Ke, Ya; Chan, Ying Shing; Yung, Wing Ho; Wu, Ed X

    2017-08-01

    During hypoxia, the tissues do not obtain adequate oxygen. Chronic hypoxia can lead to many health problems. A relatively common cause of chronic hypoxia is sleep apnea. Sleep apnea is a sleep breathing disorder that affects 3-7% of the population. During sleep, the patient's breathing starts and stops. This can lead to hypertension, attention deficits, and hearing disorders. In this study, we apply an established chronic intermittent hypoxemia (CIH) model of sleep apnea to study its impact on auditory processing. Adult rats were reared for seven days during sleeping hours in a gas chamber with oxygen level cycled between 10% and 21% (normal atmosphere) every 90 s. During awake hours, the subjects were housed in standard conditions with normal atmosphere. CIH treatment significantly reduces arterial oxygen partial pressure and oxygen saturation during sleeping hours (relative to controls). After treatment, subjects underwent functional magnetic resonance imaging (fMRI) with broadband sound stimulation. Responses are observed in major auditory centers in all subjects, including the auditory cortex (AC) and auditory midbrain. fMRI signals from the AC are statistically significantly increased after CIH by 0.13% in the contralateral hemisphere and 0.10% in the ipsilateral hemisphere. In contrast, signals from the lateral lemniscus of the midbrain are significantly reduced by 0.39%. Signals from the neighboring inferior colliculus of the midbrain are relatively unaffected. Chronic hypoxia affects multiple levels of the auditory system and these changes are likely related to hearing disorders associated with sleep apnea. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users

    Science.gov (United States)

    Jaekel, Brittany N.; Newman, Rochelle S.; Goupell, Matthew J.

    2017-01-01

    Purpose: Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate…

  19. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    Science.gov (United States)

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks places a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency were obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative to estimating frequency selectivity in mice that is well-suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Procedures for central auditory processing screening in schoolchildren.

    Science.gov (United States)

    Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella

    2018-03-22

    Central auditory processing screening in schoolchildren has led to debates in the literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PubMed databases by two researchers. The descriptors used in Portuguese and English were: auditory processing, screening, hearing, auditory perception, children, auditory tests, and their respective terms in Portuguese. Inclusion criteria were: original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English. Exclusion criteria were: studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluations of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and that are normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easy-access and low-cost alternative for the auditory screening of Brazilian schoolchildren. Interactive tools should be proposed, that

  1. Abnormal Auditory Brainstem Response (ABR Findings in a Near-Normal Hearing Child with Noonan Syndrome

    Directory of Open Access Journals (Sweden)

    Bahram Jalaei

    2017-01-01

    Full Text Available Introduction: Noonan syndrome (NS) is a heterogeneous genetic disease that affects many parts of the body. It was named after Dr. Jacqueline Anne Noonan, a paediatric cardiologist. Case Report: We report audiological tests and auditory brainstem response (ABR) findings in a 5-year-old Malay boy with NS. Despite showing the marked signs of NS, the child could only produce a few meaningful words. Audiological tests found him to have bilateral mild conductive hearing loss at low frequencies. In ABR testing, despite having good waveform morphology, the results were atypical. The absolute latency of wave V was normal, but the interpeak latencies of waves I-V, I-II, and II-III were prolonged. Interestingly, the interpeak latency of waves III-V was abnormally short. Conclusion: The abnormal ABR results are possibly due to an abnormal anatomical condition of the brainstem and might contribute to the speech delay.

  2. Representation of complex vocalizations in the Lusitanian toadfish auditory system: evidence of fine temporal, frequency and amplitude discrimination

    Science.gov (United States)

    Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich

    2011-01-01

    Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044

  3. Auditory Selective Attention in Cerebral-Palsied Individuals.

    Science.gov (United States)

    Laraway, Lee Ann

    1985-01-01

    To examine differences between the auditory selective attention abilities of normal and cerebral-palsied individuals, 23 cerebral-palsied and 23 normal subjects (ages 5-21) were asked to repeat a series of 30 items in the presence of intermittent white noise. Results indicated that cerebral-palsied individuals perform significantly more poorly when the…

  4. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  5. [A case of transient auditory agnosia and schizophrenia].

    Science.gov (United States)

    Kanzaki, Jin; Harada, Tatsuhiko; Kanzaki, Sho

    2011-03-01

    We report a case of transient functional auditory agnosia and schizophrenia and discuss their relationship. A 30-year-old woman with schizophrenia reporting bilateral hearing loss was found during history taking to be able to hear, but she could neither understand speech nor discriminate among environmental sounds. Audiometry showed normal hearing but low speech discrimination. Otoacoustic emissions and the auditory brainstem response were normal. Magnetic resonance imaging (MRI) performed elsewhere showed no abnormal findings. We assumed that taking care of her grandparents, who had been discharged from the hospital, had unduly stressed her; her condition improved shortly after she stopped caring for them, returned home, and started taking a minor tranquilizer.

  6. Serial auditory-evoked potentials in the diagnosis and monitoring of a child with Landau-Kleffner syndrome.

    Science.gov (United States)

    Plyler, Erin; Harkrider, Ashley W

    2013-01-01

    A boy, aged 2 1/2 yr, experienced sudden deterioration of speech and language abilities. He saw multiple medical professionals across 2 yr. By almost 5 yr, his vocabulary diminished from 50 words to 4, and he was referred to our speech and hearing center. The purpose of this study was to heighten awareness of Landau-Kleffner syndrome (LKS) and emphasize the importance of an objective test battery that includes serial auditory-evoked potentials (AEPs) to audiologists who often are on the front lines of diagnosis and treatment delivery when faced with a child experiencing unexplained loss of the use of speech and language. Clinical report. Interview revealed a family history of seizure disorder. Normal social behaviors were observed. Acoustic reflexes and otoacoustic emissions were consistent with normal peripheral auditory function. The child could not complete behavioral audiometric testing or auditory processing tests, so serial AEPs were used to examine central nervous system function. Normal auditory brainstem responses, a replicable Na and absent Pa of the middle latency responses, and abnormal slow cortical potentials suggested dysfunction of auditory processing at the cortical level. The child was referred to a neurologist, who confirmed LKS. At age 7 1/2 yr, after 2 1/2 yr of antiepileptic medications, electroencephalographic (EEG) and audiometric measures normalized. Presently, the child communicates manually with limited use of oral information. Audiologists often are one of the first professionals to assess children with loss of speech and language of unknown origin. Objective, noninvasive, serial AEPs are a simple and valuable addition to the central audiometric test battery when evaluating a child with speech and language regression. The inclusion of these tests will markedly increase the chance for early and accurate referral, diagnosis, and monitoring of a child with LKS which is imperative for a positive prognosis. American Academy of Audiology.

  7. Reference-Free Assessment of Speech Intelligibility Using Bispectrum of an Auditory Neurogram

    Science.gov (United States)

    Hossain, Mohammad E.; Jassim, Wissam A.; Zilany, Muhammad S. A.

    2016-01-01

    Sensorineural hearing loss occurs due to damage to the inner and outer hair cells of the peripheral auditory system. Hearing loss can cause decreases in audibility, dynamic range, and frequency and temporal resolution of the auditory system, and all of these effects are known to affect speech intelligibility. In this study, a new reference-free speech intelligibility metric is proposed using 2-D neurograms constructed from the output of a computational model of the auditory periphery. The responses of auditory-nerve fibers with a wide range of characteristic frequencies were simulated to construct the neurograms. The features of the neurograms were extracted using third-order statistics referred to as the bispectrum. The phase coupling of the neurogram bispectrum provides a unique insight into the presence (or deficit) of supra-threshold nonlinearities beyond audibility for listeners with normal hearing (or hearing loss). The speech intelligibility scores predicted by the proposed method were compared to the behavioral scores for listeners with normal hearing and hearing loss, both in quiet and under noisy background conditions. The results were also compared to the performance of some existing methods. The predicted results showed a good fit with a small error, suggesting that the subjective scores can be estimated reliably using the proposed neural-response-based metric. The proposed metric also had a wide dynamic range, and the predicted scores were well-separated as a function of hearing loss. The proposed metric successfully captures the effects of hearing loss and supra-threshold nonlinearities on speech intelligibility. This metric could be applied to evaluate the performance of various speech-processing algorithms designed for hearing aids and cochlear implants. PMID:26967160
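
    The third-order statistic referred to above can be illustrated with a direct FFT-based bispectrum estimate; phase-coupled frequency triads (f1, f2, f1+f2) produce large bispectral magnitude. The sketch below operates on a 1-D toy signal rather than a 2-D neurogram, and the segment length, frequencies, and noise level are illustrative assumptions.

        import numpy as np

        def bispectrum(x, seg_len=256):
            """Direct (FFT-based) bispectrum estimate of a 1-D signal.

            The signal is split into non-overlapping segments; for each segment
            B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2)) is accumulated, then
            averaged. Only the non-negative-frequency quadrant is returned."""
            n_seg = len(x) // seg_len
            half = seg_len // 2
            acc = np.zeros((half, half), dtype=complex)
            for s in range(n_seg):
                seg = x[s * seg_len:(s + 1) * seg_len]
                X = np.fft.fft(seg - seg.mean())
                for f1 in range(half):
                    for f2 in range(half - f1):
                        acc[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
            freqs = np.fft.fftfreq(seg_len)[:half]   # cycles per sample
            return acc / max(n_seg, 1), freqs

        # Toy signal: phase-coupled triad at bins 16, 32 and 48 of a 256-point FFT
        rng = np.random.default_rng(1)
        n, seg_len = 4096, 256
        t = np.arange(n)
        x = (np.cos(2 * np.pi * 16 / seg_len * t)
             + np.cos(2 * np.pi * 32 / seg_len * t)
             + np.cos(2 * np.pi * 48 / seg_len * t)   # sum frequency -> coupling
             + 0.1 * rng.standard_normal(n))
        B, freqs = bispectrum(x, seg_len)
        f1, f2 = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        print("strongest coupling near normalized frequencies:", freqs[f1], freqs[f2])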

  8. Dissociation of Detection and Discrimination of Pure Tones following Bilateral Lesions of Auditory Cortex

    Science.gov (United States)

    Dykstra, Andrew R.; Koh, Christine K.; Braida, Louis D.; Tramo, Mark Jude

    2012-01-01

    It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5±2.1 dB in the left ear and 6.5±1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6±0.22 dB; right ear: 1.7±0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old-world monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed. PMID:22957087

  9. Dissociation of detection and discrimination of pure tones following bilateral lesions of auditory cortex.

    Science.gov (United States)

    Dykstra, Andrew R; Koh, Christine K; Braida, Louis D; Tramo, Mark Jude

    2012-01-01

    It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5 ± 2.1 dB in the left ear and 6.5 ± 1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6 ± 0.22 dB; right ear: 1.7 ± 0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old-world monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.

  10. Dissociation of detection and discrimination of pure tones following bilateral lesions of auditory cortex.

    Directory of Open Access Journals (Sweden)

    Andrew R Dykstra

    Full Text Available It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5 ± 2.1 dB in the left ear and 6.5 ± 1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6 ± 0.22 dB; right ear: 1.7 ± 0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old-world monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.

  11. Binaural interaction in auditory evoked potentials: Brainstem, middle- and long-latency components

    OpenAIRE

    McPherson, DL; Starr, A

    1993-01-01

    Binaural interaction occurs in the auditory evoked potentials when the sum of the monaural auditory evoked potentials is not equivalent to the binaural auditory evoked potentials. Binaural interaction of the early- (0-10 ms), middle- (10-50 ms) and long-latency (50-200 ms) auditory evoked potentials was studied in 17 normal young adults. For the early components, binaural interaction was maximal at 7.35 ms, accounting for a reduction of 21% of the amplitude of the binaural evoked potentials. ...
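
    The definition used above reduces to a simple waveform subtraction, sketched below with made-up Gaussian "waves": the binaural interaction component is the binaural response minus the sum of the two monaural responses. The sampling rate, latency, and 0.79 scaling are illustrative only.

        import numpy as np

        def binaural_interaction(binaural, left, right):
            """Binaural interaction component: BIC(t) = B(t) - [L(t) + R(t)]."""
            return np.asarray(binaural) - (np.asarray(left) + np.asarray(right))

        # Toy waveforms (arbitrary units): a Gaussian "wave" peaking near 7.35 ms
        fs = 20_000
        t = np.arange(0, 0.015, 1 / fs)
        wave = np.exp(-((t - 0.00735) / 0.001) ** 2)
        left, right = 0.5 * wave, 0.5 * wave
        binaural = 0.79 * (left + right)          # binaural response is smaller
        bic = binaural_interaction(binaural, left, right)
        print(f"max |BIC| = {np.abs(bic).max():.2f} (arbitrary units)")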

  12. Persian competing word test: Development and preliminary results in normal children

    Directory of Open Access Journals (Sweden)

    Mohammad Ebrahim Mahdavi

    2008-12-01

    Full Text Available Background and Aim: Assessment of central auditory processing skills requires various behavioral tests in the format of a test battery. There are few Persian speech tests for documenting central auditory processing disorders. The purpose of this study was to develop a dichotic test composed of monosyllabic words suitable for the evaluation of central auditory processing in Persian-speaking children, and to report its preliminary results in a group of normal children. Materials and Methods: The Persian words-in-competing-manner test was developed using the most frequent monosyllabic words in children's storybooks, as reported in previous research. The test was performed at MCL on forty-five normal children (39 right-handed and 6 left-handed) aged 5-11 years. The children did not show any obvious problems in hearing, speech, language, or learning. Free-recall (n=28) and directed-listening (n=17) tasks were investigated. Results: The results show that, in the directed-listening task, there is a significant performance advantage for the pre-cued ear relative to the opposite side. A right-ear advantage is evident in the free-recall condition. Average performance of the children in directed recall is significantly better than in free recall. The average raw score of the test increases with the children's age. Conclusion: The Persian words-in-competing-manner test, as a dichotic test, can show the major characteristics of dichotic listening and the effect of maturation of the central auditory system on it in normal children.

  13. Fundamental deficits of auditory perception in Wernicke's aphasia.

    Science.gov (United States)

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
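
    The abstract notes that thresholds were obtained with criterion-free adaptive procedures. The sketch below shows one common choice, a 2-down/1-up staircase that converges on the ~70.7% correct point; the specific rule, step sizes, and the simulated listener are assumptions for illustration, not necessarily the procedure used in the study.

        import random

        def two_down_one_up(respond, start=10.0, step=2.0, min_step=0.25, n_reversals=8):
            """2-down/1-up adaptive staircase tracking ~70.7% correct.

            respond(level) -> True/False. The step size is halved at each reversal;
            the threshold estimate is the mean level over the last four reversals."""
            level, streak, last_dir = start, 0, 0
            reversals = []
            while len(reversals) < n_reversals:
                if respond(level):
                    streak += 1
                    if streak == 2:                # two correct in a row -> harder
                        streak = 0
                        if last_dir == +1:
                            reversals.append(level)
                            step = max(step / 2, min_step)
                        last_dir = -1
                        level = max(level - step, 0.0)
                else:                              # one incorrect -> easier
                    streak = 0
                    if last_dir == -1:
                        reversals.append(level)
                        step = max(step / 2, min_step)
                    last_dir = +1
                    level += step
            return sum(reversals[-4:]) / 4.0

        # Simulated listener whose true FM-detection "depth" threshold is 3 units
        random.seed(0)
        listener = lambda depth: depth + random.gauss(0.0, 0.5) > 3.0
        print(f"estimated threshold ~ {two_down_one_up(listener):.2f}")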

  14. Tinnitus intensity dependent gamma oscillations of the contralateral auditory cortex.

    Directory of Open Access Journals (Sweden)

    Elsa van der Loo

    Full Text Available BACKGROUND: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. METHODS AND FINDINGS: In unilateral tinnitus patients (N = 15; 10 right, 5 left), source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). CONCLUSION: Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception.
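
    The reported relationship is, at its core, a correlation between per-patient gamma-band power and Visual Analogue Scale loudness ratings. The sketch below computes band power with a simple periodogram and a Pearson correlation across simulated "patients"; all data, the gamma band limits, and the built-in effect size are illustrative assumptions.

        import numpy as np
        from scipy.stats import pearsonr

        def band_power(x, fs, band=(30.0, 45.0)):
            """Mean power in a frequency band via the periodogram (gamma here)."""
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return psd[mask].mean()

        # Hypothetical per-patient data: resting-state source waveforms from the
        # contralateral auditory cortex plus VAS loudness scores (0-10)
        rng = np.random.default_rng(2)
        fs, dur = 500, 60
        vas = rng.uniform(2, 9, size=15)
        gamma = []
        for score in vas:
            t = np.arange(fs * dur) / fs
            amp = 0.05 * score                 # built-in positive relationship
            x = amp * np.sin(2 * np.pi * 38 * t) + rng.standard_normal(fs * dur)
            gamma.append(band_power(x, fs))
        r, p = pearsonr(gamma, vas)
        print(f"r = {r:.2f}, p = {p:.3g}")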

  15. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners With Normal Hearing.

    Science.gov (United States)

    Millman, Rebecca E; Mattys, Sven L

    2017-05-24

    Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise ratios. Speech perception in noise and AVWM were measured in 30 listeners (age range 31-67 years) with normal hearing. AVWM was estimated using forward digit recall, backward digit recall, and nonword repetition. After controlling for the effects of age and average pure-tone hearing threshold, speech perception in modulated maskers was related to individual differences in the phonological component of working memory (as assessed by nonword repetition) but only in the least favorable signal-to-noise ratio. The executive component of working memory (as assessed by backward digit) was not predictive of speech perception in any conditions. AVWM is predictive of the ability to benefit from temporal dips in modulated maskers: Listeners with greater phonological WMC are better able to correctly identify sentences in modulated noise backgrounds.
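
    "After controlling for the effects of age and average pure-tone hearing threshold" can be illustrated as a hierarchical regression: compare the variance explained by a baseline model (age + pure-tone average) with a model that adds the working-memory predictor. The sketch below uses ordinary least squares on simulated data; the variable names and effect sizes are assumptions, not the study's data.

        import numpy as np

        def r_squared(X, y):
            """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
            X = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1 - resid.var() / y.var()

        # Hypothetical data for 30 listeners: age (years), pure-tone average (dB HL),
        # nonword-repetition score, and speech-in-noise score at the hardest SNR
        rng = np.random.default_rng(3)
        n = 30
        age = rng.uniform(31, 67, n)
        pta = rng.uniform(0, 20, n)
        nonword = rng.normal(50, 10, n)
        speech = 80 - 0.2 * age - 0.5 * pta + 0.3 * nonword + rng.normal(0, 3, n)

        r2_base = r_squared(np.column_stack([age, pta]), speech)
        r2_full = r_squared(np.column_stack([age, pta, nonword]), speech)
        print(f"R^2 gain from adding nonword repetition: {r2_full - r2_base:.3f}")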

  16. Auditory Memory deficit in Elderly People with Hearing Loss

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2013-06-01

    Full Text Available Introduction: Hearing loss is one of the most common problems in elderly people, and its functional side effects are various. Because hearing loss is a common impairment in elderly people, the importance of its possible effects on auditory memory is undeniable. This study focuses on the effects of hearing loss on auditory memory. Materials and Methods: The Dichotic Auditory Memory Test (DVMT) was performed on 47 elderly people, aged 60 to 80, who were divided into two groups: the first group consisted of 24 elderly people with normal hearing, and the second consisted of 23 elderly people with bilateral, symmetrical, mild to moderate high-frequency sensorineural hearing loss due to aging; both genders were included. Results: A significant difference was observed in DVMT between elderly people with normal hearing and those with hearing loss (P

  17. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory-guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. No counterpart of visual perceptual echoes in the auditory system.

    Directory of Open Access Journals (Sweden)

    Barkın İlhan

    Full Text Available It has been previously demonstrated by our group that a visual stimulus made of dynamically changing luminance evokes an echo or reverberation at ~10 Hz, lasting up to a second. In this study we aimed to reveal whether similar echoes also exist in the auditory modality. A dynamically changing auditory stimulus equivalent to the visual stimulus was designed and employed in two separate series of experiments, and the presence of reverberations was analyzed based on reverse correlations between stimulus sequences and EEG epochs. The first experiment directly compared visual and auditory stimuli: while previous findings of ~10 Hz visual echoes were verified, no similar echo was found in the auditory modality regardless of frequency. In the second experiment, we tested if auditory sequences would influence the visual echoes when they were congruent or incongruent with the visual sequences. However, the results in that case similarly did not reveal any auditory echoes, nor any change in the characteristics of visual echoes as a function of audio-visual congruence. The negative findings from these experiments suggest that brain oscillations do not equivalently affect early sensory processes in the visual and auditory modalities, and that alpha (8-13 Hz) oscillations play a special role in vision.
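
    The reverse-correlation analysis mentioned above amounts to cross-correlating the random stimulus sequence with the EEG at a range of positive lags, which recovers an impulse-response-like "echo function". The sketch below demonstrates this on simulated data containing a damped 10 Hz echo; the sampling rate, kernel, and noise level are illustrative assumptions.

        import numpy as np

        def reverse_correlation(stimulus, eeg, fs, max_lag_s=1.0):
            """Cross-correlate a random stimulus sequence with the EEG to estimate
            the 'echo function' at lags from 0 to max_lag_s seconds."""
            stimulus = stimulus - stimulus.mean()
            eeg = eeg - eeg.mean()
            max_lag = int(max_lag_s * fs)
            lags = np.arange(max_lag)
            echo = np.array([np.dot(stimulus[:len(stimulus) - lag], eeg[lag:])
                             for lag in lags]) / len(stimulus)
            return lags / fs, echo

        # Toy data: EEG that "echoes" the stimulus at 10 Hz for about half a second
        rng = np.random.default_rng(5)
        fs, dur = 160, 30
        stim = rng.standard_normal(fs * dur)
        t_kernel = np.arange(0, 1.0, 1 / fs)
        kernel = np.sin(2 * np.pi * 10 * t_kernel) * np.exp(-t_kernel / 0.2)
        eeg = np.convolve(stim, kernel, mode="full")[:len(stim)]
        eeg += rng.standard_normal(len(stim))
        lags, echo = reverse_correlation(stim, eeg, fs)
        print("largest echo values at lags (s):", lags[np.argsort(np.abs(echo))[-3:]])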

  19. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    Science.gov (United States)

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors.

  20. Stability of auditory discrimination and novelty processing in physiological aging.

    Science.gov (United States)

    Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele

    2013-01-01

    Complex higher-order cognitive functions and their possible changes with aging are mandatory objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine whether N100, MMN and P3a parameters are stable in healthy aged subjects, compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments, and their ERPs were obtained with auditory stimulation at two different interstimulus intervals, during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameter obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  1. Auditory Hypersensitivity in Children with Autism Spectrum Disorders

    Science.gov (United States)

    Lucker, Jay R.

    2013-01-01

    A review of records was completed to determine whether children with auditory hypersensitivities have difficulty tolerating loud sounds due to auditory-system factors or some other factors not directly involving the auditory system. Records of 150 children identified as not meeting autism spectrum disorders (ASD) criteria and another 50 meeting…

  2. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.

    Science.gov (United States)

    Stropahl, Maren; Debener, Stefan

    2017-01-01

    There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises if cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the auditory system

  3. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration

    Directory of Open Access Journals (Sweden)

    Maren Stropahl

    2017-01-01

    Full Text Available There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises if cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the

  4. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus… level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters were also… in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible…
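
    One ingredient named above, loudness growth as a function of threshold and stimulus level, can be caricatured with a toy recruitment function: the impaired ear is silent below its elevated threshold and then grows more steeply until it catches up with the normal ear. This is a deliberately simplified sketch under stated assumptions, not the published model.

        import numpy as np

        def loudness_sones(level_db, hearing_loss_db=0.0, catch_up_db=100.0):
            """Toy loudness-growth function with recruitment.

            Normal ear: 1 sone at 40 dB, doubling every 10 dB (sones = 2**((L-40)/10)).
            Impaired ear: silent below its elevated threshold, then loudness grows
            steeply and rejoins the normal ear at `catch_up_db`."""
            level_db = np.asarray(level_db, dtype=float)
            normal = 2.0 ** ((level_db - 40.0) / 10.0)
            if hearing_loss_db <= 0.0:
                return normal
            # Map [threshold .. catch_up] linearly onto the normal ear's [0 .. catch_up];
            # above catch_up the impaired ear simply follows the normal ear.
            frac = np.clip((level_db - hearing_loss_db) / (catch_up_db - hearing_loss_db), 0.0, None)
            equivalent = np.minimum(frac * catch_up_db, level_db)
            return np.where(level_db <= hearing_loss_db, 0.0, 2.0 ** ((equivalent - 40.0) / 10.0))

        levels = np.array([30.0, 50.0, 70.0, 90.0, 110.0])
        print("normal ear  :", np.round(loudness_sones(levels), 2))
        print("50 dB HL ear:", np.round(loudness_sones(levels, hearing_loss_db=50.0), 2))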

  5. Sensorimotor nucleus NIf is necessary for auditory processing but not vocal motor output in the avian song system.

    Science.gov (United States)

    Cardin, Jessica A; Raksin, Jonathan N; Schmidt, Marc F

    2005-04-01

    Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.

  6. Music lessons improve auditory perceptual and cognitive performance in deaf children.

    Science.gov (United States)

    Rochette, Françoise; Moussard, Aline; Bigand, Emmanuel

    2014-01-01

    Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5-4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  7. Music lessons improve auditory perceptual and cognitive performance in deaf children

    Directory of Open Access Journals (Sweden)

    Françoise eROCHETTE

    2014-07-01

    Full Text Available Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5 to 4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically-trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  8. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    Full Text Available INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included measuring electrophysiological and behavioral auditory processing and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance in some of the behavioral auditory processing tests and higher hearing aid benefit in noisy situations (p-value < 0.05). No changes were noted for the control group (p-value < 0.05). CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency, improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, figure-to-ground for verbal sounds and greater benefits in reverberant and noisy environments.

  9. Electrophysiological assessment of auditory processing disorder in children with non-syndromic cleft lip and/or palate.

    Science.gov (United States)

    Ma, Xiaoran; McPherson, Bradley; Ma, Lian

    2016-01-01

    Cleft lip and/or palate is a common congenital craniofacial malformation found worldwide. A frequently associated disorder is conductive hearing loss, and this disorder has been thoroughly investigated in children with non-syndromic cleft lip and/or palate (NSCL/P). However, analysis of auditory processing function is rarely reported for this population, although this issue should not be ignored since abnormal auditory cortical structures have been found in populations with cleft disorders. The present study utilized electrophysiological tests to assess the auditory status of a large group of children with NSCL/P, and investigated whether this group had less robust central auditory processing abilities compared to craniofacially normal children. 146 children with NSCL/P who had normal peripheral hearing thresholds, and 60 craniofacially normal children aged from 6 to 15 years, were recruited. Electrophysiological tests, including auditory brainstem response (ABR), P1-N1-P2 complex, and P300 component recording, were conducted. ABR and N1 wave latencies were significantly prolonged in children with NSCL/P. An atypical developmental trend was found for long latency potentials in children with cleft compared to control group children. Children with unilateral cleft lip and palate showed a greater level of abnormal results compared with other cleft subgroups, whereas the cleft lip subgroup had the most robust responses for all tests. Children with NSCL/P may have slower than normal neural transmission times between the peripheral auditory nerve and brainstem. Possible delayed development of myelination and synaptogenesis may also influence auditory processing function in this population. Present research outcomes were consistent with previous, smaller-sample-size electrophysiological studies on infants and children with cleft lip/palate disorders. In view of these findings, and reports of educational disadvantage associated with cleft disorders, further research

  10. Assessment of auditory impression of the coolness and warmness of automotive HVAC noise.

    Science.gov (United States)

    Nakagawa, Seiji; Hotehama, Takuya; Kamiya, Masaru

    2017-07-01

    Noise induced by a heating, ventilation and air conditioning (HVAC) system in a vehicle is an important factor that affects the comfort of the interior of a car cabin. Much effort has been devoted to reducing noise levels; however, there is a need for a new sound design that addresses the noise problem from a different point of view. In this study, focusing on the auditory impression of automotive HVAC noise concerning coolness and warmness, psychoacoustical listening tests were performed using a paired comparison technique under various conditions of room temperature. Five stimuli were synthesized by stretching the spectral envelopes of recorded automotive HVAC noise to assess the effect of the spectral centroid, and were presented to normal-hearing subjects. Results show that the spectral centroid significantly affects the auditory impression concerning coolness and warmness; a higher spectral centroid induces a cooler auditory impression regardless of the room temperature.
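
    The spectral centroid manipulated above is a standard spectral descriptor: the amplitude-weighted mean frequency of the magnitude spectrum. The sketch below is a minimal, generic illustration of how such a value can be computed for a recorded noise segment; the function name, sample rate and synthetic signal are placeholders, not details from the study.

    ```python
    import numpy as np

    def spectral_centroid(signal: np.ndarray, sample_rate: float) -> float:
        """Amplitude-weighted mean frequency of a signal's magnitude spectrum."""
        magnitude = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate)
        return float(np.sum(freqs * magnitude) / np.sum(magnitude))

    # Example with synthetic noise standing in for a recorded HVAC segment
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(48000)  # 1 s of white noise at 48 kHz
    print(f"centroid: {spectral_centroid(noise, 48000.0):.1f} Hz")
    ```

    Stretching a stimulus's spectral envelope upward raises this value, which is the independent variable the listening tests above relate to the cool/warm impression.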

  11. Auditory reafferences: The influence of real-time feedback on movement control

    Directory of Open Access Journals (Sweden)

    Christian eKennel

    2015-01-01

    Full Text Available Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with nonartificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  12. Auditory reafferences: the influence of real-time feedback on movement control.

    Science.gov (United States)

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  13. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Full Text Available Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject’s forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  14. Development of a wireless system for auditory neuroscience.

    Science.gov (United States)

    Lukes, A J; Lear, A T; Snider, R K

    2001-01-01

    In order to study how the auditory cortex extracts communication sounds in a realistic acoustic environment, a wireless system is being developed that will transmit acoustic as well as neural signals. The miniature transmitter will be capable of transmitting two acoustic signals with 37.5 kHz bandwidths (75 kHz sample rate) and 56 neural signals with bandwidths of 9.375 kHz (18.75 kHz sample rate). These signals will be time-division multiplexed into one high-bandwidth signal with a 1.2 MHz sample rate. This high-bandwidth signal will then be frequency modulated onto a 2.4 GHz carrier, which resides in the industrial, scientific, and medical (ISM) band that is designed for low-power short-range wireless applications. On the receiver side, the signal will be demodulated from the 2.4 GHz carrier and then digitized by an analog-to-digital (A/D) converter. The acoustic and neural signals will be digitally demultiplexed from the multiplexed signal into their respective channels. Oversampling (20 MHz) will allow the reconstruction of the multiplexing clock by a digital signal processor (DSP) that will perform frame and bit synchronization. A frame is a subset of the signal that contains all the channels, and several channels tied high and low will signal the start of a frame. This technological development will bring two benefits to auditory neuroscience. It will allow simultaneous recording of many neurons that will permit studies of population codes. It will also allow neural functions to be determined in higher auditory areas by correlating neural and acoustic signals without a priori knowledge of the necessary stimuli.
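
    The 1.2 MHz aggregate rate quoted above follows directly from the per-channel sample rates; the short check below reproduces the time-division-multiplexing arithmetic. The numbers are taken from the abstract, while the suggested frame layout (each acoustic channel sampled four times per frame) is an assumption that is merely consistent with those rates.

    ```python
    # Per-channel sample rates quoted in the abstract
    acoustic_channels, acoustic_rate = 2, 75_000   # Hz (37.5 kHz bandwidth each)
    neural_channels, neural_rate = 56, 18_750      # Hz (9.375 kHz bandwidth each)

    aggregate = acoustic_channels * acoustic_rate + neural_channels * neural_rate
    print(aggregate)  # 1200000 samples/s -> the 1.2 MHz multiplexed rate

    # An assumed frame layout: every neural channel appears once per frame while
    # each acoustic channel appears four times (75 kHz / 18.75 kHz = 4).
    print(acoustic_rate / neural_rate)  # 4.0
    ```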

  15. SALICYLATE INCREASES THE GAIN OF THE CENTRAL AUDITORY SYSTEM

    Science.gov (United States)

    Sun, W.; Lu, J.; Stolzberg, D.; Gray, L.; Deng, A.; Lobarinas, E.; Salvi, R. J.

    2009-01-01

    High doses of salicylate, the anti-inflammatory component of aspirin, induce transient tinnitus and hearing loss. Systemic injection of 250 mg/kg of salicylate, a dose that reliably induces tinnitus in rats, significantly reduced the sound evoked output of the rat cochlea. Paradoxically, salicylate significantly increased the amplitude of the sound-evoked field potential from the auditory cortex (AC) of conscious rats, but not the inferior colliculus (IC). When rats were anesthetized with isoflurane, which increases GABA-mediated inhibition, the salicylate-induced AC amplitude enhancement was abolished, whereas ketamine, which blocks N-methyl-d-aspartate receptors, further increased the salicylate-induced AC amplitude enhancement. Direct application of salicylate to the cochlea, however, reduced the response amplitude of the cochlea, IC and AC, suggesting the AC amplitude enhancement induced by systemic injection of salicylate does not originate from the cochlea. To identify a behavioral correlate of the salicylate-induced AC enhancement, the acoustic startle response was measured before and after salicylate treatment. Salicylate significantly increased the amplitude of the startle response. Collectively, these results suggest that high doses of salicylate increase the gain of the central auditory system, presumably by down-regulating GABA-mediated inhibition, leading to an exaggerated acoustic startle response. The enhanced startle response may be the behavioral correlate of hyperacusis that often accompanies tinnitus and hearing loss. Published by Elsevier Ltd on behalf of IBRO. PMID:19154777

  16. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  17. Neurofeedback-Based Enhancement of Single-Trial Auditory Evoked Potentials: Treatment of Auditory Verbal Hallucinations in Schizophrenia.

    Science.gov (United States)

    Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas

    2018-03-01

    Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component presumably reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre/post comparison (t(4) = 2.71, P = .054); however, no significant differences were found in specific hallucination-related symptoms (t(7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions (r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group (r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback

  18. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom
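
    Connectivity "between all channel-pairs" in resting-state fNIRS analyses is commonly summarized as a correlation matrix over the hemodynamic time series recorded during the baseline window. The sketch below is a generic illustration of that step with synthetic data and a hypothetical channel count; it is not the authors' processing pipeline.

    ```python
    import numpy as np

    def channel_pair_connectivity(hemo: np.ndarray) -> np.ndarray:
        """Pearson correlation between every pair of fNIRS channels.

        hemo: array of shape (n_channels, n_samples), e.g., HbO time series.
        Returns an (n_channels, n_channels) correlation matrix.
        """
        return np.corrcoef(hemo)

    # Synthetic example: 20 channels, a 60 s baseline sampled at 10 Hz
    rng = np.random.default_rng(1)
    baseline = rng.standard_normal((20, 600))
    connectivity = channel_pair_connectivity(baseline)
    print(connectivity.shape)  # (20, 20); entry [i, j] is the channel i-j coupling
    ```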

  19. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Full Text Available Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  20. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

    Full Text Available Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g., cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type

  1. Effects of asymmetry and learning on phonotaxis in a robot based on the lizard auditory system

    DEFF Research Database (Denmark)

    Zhang, L.; Hallam, J.; Christensen-Dalsgaard, J.

    2012-01-01

    Lizards have strong directional hearing across a broad band of frequencies. The directionality can be attributed to the acoustical properties of the ear, especially the strong acoustical coupling of the two eardrums. The peripheral auditory system of the lizard has previously been modeled...... and magnitude of their intrinsic bias. To attain effective directional hearing, the bias in the peripheral system should be compensated. In this article, with the peripheral models, we design a decision model and a behavior model, a virtual robot, to simulate the auditory system of the lizard in software...

  2. Assessment of Working Memory in Individuals With Stuttering in Comparison With Individuals With Normal Fluency

    Directory of Open Access Journals (Sweden)

    Aiswarya Liz Varghese

    2018-05-01

    Full Text Available It is common in the literature to relate stuttering to some other deficit that interferes with communicative functions. Working memory comprises the system of human memory dedicated to both the temporary storage of phonological detail and the allocation of cognitive resources necessary for forming lasting memories. In this study we analyzed the performance of individuals with stuttering on various working memory tasks. The aim of the study was to compare working memory abilities in individuals with stuttering and individuals with normal fluency on various working memory tasks. A total of 30 individuals with stuttering and 30 individuals with normal fluency in the age range of 18-40 years participated in the study. The working memory domain was assessed using the Manipal Manual for Cognitive Linguistic Abilities (MMCLA), which consists of auditory word retrieval, auditory letter and number recall, auditory word list recall, auditory delayed sentence recall, visual practice recall, visual letter and number recall, visual word list recall and visual delayed sentence recall. Results revealed that the individuals with normal fluency had superior performance compared to the individuals with stuttering. Hence, it is helpful to understand the involvement of working memory in stuttering and to incorporate working memory training along with conventional fluency therapy.

  3. Modeling speech imitation and ecological learning of auditory-motor maps

    Directory of Open Access Journals (Sweden)

    Claudia eCanevari

    2013-06-01

    Full Text Available Classical models of speech consider an antero-posterior distinction between perceptive and productive functions. However, the selective alteration of neural activity in speech motor centers, via transcranial magnetic stimulation, was shown to affect speech discrimination. On the automatic speech recognition (ASR) side, recognition systems have classically relied solely on acoustic data, achieving rather good performance in optimal listening conditions. The limitations of current ASR are mainly evident in the realistic use of such systems. These limitations can be partly reduced by using normalization strategies that minimize inter-speaker variability by either explicitly removing speakers’ peculiarities or adapting different speakers to a reference model. In this paper we aim at modeling a motor-based imitation learning mechanism in ASR. We tested the utility of a speaker normalization strategy that uses motor representations of speech and compared it with strategies that ignore the motor domain. Specifically, we first trained a regressor through state-of-the-art machine learning techniques to build an auditory-motor mapping, in a sense mimicking a human learner that tries to reproduce utterances produced by other speakers. This auditory-motor mapping maps the speech acoustics of a speaker into the motor plans of a reference speaker. Since, during recognition, only speech acoustics are available, the mapping is necessary to recover motor information. Subsequently, in a phone classification task, we tested the system on either one of the speakers that was used during training or a new one. Results show that in both cases the motor-based speaker normalization strategy almost always outperforms all other strategies where only acoustics is taken into account.
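
    At its core, the auditory-motor mapping described above is a multivariate regression from acoustic features onto the motor parameters of a reference speaker, so that motor information can be recovered from acoustics alone at recognition time. The sketch below illustrates that idea with ordinary least squares on synthetic, frame-aligned feature matrices; all names, dimensions and the linear form are placeholders rather than the authors' actual model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Placeholder data: acoustic features (e.g., cepstral-like vectors) and motor
    # parameters (e.g., articulatory trajectories of a reference speaker).
    n_frames, n_acoustic, n_motor = 1000, 13, 6
    acoustic = rng.standard_normal((n_frames, n_acoustic))
    true_map = rng.standard_normal((n_acoustic, n_motor))
    motor = acoustic @ true_map + 0.1 * rng.standard_normal((n_frames, n_motor))

    # Fit a linear auditory-to-motor map by least squares (bias column appended)
    X = np.hstack([acoustic, np.ones((n_frames, 1))])
    W, *_ = np.linalg.lstsq(X, motor, rcond=None)

    # At recognition time only acoustics are available: recover motor information
    new_acoustic = rng.standard_normal((5, n_acoustic))
    recovered_motor = np.hstack([new_acoustic, np.ones((5, 1))]) @ W
    print(recovered_motor.shape)  # (5, 6) recovered motor-parameter frames
    ```

    One plausible use, consistent with the abstract, is to feed the recovered motor representation to the phone classifier in place of (or alongside) the raw acoustics, which is the normalization the study compares against purely acoustic strategies.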

  4. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    Science.gov (United States)

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
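
    The "stochastic figure-ground" signal described above consists of random tone chords in which a small set of frequencies repeats coherently over time, forming the "figure" a listener must detect against the random background. The sketch below synthesizes a toy stimulus of that general kind; every parameter (frequency range, chord duration, component counts) is illustrative and not taken from the published stimulus specification.

    ```python
    import numpy as np

    def stochastic_figure_ground(n_chords=40, chord_dur=0.05, fs=16000,
                                 n_background=10, n_figure=4, seed=0):
        """Toy figure-ground stimulus: random tone chords plus a small set of
        frequencies repeated across chords that forms the 'figure'."""
        rng = np.random.default_rng(seed)
        freq_pool = np.geomspace(200.0, 7000.0, 120)      # candidate frequencies
        figure_freqs = rng.choice(freq_pool, n_figure, replace=False)
        t = np.arange(int(chord_dur * fs)) / fs
        chords = []
        for _ in range(n_chords):
            background = rng.choice(freq_pool, n_background, replace=False)
            freqs = np.concatenate([background, figure_freqs])  # figure repeats
            chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
            chords.append(chord / len(freqs))
        return np.concatenate(chords)

    stimulus = stochastic_figure_ground()
    print(stimulus.shape)  # samples of the synthesized chord sequence
    ```

    A detection trial would then present sequences with and without the repeated components and ask the listener (or app user) to report whether a figure was present.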

  5. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    Directory of Open Access Journals (Sweden)

    Sundeep Teki

    Full Text Available The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.

  6. Auditory cortical and hippocampal-system mismatch responses to duration deviants in urethane-anesthetized rats.

    Directory of Open Access Journals (Sweden)

    Timo Ruusuvirta

    Full Text Available Any change in the invariant aspects of the auditory environment is of potential importance. The human brain preattentively or automatically detects such changes. The mismatch negativity (MMN) of event-related potentials (ERPs) reflects this initial stage of auditory change detection. The origin of MMN is held to be cortical. The hippocampus is associated with a later generated P3a of ERPs reflecting involuntary attention switches towards auditory changes that are high in magnitude. The evidence for this cortico-hippocampal dichotomy is scarce, however. To shed further light on this issue, auditory cortical and hippocampal-system (CA1, dentate gyrus, subiculum) local-field potentials were recorded in urethane-anesthetized rats. A rare tone in duration (deviant) was interspersed with a repeated tone (standard). Two standard-to-standard (SSI) and standard-to-deviant (SDI) intervals (200 ms vs. 500 ms) were applied in different combinations to vary the observability of responses resembling MMN (mismatch responses). Mismatch responses were observed at 51.5-89 ms with the 500-ms SSI coupled with the 200-ms SDI but not with the three remaining combinations. Most importantly, the responses appeared in both the auditory-cortical and hippocampal locations. The findings suggest that the hippocampus may play a role in (cortical) manifestation of MMN.

  7. Speech-evoked auditory brainstem responses in children with hearing loss.

    Science.gov (United States)

    Koravand, Amineh; Al Osman, Rida; Rivest, Véronique; Poulin, Catherine

    2017-08-01

    The main objective of the present study was to investigate subcortical auditory processing in children with sensorineural hearing loss. Auditory Brainstem Responses (ABRs) were recorded using click and speech /da/ stimuli. Twenty-five children, aged 6-14 years, participated in the study: 13 with normal hearing acuity and 12 with sensorineural hearing loss. No significant differences were observed for the click-evoked ABRs between normal hearing and hearing-impaired groups. For the speech-evoked ABRs, no significant differences were found for the latencies of the following responses between the two groups: onset (V and A), transition (C), one of the steady-state waves (F), and offset (O). However, the latency of the steady-state waves (D and E) was significantly longer for the hearing-impaired compared to the normal hearing group. Furthermore, the amplitude of the offset wave O and of the envelope frequency response (EFR) of the speech-evoked ABRs was significantly larger for the hearing-impaired compared to the normal hearing group. Results obtained from the speech-evoked ABRs suggest that children with a mild to moderately-severe sensorineural hearing loss have a specific pattern of subcortical auditory processing. Our results show differences for the speech-evoked ABRs in normal hearing children compared to hearing-impaired children. These results add to the body of the literature on how children with hearing loss process speech at the brainstem level. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Auditory brainstem response latency in forward masking, a marker of sensory deficits in listeners with normal hearing thresholds

    DEFF Research Database (Denmark)

    Mehraei, Golbarg; Paredes Gallardo, Andreu; Shinn-Cunningham, Barbara G.

    2017-01-01

    In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low

  9. Estabilidade dos potenciais evocados auditivos em indivíduos adultos com audição normal [Stability of auditory evoked potentials in adults with normal hearing]

    Directory of Open Access Journals (Sweden)

    Carla Gentile Matas

    2011-03-01

    Full Text Available PURPOSE: To evaluate the stability of the parameters of auditory evoked potentials in normal adults. METHODS: Forty-nine normal subjects aged 18 to 40 years (25 females and 24 males) underwent audiological and electrophysiological hearing evaluation (auditory brainstem response - ABR, middle latency response - MLR, and cognitive potential - P300). Subjects were reassessed three months after the initial evaluation. RESULTS: Significant differences were observed between genders for the latencies of waves III and V and the I-III and I-V interpeak intervals of the ABR, and for the N2-P3 amplitude of the P300. No differences were found between the results of the initial and final assessments for the parameters of the ABR, MLR (Na and Pa latencies and Na-Pa amplitude) and P300 (P300 latency). CONCLUSION: Except for the N2-P3 amplitude, the parameters of the ABR, MLR and P300 were stable in normal adults after a period of three months.
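
    Test-retest stability of latency and amplitude measures such as these is typically checked with paired comparisons between the two sessions. The sketch below is a generic illustration of that analysis with synthetic wave V latencies; the values and the choice of a paired t-test are assumptions for illustration, not the study's actual statistics.

    ```python
    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(3)

    # Placeholder ABR wave V latencies (ms) for 49 adults at two sessions,
    # three months apart; the retest adds only small random variation.
    session_1 = rng.normal(loc=5.6, scale=0.2, size=49)
    session_2 = session_1 + rng.normal(loc=0.0, scale=0.05, size=49)

    t_stat, p_value = ttest_rel(session_1, session_2)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 is consistent with stability
    ```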

  10. Left and right reaction time differences to the sound intensity in normal and AD/HD children.

    Science.gov (United States)

    Baghdadi, Golnaz; Towhidkhah, Farzad; Rostami, Reza

    2017-06-01

    The right hemisphere, which is implicated in sound intensity discrimination, shows abnormalities in people with attention deficit/hyperactivity disorder (AD/HD). However, it has not been established whether this right-hemisphere deficit affects intensity sensation in AD/HD subjects. In this study, the sensitivity of normal and AD/HD children to sound intensity was investigated. Nineteen normal and fourteen AD/HD children participated in the study and performed a simple auditory reaction time task. Using regression analysis, the sensitivity of the right and left ears to various sound intensity levels was examined. The statistical results showed that the sensitivity of AD/HD subjects to intensity was significantly lower than that of the normal group. The left and right pathways of the auditory system had the same pattern of response in AD/HD subjects (p > 0.05). However, in the control group the left pathway was more sensitive to the sound intensity level than the right one (p = 0.0156). It is possible that the right-hemisphere deficit has influenced the auditory sensitivity of AD/HD children. Possible deficits in other auditory system components, such as the middle ear, inner ear, or the brainstem nuclei involved, may also contribute to the observed results. The development of new biomarkers based on the sensitivity of the brain hemispheres to sound intensity has been suggested to estimate the risk of AD/HD. Designing new techniques to correct auditory feedback in behavioral treatment sessions has also been proposed. Copyright © 2017. Published by Elsevier B.V.
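
    The regression analysis mentioned above amounts to fitting reaction time against sound intensity level separately for each ear (or pathway) and comparing the slopes, a steeper negative slope indicating greater sensitivity to intensity. The snippet below is a minimal illustration with made-up values; the levels, reaction times and resulting slopes are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    levels_db = np.array([40, 50, 60, 70, 80], dtype=float)  # presentation levels

    # Placeholder mean reaction times (ms); RT typically shortens as intensity rises
    rt_left = 420 - 1.8 * levels_db + rng.normal(0, 5, levels_db.size)
    rt_right = 430 - 1.1 * levels_db + rng.normal(0, 5, levels_db.size)

    slope_left, _ = np.polyfit(levels_db, rt_left, deg=1)
    slope_right, _ = np.polyfit(levels_db, rt_right, deg=1)
    print(f"left-ear slope:  {slope_left:.2f} ms/dB")
    print(f"right-ear slope: {slope_right:.2f} ms/dB")
    ```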

  11. Auditory Processing Testing: In the Booth versus Outside the Booth.

    Science.gov (United States)

    Lucker, Jay R

    2017-09-01

    Many audiologists believe that auditory processing testing must be carried out in a soundproof booth. This expectation is especially a problem in places such as elementary schools. Research comparing pure-tone thresholds obtained in sound booths with those obtained in quiet test environments outside of these booths does not support that belief. Auditory processing testing is generally carried out at above-threshold levels, and therefore may be even less likely to require a soundproof booth. The present study was carried out to compare test results in soundproof booths versus quiet rooms. The purpose of this study was to determine whether auditory processing tests can be administered in a quiet test room rather than in the soundproof test suite. Such outcomes would indicate that audiologists can provide auditory processing testing for children under various test conditions, including quiet rooms at their school. A battery of auditory processing tests was administered at a test level equivalent to 50 dB HL through headphones. The same equipment was used for testing in both locations. Twenty participants identified with normal hearing were included in this study, ten having no auditory processing concerns and ten exhibiting auditory processing problems. All participants underwent a battery of tests, both inside the test booth and outside the booth in a quiet room. Order of testing (inside versus outside) was counterbalanced. Participants were first determined to have normal hearing thresholds for tones and speech. Auditory processing tests were recorded and presented from an HP EliteBook laptop computer with noise-canceling headphones attached to a y-cord that not only presented the test stimuli to the participants but also allowed monitor headphones to be worn by the evaluator. The same equipment was used inside as well as outside the booth. No differences were found for any auditory processing measure as a function of the test setting or the order in which testing was done.

  12. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany ePlakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  13. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  14. The attenuation of auditory neglect by implicit cues.

    Science.gov (United States)

    Coleman, A Rand; Williams, J Michael

    2006-09-01

    This study examined the effects of implicit semantic and rhyming cues on the perception of auditory stimuli among nonaphasic participants who had suffered a lesion of the right cerebral hemisphere and exhibited auditory neglect of sound perceived by the left ear. Because language represents an elaborate processing of auditory stimuli and the language centers were intact among these patients, it was hypothesized that interactive verbal stimuli presented in a dichotic manner would attenuate neglect. The selected participants were administered an experimental dichotic listening test composed of six types of word pairs: unrelated words, synonyms, antonyms, categorically related words, compound words, and rhyming words. Presentation of word pairs that were semantically related resulted in a dramatic reduction of auditory neglect. Dichotic presentations of rhyming words exacerbated auditory neglect. These findings suggest that the perception of auditory information is strongly affected by the specific content conveyed by the auditory system. Language centers will process a degraded stimulus that contains salient language content. A degraded auditory stimulus is neglected if it is devoid of content that activates the language centers or other cognitive systems. In general, these findings suggest that auditory neglect involves a complex interaction of intact and impaired cerebral processing centers with content that is selectively processed by these centers.

  15. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    Science.gov (United States)

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Large-scale synchronized activity during vocal deviance detection in the zebra finch auditory forebrain.

    Science.gov (United States)

    Beckers, Gabriël J L; Gahr, Manfred

    2012-08-01

    Auditory systems bias responses to sounds that are unexpected on the basis of recent stimulus history, a phenomenon that has been widely studied using sequences of unmodulated tones (mismatch negativity; stimulus-specific adaptation). Such a paradigm, however, does not directly reflect problems that neural systems normally solve for adaptive behavior. We recorded multiunit responses in the caudomedial auditory forebrain of anesthetized zebra finches (Taeniopygia guttata) at 32 sites simultaneously, to contact calls that recur probabilistically at a rate that is used in communication. Neurons in secondary, but not primary, auditory areas respond preferentially to calls when they are unexpected (deviant) compared with the same calls when they are expected (standard). This response bias is predominantly due to sites more often not responding to standard events than to deviant events. When two call stimuli alternate between standard and deviant roles, most sites exhibit a response bias to deviant events of both stimuli. This suggests that biases are not based on a use-dependent decrease in response strength but involve a more complex mechanism that is sensitive to auditory deviance per se. Furthermore, between many secondary sites, responses are tightly synchronized, a phenomenon that is driven by internal neuronal interactions rather than by the timing of stimulus acoustic features. We hypothesize that this deviance-sensitive, internally synchronized network of neurons is involved in the involuntary capturing of attention by unexpected and behaviorally potentially relevant events in natural auditory scenes.

  17. Biological impact of music and software-based auditory training

    Science.gov (United States)

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals – both young and old – encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in noisy environments and reading, pointing to an intersection between hearing and cognition. Musical experience, amplification, and software-based training can improve these biological signals. These findings of biological plasticity, in a variety of subject populations, relate to attention and auditory memory, and represent an integrated auditory system influenced by both sensation and cognition. Learning outcomes The reader will (1) understand that the auditory system is malleable to experience and training, (2) learn the ingredients necessary for auditory learning to successfully be applied to communication, (3) learn that the auditory brainstem response to complex sounds (cABR) is a window into the integrated auditory system, and (4) see examples of how cABR can be used to track the outcome of experience and training. PMID:22789822

  18. The effect of noise exposure during the developmental period on the function of the auditory system.

    Science.gov (United States)

    Bureš, Zbyněk; Popelář, Jiří; Syka, Josef

    2017-09-01

    Recently, there has been growing evidence that development and maturation of the auditory system depends substantially on the afferent activity supplying inputs to the developing centers. In cases when this activity is altered during early ontogeny as a consequence of, e.g., an unnatural acoustic environment or acoustic trauma, the structure and function of the auditory system may be severely affected. Pathological alterations may be found in populations of ribbon synapses of the inner hair cells, in the structure and function of neuronal circuits, or in auditory driven behavioral and psychophysical performance. Three characteristics of the developmental impairment are of key importance: first, they often persist to adulthood, permanently influencing the quality of life of the subject; second, their manifestations are different and sometimes even contradictory to the impairments induced by noise trauma in adulthood; third, they may be 'hidden' and difficult to diagnose by standard audiometric procedures used in clinical practice. This paper reviews the effects of early interventions to the auditory system, in particular, of sound exposure during ontogeny. We summarize the results of recent morphological, electrophysiological, and behavioral experiments, discuss the putative mechanisms and hypotheses, and draw possible consequences for human neonatal medicine and noise health. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. The effect of viewing speech on auditory speech processing is different in the left and right hemispheres.

    Science.gov (United States)

    Davis, Chris; Kislyuk, Daniel; Kim, Jeesun; Sams, Mikko

    2008-11-25

    We used whole-head magnetoencephalography (MEG) to record changes in neuromagnetic N100m responses generated in the left and right auditory cortex as a function of the match between visual and auditory speech signals. Stimuli were auditory-only (AO) and auditory-visual (AV) presentations of /pi/, /ti/ and /vi/. Three types of intensity-matched auditory stimuli were used: intact speech (Normal), frequency band filtered speech (Band) and speech-shaped white noise (Noise). The behavioural task was to detect the /vi/ syllables, which comprised 12% of stimuli. N100m responses were measured to averaged /pi/ and /ti/ stimuli. Behavioural data showed that identification of the stimuli was faster and more accurate for Normal than for Band stimuli, and for Band than for Noise stimuli. Reaction times were faster for AV than AO stimuli. MEG data showed that in the left hemisphere, N100m to both AO and AV stimuli was largest for the Normal, smaller for Band and smallest for Noise stimuli. In the right hemisphere, Normal and Band AO stimuli elicited N100m responses of quite similar amplitudes, but N100m amplitude to Noise was about half of that. There was a reduction in N100m for the AV compared to the AO conditions. The size of this reduction for each stimulus type was the same in the left hemisphere but graded in the right (being largest for the Normal, smaller for the Band and smallest for the Noise stimuli). The N100m decrease for the Normal stimuli was significantly larger in the right than in the left hemisphere. We suggest that the effect of processing visual speech seen in the right hemisphere likely reflects suppression of the auditory response based on AV cues for place of articulation.

  20. Auditory and Visual Electrophysiology of Deaf Children with Cochlear Implants: Implications for Cross-modal Plasticity.

    Science.gov (United States)

    Corina, David P; Blau, Shane; LaMarr, Todd; Lawyer, Laurel A; Coffey-Corina, Sharon

    2017-01-01

    Deaf children who receive a cochlear implant early in life and engage in intensive oral/aural therapy often make great strides in spoken language acquisition. However, despite clinicians' best efforts, there is a great deal of variability in language outcomes. One concern is that cortical regions which normally support auditory processing may become reorganized for visual function, leaving fewer available resources for auditory language acquisition. The conditions under which these changes occur are not well understood, but we may begin investigating this phenomenon by looking for interactions between auditory and visual evoked cortical potentials in deaf children. If children with abnormal auditory responses show increased sensitivity to visual stimuli, this may indicate the presence of maladaptive cortical plasticity. We recorded evoked potentials, using both auditory and visual paradigms, from 25 typical hearing children and 26 deaf children (ages 2-8 years) with cochlear implants. An auditory oddball paradigm was used (85% /ba/ syllables vs. 15% frequency modulated tone sweeps) to elicit an auditory P1 component. Visual evoked potentials (VEPs) were recorded during presentation of an intermittent peripheral radial checkerboard while children watched a silent cartoon, eliciting a P1-N1 response. We observed reduced auditory P1 amplitudes and a lack of latency shift associated with normative aging in our deaf sample. We also observed shorter latencies in N1 VEPs to visual stimulus offset in deaf participants. While these data demonstrate cortical changes associated with auditory deprivation, we did not find evidence for a relationship between cortical auditory evoked potentials and the VEPs. This is consistent with descriptions of intra-modal plasticity within visual systems of deaf children, but does not provide evidence for cross-modal plasticity. In addition, we note that sign language experience had no effect on deaf children's early auditory and visual ERP

  1. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can induce profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD, including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  2. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  3. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard

    2010-03-01

    A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate to cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing. Copyright © 2009 Elsevier B.V. All rights reserved.

  4. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  5. Automatic hearing loss detection system based on auditory brainstem response

    International Nuclear Information System (INIS)

    Aldonate, J; Mercuri, C; Reta, J; Biurrun, J; Bonell, C; Gentiletti, G; Escobar, S; Acevedo, R

    2007-01-01

    Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns is presented. Preliminary evaluation in adults was satisfactory.

  6. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    OpenAIRE

    Zahra Shahidipour; Ahmad Geshani; Zahra Jafari; Shohreh Jalaie; Elham Khosravifard

    2014-01-01

    Background and Aim: Memory is one of the aspects of cognitive function which is widely affected among aged people. Since aging has different effects on different memory systems and few studies have investigated auditory-verbal memory function in older adults using dichotic listening techniques, the purpose of this study was to evaluate auditory-verbal memory function among old people using the Persian version of the dichotic auditory-verbal memory test. Methods: The Persian version of dic...

  7. A comparison of anesthetic agents and their effects on the response properties of the peripheral auditory system.

    Science.gov (United States)

    Dodd, F; Capranica, R R

    1992-10-01

    Anesthetic agents were compared in order to identify the most appropriate agent for use during surgery and electrophysiological recordings in the auditory system of the tokay gecko (Gekko gecko). Each agent was first screened for anesthetic and analgesic properties and, if found satisfactory, it was subsequently tested in electrophysiological recordings in the auditory nerve. The following anesthetic agents fulfilled our criteria and were selected for further screening: sodium pentobarbital (60 mg/kg); sodium pentobarbital (30 mg/kg) and oxymorphone (1 mg/kg); 3.2% isoflurane; ketamine (440 mg/kg) and oxymorphone (1 mg/kg). These agents were subsequently compared on the basis of their effect on standard response properties of auditory nerve fibers. Our results verified that different anesthetic agents can have significant effects on most of the parameters commonly used in describing the basic response properties of the auditory system in vertebrates. We therefore conclude from this study that the selection of an appropriate experimental protocol is critical and must take into consideration the effects of anesthesia on auditory responsiveness. In the tokay gecko, we recommend 3.2% isoflurane for general surgical procedures; and for electrophysiological recordings in the eighth nerve we recommend barbiturate anesthesia of appropriate dosage in combination if possible with an opioid agent to provide additional analgesic action.

  8. Auditory Reserve and the Legacy of Auditory Experience

    Directory of Open Access Journals (Sweden)

    Erika Skoe

    2014-11-01

    Full Text Available Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function.

  9. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Cochlear Damage Affects Neurotransmitter Chemistry in the Central Auditory System

    Directory of Open Access Journals (Sweden)

    Donald Albert Godfrey

    2014-11-01

    Full Text Available Tinnitus, the perception of a monotonous sound not actually present in the environment, affects nearly 20% of the population of the United States. Although there has been great progress in tinnitus research over the past 25 years, the neurochemical basis of tinnitus is still poorly understood. We review current research about the effects of various types of cochlear damage on the neurotransmitter chemistry in the central auditory system and document evidence that different changes in this chemistry can underlie similar behaviorally measured tinnitus symptoms. Most available data have been obtained from rodents following cochlear damage produced by cochlear ablation, loud sound, or ototoxic drugs. Effects on neurotransmitter systems have been measured as changes in neurotransmitter level, synthesis, release, uptake, and receptors. In this review, magnitudes of changes are presented for neurotransmitter-related amino acids, acetylcholine, and serotonin. A variety of effects have been found in these studies that may be related to animal model, survival time, type of cochlear damage, or methodology. The overall impression from the evidence presented is that any imbalance of neurotransmitter-related chemistry could disrupt auditory processing in such a way as to produce tinnitus.

  11. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    Science.gov (United States)

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successively reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self

  12. Neural Hyperactivity of the Central Auditory System in Response to Peripheral Damage

    Directory of Open Access Journals (Sweden)

    Yi Zhao

    2016-01-01

    Full Text Available It is increasingly appreciated that cochlear pathology is accompanied by adaptive responses in the central auditory system. The cause of cochlear pathology varies widely, and it seems that few commonalities can be drawn. In fact, despite intricate internal neuroplasticity and diverse external symptoms, several classical injury models provide a feasible path to locate responses to different peripheral cochlear lesions. In these cases, hair cell damage may lead to considerable hyperactivity in the central auditory pathways, mediated by a reduction in inhibition, which may underlie some clinical symptoms associated with hearing loss, such as tinnitus. Homeostatic plasticity, the most discussed and acknowledged mechanism in recent years, is most likely responsible for excited central activity following cochlear damage.

  13. The development of auditory skills in young children with Mondini dysplasia after cochlear implantation.

    Directory of Open Access Journals (Sweden)

    Xueqing Chen

    Full Text Available The aim of this study is to survey and compare the development of auditory skills in young children with Mondini dysplasia and profoundly-deaf young children with radiologically normal inner ears over a period of 3 years after cochlear implantation. A total of 545 young children (age 7 to 36 months) with prelingual, severe to profound hearing loss participated in this study. All children received cochlear implantation. Based on whether or not there was a Mondini dysplasia as diagnosed with CT scanning, the subjects were divided into 2 groups: (A) 514 young children with radiologically normal inner ears and (B) 31 young children with Mondini dysplasia. The Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) was used to assess the children's auditory skills that include vocalization changes, spontaneous alerting to sounds in everyday living environments, and the ability to derive meaning from sounds. The assessment was performed prior to surgery and at 1, 3, 6, 9, 12, 24, and 36 months after implant device switch-on. The mean scores for overall auditory skills were not significantly different between groups A and B at pre-surgery, 1, 12, 24, and 36 months post-surgery, but were significantly different at 3, 6, and 9 months post-surgery. The mean scores for all auditory skills in children with Mondini dysplasia showed significant improvement over time. The mean scores for the three subcategories of auditory skills in children with Mondini dysplasia also showed significant differences at pre-surgery, 1, 3, 6, and 9 months; however, there were no significant differences at 12, 24, and 36 months. Overall, the auditory skills of young children with Mondini dysplasia developed rapidly after cochlear implantation, in a similar manner to that of young children with radiologically normal inner ears. Cochlear implantation is an effective intervention for young children with Mondini dysplasia.

  14. Increased intensity discrimination thresholds in tinnitus subjects with a normal audiogram

    DEFF Research Database (Denmark)

    Epp, Bastian; Hots, J.; Verhey, J. L.

    2012-01-01

    Recent auditory brain stem response measurements in tinnitus subjects with normal audiograms indicate the presence of hidden hearing loss that manifests as reduced neural output from the cochlea at high sound intensities, and results from mice suggest a link to deafferentation of auditory nerve fibers. As deafferentation would lead to deficits in hearing performance, the present study investigates whether tinnitus patients with normal hearing thresholds show impairment in intensity discrimination compared to an audiometrically matched control group. Intensity discrimination thresholds were significantly increased in the tinnitus frequency range, consistent with the hypothesis that auditory nerve fiber deafferentation is associated with tinnitus.

  15. Influence of memory, attention, IQ and age on auditory temporal processing tests: preliminary study

    OpenAIRE

    Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane

    2014-01-01

    PURPOSE: To investigate the existence of correlations between the performance of children in auditory temporal tests (Frequency Pattern and Gaps in Noise - GIN) and IQ, attention, memory and age measurements. METHOD: Fifteen typically developing individuals between the ages of 7 and 12 years with normal hearing participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern), as well as a Memory test (Digit Span), Attention tests (auditory and visual modality) and ...

  16. Musical experience, auditory perception and reading-related skills in children.

    Science.gov (United States)

    Banai, Karen; Ahissar, Merav

    2013-01-01

    The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case.

  17. Musical experience, auditory perception and reading-related skills in children.

    Directory of Open Access Journals (Sweden)

    Karen Banai

    Full Text Available BACKGROUND: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. METHODOLOGY/PRINCIPAL FINDINGS: Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. CONCLUSIONS/SIGNIFICANCE: Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case.

  18. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand the way in which some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system and music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  19. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  20. An assessment of auditory-guided locomotion in an obstacle circumvention task.

    Science.gov (United States)

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2016-06-01

    This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision without generating sound was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and obstacle. Unlike visual guidance, participants did not always walk around the side that afforded the most space during auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.

  1. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    Science.gov (United States)

    Lee, Wendy

    The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominantly focused on visual representations and extractions of information with little focus on sounds. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panoramic visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

  2. Auditory processing during deep propofol sedation and recovery from unconsciousness.

    Science.gov (United States)

    Koelsch, Stefan; Heinke, Wolfgang; Sammler, Daniela; Olthoff, Derk

    2006-08-01

    Using evoked potentials, this study investigated the effects of deep propofol sedation, and the effects of recovery from unconsciousness, on the processing of auditory information with stimuli suited to elicit a physical MMN and a (music-syntactic) ERAN. Levels of sedation were assessed using the Bispectral Index (BIS) and the Modified Observer's Assessment of Alertness and Sedation Scale (MOAAS). EEG measurements were performed during wakefulness, deep propofol sedation (MOAAS 2-3, mean BIS=68), and a recovery period. Between deep sedation and the recovery period, the infusion rate of propofol was increased to achieve unconsciousness (MOAAS 0-1, mean BIS=35); EEG measurements of the recovery period were performed after subjects regained consciousness. During deep sedation, the physical MMN was markedly reduced, but still significant. No ERAN was observed at this level. A clear P3a was elicited during deep sedation by those deviants which were task-relevant during the awake state. As soon as subjects regained consciousness during the recovery period, a normal MMN was elicited. By contrast, the P3a was absent in the recovery period, and the P3b was markedly reduced. Results indicate that auditory sensory memory (as indexed by the physical MMN) is still active, although strongly reduced, during deep sedation (MOAAS 2-3). The presence of the P3a indicates that attention-related processes are still operating at this level. Processes of syntactic analysis appear to be abolished during deep sedation. After propofol-induced anesthesia, auditory sensory memory appears to operate normally as soon as subjects regain consciousness, whereas the attention-related processes indexed by P3a and P3b are markedly impaired. The results inform about the effects of sedative drugs on auditory and attention-related mechanisms. The findings are important because these mechanisms are prerequisites for auditory awareness, auditory learning and memory, as well as language perception during anesthesia.

  3. Auditory evoked potentials in patients with major depressive disorder measured by Emotiv system.

    Science.gov (United States)

    Wang, Dongcui; Mo, Fongming; Zhang, Yangde; Yang, Chao; Liu, Jun; Chen, Zhencheng; Zhao, Jinfeng

    2015-01-01

    In a previous study (unpublished), the Emotiv headset was validated for capturing event-related potentials (ERPs) from normal subjects. In the present follow-up study, the signal quality of the Emotiv headset was tested by the accuracy rate of discriminating Major Depressive Disorder (MDD) patients from normal subjects. ERPs of 22 MDD patients and 15 normal subjects were elicited by an auditory oddball task, and the amplitudes of the N1, N2 and P3 ERP components were specifically analyzed. The features of the ERPs were statistically investigated. It was found that the Emotiv headset is capable of discriminating the abnormal N1, N2 and P3 components in MDD patients. The Relief-F algorithm was applied to all features for feature selection. The selected features were then input to a linear discriminant analysis (LDA) classifier with leave-one-out cross-validation to characterize the ERP features of MDD. The 127 possible combinations of the selected 7 ERP features were classified using LDA. The best classification accuracy achieved was 89.66%. These results suggest that MDD patients are identifiable from normal subjects by ERPs measured with the Emotiv headset.
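
    As a rough illustration of the analysis pipeline summarized above (feature selection followed by an LDA classifier evaluated with leave-one-out cross-validation), the sketch below uses scikit-learn on synthetic, made-up feature values. Relief-F is not part of scikit-learn (packages such as skrebate provide it), so a univariate ANOVA selector stands in for it here; the group sizes simply mirror the description above.

```python
# Minimal sketch, not the study's code: feature selection + LDA with
# leave-one-out cross-validation on hypothetical ERP features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# 37 subjects (22 MDD, 15 controls) x 7 ERP features (e.g. N1/N2/P3 measures)
X = rng.normal(size=(37, 7))
y = np.array([1] * 22 + [0] * 15)
X[y == 1, :3] += 0.8   # make a few features weakly discriminative

# ANOVA F-test selector stands in for Relief-F (see skrebate.ReliefF)
clf = make_pipeline(SelectKBest(f_classif, k=4), LinearDiscriminantAnalysis())
accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {accuracy:.2%}")
```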

  4. At the interface of the auditory and vocal motor systems: NIf and its role in vocal processing, production and learning.

    Science.gov (United States)

    Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H R; Schmidt, Marc

    2013-06-01

    Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans. Copyright © 2013 Elsevier

  5. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  6. Assessment of hearing threshold in adults with hearing loss using an automated system of cortical auditory evoked potential detection

    Directory of Open Access Journals (Sweden)

    Alessandra Spada Durante

    Full Text Available Introduction: The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum, autism, and intellectual deficits, and in adults and the elderly with dementia. These populations (or individuals) are unable to undergo a behavioral assessment, and generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. Objective: To determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. Methods: The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group) and 31 adults with normal hearing (control group). An automated system of detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. Results: The cortical electrophysiological threshold was, on average, 7.8 dB higher than the behavioral threshold.

  7. Assessment of hearing threshold in adults with hearing loss using an automated system of cortical auditory evoked potential detection.

    Science.gov (United States)

    Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia

    The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum, autism, and intellectual deficits, and in adults and the elderly with dementia. These populations (or individuals) are unable to undergo a behavioral assessment, and generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. To determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group); and 31 adults with normal hearing (control group). An automated system of detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. The cortical electrophysiological threshold was, on average, 7.8 dB higher than the behavioral threshold.
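
    For readers unfamiliar with the final analysis step, the sketch below shows how cortical electrophysiological thresholds can be related to behavioral thresholds with simple linear regression. The threshold values are invented for illustration and are not the study's data.

```python
# Minimal sketch with made-up thresholds: relate cortical to behavioral
# thresholds via linear regression and report the mean offset.
import numpy as np
from scipy import stats

behavioral = np.array([20, 25, 35, 40, 55, 60, 70, 80], dtype=float)   # dB HL
offsets = np.random.default_rng(0).normal(loc=8.0, scale=3.0, size=behavioral.size)
cortical = behavioral + offsets                                         # dB HL

fit = stats.linregress(behavioral, cortical)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.1f} dB, r = {fit.rvalue:.2f}")
print(f"mean cortical - behavioral difference = {np.mean(cortical - behavioral):.1f} dB")
```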

  8. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision, and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Mean percentage error was 11.5% (SD ± 7.0%) for visual cues and 12.9% (SD ± 11.8%) for auditory cues. Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, the mean accuracy of subjects approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.
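
    The auditory cue itself is easy to picture from the description above: the sketch below generates a 950 Hz tone whose duration encodes the target step duration, shaped by a bell-shaped (Hann) gain envelope. The sample rate and baseline step duration are assumptions chosen only for illustration.

```python
# Minimal sketch of a step-duration cue: a 950 Hz tone with a bell-shaped
# (Hann) amplitude envelope whose length encodes the target duration.
import numpy as np

def step_duration_cue(duration_s, fs=44100, freq=950.0):
    """Return a tone of the given duration with a Hann gain envelope."""
    t = np.arange(int(duration_s * fs)) / fs
    envelope = np.hanning(t.size)          # bell-shaped gain envelope
    return envelope * np.sin(2 * np.pi * freq * t)

baseline = 0.60                            # assumed baseline step duration (s)
cue = step_duration_cue(baseline * 1.25)   # training target: baseline + 25%
print(f"{cue.size} samples, peak amplitude {cue.max():.2f}")
```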

  9. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent manner.
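
    The temporal-coherence idea can be made concrete with a small numpy sketch (not the thesis's actual model): channels whose envelopes fluctuate coherently over time are grouped into one stream, while temporally incoherent channels are assigned to separate streams. All signals and parameters below are illustrative.

```python
# Toy sketch of temporal coherence: correlate channel envelopes of an
# alternating two-tone (ABAB) sequence and group coherent channels.
import numpy as np

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
gate_a = (np.floor(t / 0.125) % 2 == 0).astype(float)   # A tones on, B off
gate_b = 1.0 - gate_a
channels = {
    "500 Hz":  gate_a * np.sin(2 * np.pi * 500 * t),
    "520 Hz":  gate_a * np.sin(2 * np.pi * 520 * t),     # coherent with 500 Hz
    "2000 Hz": gate_b * np.sin(2 * np.pi * 2000 * t),    # incoherent with both
}

def envelope(x, win=400):
    """Crude envelope: moving average of the rectified signal (~50 ms)."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

names = list(channels)
coherence = np.corrcoef([envelope(x) for x in channels.values()])
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        verdict = "same stream" if coherence[i, j] > 0.5 else "separate streams"
        print(f"{names[i]} vs {names[j]}: r = {coherence[i, j]:+.2f} -> {verdict}")
```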

  10. Diagnosing Dyslexia: The Screening of Auditory Laterality.

    Science.gov (United States)

    Johansen, Kjeld

    A study investigated whether a correlation exists between the degree and nature of left-brain laterality and specific reading and spelling difficulties. Subjects, 50 normal readers and 50 reading disabled persons native to the island of Bornholm, had their auditory laterality screened using pure-tone audiometry and dichotic listening. Results…

  11. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.

    Science.gov (United States)

    Bernstein, Lynne E; Eberhardt, Silvio P; Auer, Edward T

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We

  12. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by

  13. Adipose-derived stromal cells enhance auditory neuron survival in an animal model of sensory hearing loss.

    Science.gov (United States)

    Schendzielorz, Philipp; Vollmer, Maike; Rak, Kristen; Wiegner, Armin; Nada, Nashwa; Radeloff, Katrin; Hagen, Rudolf; Radeloff, Andreas

    2017-10-01

    A cochlear implant (CI) is an electronic prosthesis that can partially restore speech perception capabilities. Optimum information transfer from the cochlea to the central auditory system requires a proper functioning auditory nerve (AN) that is electrically stimulated by the device. In deafness, the lack of neurotrophic support, normally provided by the sensory cells of the inner ear, however, leads to gradual degeneration of auditory neurons with undesirable consequences for CI performance. We evaluated the potential of adipose-derived stromal cells (ASCs) that are known to produce neurotrophic factors to prevent neural degeneration in sensory hearing loss. For this, co-cultures of ASCs with auditory neurons have been studied, and autologous ASC transplantation has been performed in a guinea pig model of gentamicin-induced sensory hearing loss. In vitro ASCs were neuroprotective and considerably increased the neuritogenesis of auditory neurons. In vivo transplantation of ASCs into the scala tympani resulted in an enhanced survival of auditory neurons. Specifically, peripheral AN processes that are assumed to be the optimal activation site for CI stimulation and that are particularly vulnerable to hair cell loss showed a significantly higher survival rate in ASC-treated ears. ASC transplantation into the inner ear may restore neurotrophic support in sensory hearing loss and may help to improve CI performance by enhanced AN survival. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  14. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
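
    As a concrete reading of the two internal representations described above, the sketch below computes a monaural autocorrelation function (temporal sensations such as pitch) and an interaural cross-correlation function (spatial sensations) for a synthetic stereo signal. The signal and its parameters are assumptions for illustration, not material from the book.

```python
# Minimal sketch: autocorrelation (temporal sensations) and interaural
# cross-correlation (spatial sensations) of a synthetic binaural tone.
import numpy as np

fs = 16000                                       # sample rate (Hz), assumed
t = np.arange(0, 0.1, 1 / fs)
left = np.sin(2 * np.pi * 440 * t)               # 440 Hz tone at the left ear
right = np.roll(left, int(0.0004 * fs))          # ~0.4 ms interaural delay

def autocorrelation(x, max_lag):
    """Normalized autocorrelation up to max_lag samples."""
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")[x.size - 1:]
    return full[:max_lag] / full[0]

def interaural_crosscorrelation(left, right, max_lag):
    """Cross-correlation within +/- max_lag samples (about +/- 1 ms here)."""
    l, r = left - left.mean(), right - right.mean()
    norm = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
    lags = np.arange(-max_lag, max_lag + 1)
    vals = np.array([np.sum(l * np.roll(r, k)) for k in lags]) / norm
    return lags, vals

acf = autocorrelation(left, max_lag=int(0.01 * fs))                     # 10 ms of lags
lags, iacf = interaural_crosscorrelation(left, right, int(0.001 * fs))  # +/- 1 ms

print("ACF peak (pitch-related) at lag", np.argmax(acf[1:]) + 1, "samples")
print(f"IACC = {iacf.max():.2f} at lag {lags[np.argmax(iacf)]} samples")
```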

  15. Auditory phonological priming in children and adults during word repetition

    Science.gov (United States)

    Cleary, Miranda; Schwartz, Richard G.

    2004-05-01

    Short-term auditory phonological priming effects involve changes in the speed with which words are processed by a listener as a function of recent exposure to other similar-sounding words. Activation of phonological/lexical representations appears to persist beyond the immediate offset of a word, influencing subsequent processing. Priming effects are commonly cited as demonstrating concurrent activation of word/phonological candidates during word identification. Phonological priming is controversial, the direction of effects (facilitating versus slowing) varying with the prime-target relationship. In adults, it has repeatedly been demonstrated, however, that hearing a prime word that rhymes with the following target word (ISI=50 ms) decreases the time necessary to initiate repetition of the target, relative to when the prime and target have no phonemic overlap. Activation of phonological representations in children has not typically been studied using this paradigm, auditory-word + picture-naming tasks being used instead. The present study employed an auditory phonological priming paradigm being developed for use with normal-hearing and hearing-impaired children. Initial results from normal-hearing adults replicate previous reports of faster naming times for targets following a rhyming prime word than for targets following a prime having no phonemes in common. Results from normal-hearing children will also be reported. [Work supported by NIH-NIDCD T32DC000039.]

  16. Experience and information loss in auditory and visual memory.

    Science.gov (United States)

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  17. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users.

    Science.gov (United States)

    Harris, Jill; Kamke, Marc R

    2014-11-01

    Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Association between language development and auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha-Muniz

    2014-06-01

    Full Text Available INTRODUCTION: It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. AIM: To investigate performance and lateralization effects in auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). MATERIAL AND METHODS: Prospective study. Seventy-five children, aged 6-12 years, were separated into three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All went through the following tests: speech-in-noise test, Dichotic Digit test, and Pitch Pattern Sequencing test. RESULTS: Lateralization effects were observed only in the SLI group, with the left ear presenting much lower scores than the right ear. The inter-group analysis showed that, in all tests, children from the APD and SLI groups had significantly poorer performance than the TD group. Moreover, the SLI group presented worse results than the APD group. CONCLUSION: This study showed, in children with SLI, inefficient processing of essential sound components and a lateralization effect. These findings may indicate that the neural processes required for auditory processing are different between auditory processing and speech disorders.

  19. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene.

  20. The role of auditory temporal cues in the fluency of stuttering adults

    OpenAIRE

    Furini, Juliana; Picoloto, Luana Altran; Marconato, Eduarda; Bohnen, Anelise Junqueira; Cardoso, Ana Claudia Vieira; Oliveira, Cristiane Moço Canhetti de

    2017-01-01

    ABSTRACT Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering in non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with Stuttering (Research Group - RG), and 15 without stuttering (Control Group - CG). The procedures were: audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (100 milliseconds dela...

  1. Auditory attention: time of day and type of school

    Directory of Open Access Journals (Sweden)

    Picolini, Mirela Machado

    2010-06-01

    Full Text Available Introduction: Sustained auditory attention is crucial for the development of several communication and learning skills. Objective: To evaluate the effect of the time of day and the type of school attended by children on their ability to sustain auditory attention. Method: We performed a prospective study of 50 volunteer children of both sexes, aged 7 years, with normal hearing, no learning or behavioral problems, and no complaints regarding attention. These participants underwent the Sustained Auditory Attention Ability Test (SAAAT). Performance was evaluated by the total score and the vigilance decrement. Statistical analysis used analysis of variance (ANOVA) with a significance level of 5% (p<0.05). Results: Comparison with the normative values for the age group evaluated showed statistically significant differences for errors of inattention (p=0.041, p=0.027) and the total error score (p=0.033, p=0.024) across assessment periods and school types, respectively. Conclusion: Children evaluated in the afternoon and children studying in public schools had poorer performance on sustained auditory attention.

  2. Impact of Aging on the Auditory System and Related Cognitive Functions: A Narrative Review

    Directory of Open Access Journals (Sweden)

    Dona M. P. Jayakody

    2018-03-01

    Full Text Available Age-related hearing loss (ARHL), or presbycusis, is a chronic health condition that affects approximately one-third of the world's population. The peripheral and central hearing alterations associated with age-related hearing loss have a profound impact on perception of verbal and non-verbal auditory stimuli. The high prevalence of hearing loss in older adults corresponds to the increased frequency of dementia in this population. Therefore, researchers have focused their attention on age-related central effects that occur independently of peripheral hearing loss, as well as on the central effects of peripheral hearing loss and its association with cognitive decline and dementia. Here we review the current evidence for age-related changes of the peripheral and central auditory system and the relationship between hearing loss and pathological cognitive decline and dementia. Furthermore, there is a paucity of evidence on the relationship between ARHL and established biomarkers of Alzheimer's disease, the most common cause of dementia; such studies are critical for considering any causal relationship between dementia and ARHL. While this narrative review examines the pathophysiological alterations in both the peripheral and central auditory system and their clinical implications, the question remains unanswered whether hearing loss causes cognitive impairment or vice versa.

  3. Beneficial auditory and cognitive effects of auditory brainstem implantation in children.

    Science.gov (United States)

    Colletti, Liliana

    2007-09-01

    This preliminary study demonstrates the development of hearing ability and shows that there is a significant improvement in some cognitive parameters related to selective visual/spatial attention and to fluid or multisensory reasoning in children fitted with an auditory brainstem implant (ABI). The improvement in cognitive parameters is due to several factors, among which there is certainly, as demonstrated in the literature on cochlear implants (CIs), the activation of the auditory sensory channel, which was previously absent. The findings of the present study indicate that children with cochlear or cochlear nerve abnormalities and associated cognitive deficits should not be excluded from ABI implantation. The indications for ABI have been extended over the last 10 years to adults with non-tumoral (NT) cochlear or cochlear nerve abnormalities who cannot benefit from a CI. We demonstrated that the ABI with surface electrodes may provide sufficient stimulation of the central auditory system in adults for open-set speech recognition. These favourable results motivated us to extend ABI indications to children with profound hearing loss who were not candidates for a CI. This study investigated the performance of young deaf children undergoing ABI, in terms of their auditory perceptual development and their non-verbal cognitive abilities. In our department, from 2000 to 2006, 24 children aged 14 months to 16 years received an ABI for different tumour and non-tumour diseases. Two children had NF2 tumours. Eighteen children had bilateral cochlear nerve aplasia. In this group, nine children had associated cochlear malformations, two had unilateral facial nerve agenesia, and two had combined microtia, aural atresia and middle ear malformations. Four of these children had previously been fitted elsewhere with a CI with no auditory results. One child had bilateral incomplete cochlear partition (type II); one child, who had previously been fitted unsuccessfully elsewhere

  4. Effect of background music on auditory-verbal memory performance

    Directory of Open Access Journals (Sweden)

    Sona Matloubi

    2014-12-01

    Full Text Available Background and Aim: Music exists in all cultures; many scientists are seeking to understand how music affects cognitive development, such as comprehension, memory, and reading skills. More recently, a considerable number of neuroscience studies on music have been conducted. This study aimed to investigate the effects of null and positive background music, in comparison with silence, on auditory-verbal memory performance. Methods: Forty young adults (male and female) with normal hearing, aged between 18 and 26, participated in this comparative-analysis study. An auditory and speech evaluation was conducted in order to investigate the effects of background music on working memory. Subsequently, the Rey auditory-verbal learning test was performed for three conditions: silence, positive music, and null music. Results: The mean score of the Rey auditory-verbal learning test in the silence condition was higher than in the positive music condition (p=0.003) and the null music condition (p=0.01). The test results did not reveal any gender differences. Conclusion: It seems that the presence of competing music (positive and null music) and the orientation of auditory attention have negative effects on the performance of verbal working memory, possibly owing to the interference of music with verbal information processing in the brain.

  5. Effects of exposure to 2100 MHz GSM-like radiofrequency electromagnetic field on auditory system of rats

    Directory of Open Access Journals (Sweden)

    Metin Çeliker

    Full Text Available Introduction: The use of mobile phones has become widespread in recent years. Although beneficial from the communication viewpoint, the electromagnetic fields generated by mobile phones may cause unwanted biological changes in the human body. Objective: In this study, we aimed to evaluate the effects of a 2100 MHz Global System for Mobile communication (GSM)-like electromagnetic field, generated by an electromagnetic field generator, on the auditory system of rats by using electrophysiological, histopathologic, and immunohistochemical methods. Methods: Fourteen adult Wistar albino rats were included in the study. The rats were divided randomly into two groups of seven rats each. The study group was exposed continuously for 30 days to a 2100 MHz electromagnetic field with a signal level (power) of 5.4 dBm (3.47 mW) to simulate the talk mode on a mobile phone. The control group was not exposed to the aforementioned electromagnetic field. After 30 days, the auditory brainstem responses of both groups were recorded and the rats were sacrificed. The cochlear nuclei were evaluated by histopathologic and immunohistochemical methods. Results: The auditory brainstem response recordings of the two groups did not differ significantly. The histopathologic analysis showed increased signs of degeneration in the study group (p = 0.007). In addition, immunohistochemical analysis revealed an increased apoptotic index in the study group compared to that in the control group (p = 0.002). Conclusion: The results support that long-term exposure to a GSM-like 2100 MHz electromagnetic field causes an increase in neuronal degeneration and apoptosis in the auditory system.

  6. The auditory attention status in Iranian bilingual and monolingual people

    Directory of Open Access Journals (Sweden)

    Nayiere Mansoori

    2013-05-01

    Full Text Available Background and Aim: Bilingualism, one of the much-discussed issues in psychology and linguistics, can influence speech processing. Of the several tests for assessing auditory processing, the dichotic digit test has been designed to study divided auditory attention. Our study was performed to compare divided auditory attention between Iranian bilingual and monolingual young adults. Methods: This cross-sectional study was conducted on 60 students, including 30 Turkish-Persian bilinguals and 30 Persian monolinguals, aged between 18 and 30 years, of both genders. The dichotic digit test was performed on young individuals with normal peripheral hearing and right-hand preference. Results: No significant correlation was found between the dichotic digit test results of monolinguals and bilinguals (p=0.195), nor between the results of the right and left ears in the monolingual (p=0.460) and bilingual (p=0.054) groups. The mean score of women was significantly higher than that of men (p=0.031). Conclusion: There was no significant difference between bilinguals and monolinguals in divided auditory attention; it seems that acquisition of a second language at earlier ages has no noticeable effect on this type of auditory attention.

  7. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  8. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  9. [(1)H-MRS study of auditory cortex in patients with presbycusis].

    Science.gov (United States)

    Chen, Xian-ming; Dou, Xiao-qing; Liang, Yong-hui; Zhang, Li-wei; Luo, Bi-qiang; Deng, Yi-hong

    2012-10-01

    To study the metabolic changes of the auditory cortex in patients with presbycusis by using proton magnetic resonance spectroscopy ((1)H-MRS). Ten normal-hearing young volunteers (youth group), 10 normal-hearing elderly subjects (aged group), and 8 patients with presbycusis (presbycusis group) were examined with proton magnetic resonance spectroscopy. N-acetylaspartic acid (NAA), creatine (Cr), choline (Cho), γ-aminobutyric acid (GABA), and glutamic acid (Glu) compounds were measured, and the differences between the groups were semi-quantitatively analyzed. Compared with the youth group, reduced NAA/Cr and increased Cho/Cr were found in the aged group and in the presbycusis group. Compared with the aged group, the metabolic changes of the auditory cortex in patients with presbycusis were remarkable.

  10. Adapting the Theory of Visual Attention (TVA) to model auditory attention

    DEFF Research Database (Denmark)

    Roberts, Katherine L.; Andersen, Tobias; Kyllingsbæk, Søren

    Mathematical and computational models have provided useful insights into normal and impaired visual attention, but less progress has been made in modelling auditory attention. We are developing a Theory of Auditory Attention (TAA), based on an influential visual model, the Theory of Visual Attention (TVA). We report that TVA provides a good fit to auditory data when the stimuli are closely matched to those used in visual studies. In the basic visual TVA task, participants view a brief display of letters and are asked to report either all of the letters (whole report) or a subset of letters (e.g., only those in a designated target colour; partial report). ... The model fits the auditory data, producing good estimates of the rate at which information is encoded (C), the minimum exposure duration required for processing to begin (t0), and the relative attentional weight to targets versus distractors (α). Future work will address the issue of target-distractor confusion, and extend ...
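
    The abstract names the fitted parameters C, t0, and α but does not restate the equations behind them. As background only, and assuming the standard fixed-capacity independent race formulation of TVA (Bundesen, 1990) rather than anything specific to this study, the quantities are usually defined roughly as follows; in whole- and partial-report fits, C, t0, and the attentional weights are then estimated by maximizing the likelihood of the observed report scores across exposure durations.

    ```latex
    % Sketch of the standard TVA rate equations (assumed framework, not quoted from the abstract).
    % v_x: encoding rate of item x;   C: total processing capacity;
    % w_x: attentional weight of item x;   alpha: distractor-to-target weight ratio;
    % t_0: minimum effective exposure duration;   tau: effective exposure duration.
    \begin{align}
      v_x &= C \,\frac{w_x}{\sum_{z \in S} w_z}, \qquad
      \alpha = \frac{w_{\mathrm{distractor}}}{w_{\mathrm{target}}},\\
      \tau &= \max(0,\, t - t_0),\\
      P(\text{item } x \text{ encoded}) &= 1 - e^{-v_x \tau}.
    \end{align}
    ```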

  11. Auditory and cognitive performance in elderly musicians and nonmusicians.

    Directory of Open Access Journals (Sweden)

    Massimo Grassi

    Full Text Available Musicians represent a model for examining brain and behavioral plasticity in terms of cognitive and auditory profile, but few studies have investigated whether elderly musicians have better auditory and cognitive abilities than nonmusicians. The aim of the present study was to examine whether being a professional musician attenuates the normal age-related changes in hearing and cognition. Elderly musicians still active in their profession were compared with nonmusicians on auditory performance (absolute threshold; frequency, intensity, duration, and spectral-shape discrimination; gap and sinusoidal amplitude-modulation detection), and on simple (short-term memory) and more complex, higher-order (working memory [WM] and visuospatial abilities) cognitive tasks. The sample consisted of adults at least 65 years of age. The results showed that older musicians had similar absolute thresholds but better supra-threshold discrimination abilities than nonmusicians in four of the six auditory tasks administered. They also had better WM performance and stronger visuospatial abilities than nonmusicians. No differences were found between the two groups' short-term memory. Frequency discrimination and gap detection among the auditory measures, and the WM complex span tasks and one of the visuospatial tasks among the cognitive ones, proved to be very good classifiers of the musicians. These findings suggest that life-long music training may be associated with enhanced auditory and cognitive performance, including complex cognitive skills, in advanced age. However, whether this music training represents a protective factor or not needs further investigation.

  12. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  14. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  15. Benefits and detriments of unilateral cochlear implant use on bilateral auditory development in children who are deaf

    Directory of Open Access Journals (Sweden)

    Karen A. Gordon

    2013-10-01

    Full Text Available We have explored both the benefits and detriments of providing electrical input through a cochlear implant in one ear to the auditory system of young children. A cochlear implant delivers electrical pulses to stimulate the auditory nerve, providing children who are deaf with access to sound. The goals of implantation are to restrict reorganization of the deprived immature auditory brain and promote development of hearing and spoken language. It is clear that limiting the duration of deprivation is a key factor. Additional considerations are the onset, etiology, and use of residual hearing as each of these can have unique effects on auditory development in the pre-implant period. New findings show that many children receiving unilateral cochlear implants are developing mature-like brainstem and thalamo-cortical responses to sound with long term use despite these sources of variability; however, there remain considerable abnormalities in cortical function. The most apparent, determined by implanting the other ear and measuring responses to acute stimulation, is a loss of normal cortical response from the deprived ear. Recent data reveal that this can be avoided in children by early implantation of both ears simultaneously or with limited delay. We conclude that auditory development requires input early in development and from both ears.

  16. A novel 9-class auditory ERP paradigm driving a predictive text entry system

    Directory of Open Access Journals (Sweden)

    Johannes eHöhne

    2011-08-01

    Full Text Available Brain-Computer Interfaces (BCIs) based on Event-Related Potentials (ERPs) strive to offer communication pathways which are independent of muscle activity. While most visual ERP-based BCI paradigms require good control of the user's gaze direction, auditory BCI paradigms overcome this restriction. The present work proposes a novel approach using Auditory Evoked Potentials (AEPs) for the example of a multiclass text-spelling application. To control the ERP speller, BCI users focus their attention on two-dimensional auditory stimuli that vary in both pitch (high/medium/low) and direction (left/middle/right) and that are presented via headphones. The resulting nine different control signals are exploited to drive a predictive text entry system. It enables the user to spell a letter by a single 9-class decision plus two additional decisions to confirm a spelled word. This paradigm - called PASS2D - was investigated in an online study with twelve healthy participants. Users spelled with more than 0.8 characters per minute on average (3.4 bits per minute), which makes PASS2D a competitive method. It could enrich the toolbox of existing ERP paradigms for BCI end users such as late-stage ALS patients.
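
    The abstract quotes roughly 0.8 characters per minute and 3.4 bits per minute but does not say how the bit rate was computed. For orientation only, a common convention in the BCI literature (not necessarily the one used for PASS2D) is the Wolpaw information transfer rate for an N-class selection made with accuracy P, multiplied by the number of selections per minute:

    ```latex
    % Wolpaw information transfer rate per selection (bits), N classes, accuracy P.
    % Shown as generic BCI background; the exact computation used for PASS2D is not given here.
    \begin{equation}
      B \;=\; \log_2 N \;+\; P \log_2 P \;+\; (1 - P)\,\log_2\!\frac{1 - P}{N - 1},
      \qquad \text{bits/min} = B \times \text{selections/min}.
    \end{equation}
    ```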

  17. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.

  18. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    Science.gov (United States)

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  19. Random Gap Detection Test (RGDT) performance of individuals with central auditory processing disorders from 5 to 25 years of age.

    Science.gov (United States)

    Dias, Karin Ziliotto; Jutras, Benoît; Acrani, Isabela Olszanski; Pereira, Liliane Desgualdo

    2012-02-01

    The aim of the present study was to assess auditory temporal resolution ability in individuals with central auditory processing disorders, to examine the maturation effect, and to investigate the relationship between performance on a temporal resolution test and performance on other central auditory tests. Participants were divided into two groups: 131 with Central Auditory Processing Disorder and 94 with normal auditory processing. They had pure-tone air-conduction thresholds no poorer than 15 dB HL bilaterally, normal admittance measures, and presence of acoustic reflexes. Also, they were assessed with a central auditory test battery. Participants who failed at least one test were included in the Central Auditory Processing Disorder group, and those in the control group obtained normal performance on all tests. Following the auditory processing assessment, the Random Gap Detection Test was administered to the participants. A three-way ANOVA was performed. Correlation analyses were also done between the four Random Gap Detection Test subtest data as well as between Random Gap Detection Test data and the other auditory processing test results. There was a significant difference between the age-group performances in children with and without Central Auditory Processing Disorder. Also, 48% of children with Central Auditory Processing Disorder failed the Random Gap Detection Test, and the percentage decreased as a function of age. The highest percentage (86%) was found in the 5-6 year-old children. Furthermore, results revealed a strong significant correlation between the four Random Gap Detection Test subtests. There was a modest correlation between the Random Gap Detection Test results and the dichotic listening tests. No significant correlation was observed between the Random Gap Detection Test data and the results of the other tests in the battery. The Random Gap Detection Test should not be administered to children younger than 7 years old because

  20. The assessment of auditory function in CSWS: lessons from long-term outcome.

    Science.gov (United States)

    Metz-Lutz, Marie-Noëlle

    2009-08-01

    In Landau-Kleffner syndrome (LKS), the prominent and often first symptom is auditory verbal agnosia, which may affect nonverbal sounds. It was early suggested that the subsequent decline of speech expression might result from defective auditory analysis of the patient's own speech. Indeed, despite normal hearing levels, the children behave as if they were deaf, and very rapidly speech expression deteriorates and leads to the receptive aphasia typical of LKS. The association of auditory agnosia more or less restricted to speech with severe language decay prompted numerous studies aimed at specifying the defect in auditory processing and its pathophysiology. Long-term follow-up studies have addressed the issue of the outcome of verbal auditory processing and the development of verbal working memory capacities following the deprivation of phonologic input during the critical period of language development. Based on a review of neurophysiologic and neuropsychological studies of auditory and phonologic disorders published these last 20 years, we discuss the association of verbal agnosia and speech production decay, and try to explain the phonologic working memory deficit in the late outcome of LKS within the Hickok and Poeppel dual-stream model of speech processing.

  1. An Investigation of Spatial Hearing in Children with Normal Hearing and with Cochlear Implants and the Impact of Executive Function

    Science.gov (United States)

    Misurelli, Sara M.

    The ability to analyze an "auditory scene", that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information, is one of the most important and complex skills utilized by normal-hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and these binaural cues aid in speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation of the auditory nerve. In particular, bilateral cochlear implants (BiCIs) are now becoming more prevalent, especially in children. However, because CI sound processing lacks both fine-structure cues and coordination between stimulation at the two ears, binaural cues may be either absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occur within the context of complex auditory environments. This dissertation intends to explore and understand the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) investigate source segregation abilities in children with NH and with BiCIs; (2) examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) examine source segregation abilities in NH listeners, from school age to adults.

  2. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    Science.gov (United States)

    Kwon, Minseok

    While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be degraded easily by various factors. Normal-hearing listeners, however, can accurately perceive the sounds they are interested in, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, a simulation of human auditory processing, called computational auditory scene analysis (CASA), was built on physiological and psychological investigations of ASA. The CASA front end comprised the Zilany-Bruce auditory model, followed by fundamental-frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords in various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed a noise-type dependency at low SNR, but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from an acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF channel by competition with neighboring frequency bands, and 3) auditory model modifications. The model modifications include the introduction of a higher Q factor and a middle-ear filter more analogous to the human auditory system
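
    As a rough illustration of the kind of feature extraction described above, the sketch below computes gammatone-based cepstral coefficients from a waveform. It is a simplification under stated assumptions: a gammatone filterbank stands in for the gammachirp filters named in the abstract, and the Zilany-Bruce auditory-nerve model, voiced/unvoiced segmentation, multi-scale integration, and contrast-enhancement stages are not reproduced. The function names (erb, gammatone_ir, gfcc) are illustrative, not taken from the dissertation.

    ```python
    # Minimal GFCC-style feature extractor (simplified sketch, see caveats above).
    import numpy as np
    from scipy.fftpack import dct

    def erb(fc):
        """Equivalent rectangular bandwidth (Glasberg & Moore) at centre frequency fc (Hz)."""
        return 24.7 * (4.37 * fc / 1000.0 + 1.0)

    def gammatone_ir(fc, fs, duration=0.064, order=4):
        """Finite-length 4th-order gammatone impulse response at centre frequency fc."""
        t = np.arange(int(duration * fs)) / fs
        b = 1.019 * erb(fc)
        return t**(order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

    def gfcc(signal, fs, n_bands=32, n_coeffs=13, frame=0.025, hop=0.010,
             fmin=50.0, fmax=8000.0):
        """Gammatone-based cepstral coefficients, one row per analysis frame."""
        cfs = np.geomspace(fmin, fmax, n_bands)        # log-spaced centre frequencies
        subbands = np.stack([np.convolve(signal, gammatone_ir(fc, fs), mode="same")
                             for fc in cfs])           # (n_bands, n_samples) T-F representation
        flen, hlen = int(frame * fs), int(hop * fs)
        n_frames = 1 + (signal.size - flen) // hlen
        feats = np.empty((n_frames, n_coeffs))
        for i in range(n_frames):
            seg = subbands[:, i * hlen: i * hlen + flen]
            energy = np.mean(seg**2, axis=1)           # per-band frame energy
            loudness = np.cbrt(energy + 1e-12)         # cube-root compression
            feats[i] = dct(loudness, type=2, norm="ortho")[:n_coeffs]
        return feats

    if __name__ == "__main__":
        fs = 16000
        noise = np.random.randn(fs)                    # 1 s of white noise as a stand-in signal
        print(gfcc(noise, fs).shape)                   # e.g. (98, 13)
    ```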

  3. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Gregory D. Scott

    2014-03-01

    Full Text Available Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral versus perifoveal visual stimulation (11°-15° vs. 2°-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region-of-interest analysis and a whole-brain analysis. Our results using individually defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than for perifoveal stimuli in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral versus perifoveal visual processing in extrastriate visual cortex, including primary auditory cortex, MT+/V5, superior-temporal auditory and multisensory and/or supramodal regions, such as posterior parietal cortex, frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems, including primary auditory cortex and supramodal and multisensory regions, to altered visual processing in congenitally deaf humans.

  4. Direct recordings from the auditory cortex in a cochlear implant user.

    Science.gov (United States)

    Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A

    2013-06-01

    Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings.

  5. Oscillatory Mechanisms of Stimulus Processing and Selection in the Visual and Auditory Systems: State-of-the-Art, Speculations and Suggestions

    Directory of Open Access Journals (Sweden)

    Benedikt Zoefel

    2017-05-01

    Full Text Available All sensory systems need to continuously prioritize and select incoming stimuli in order to avoid overflow or interference, and to provide structure to the brain's input. However, the characteristics of this input differ across sensory systems; therefore, and as a direct consequence, each sensory system might have developed specialized strategies to cope with the continuous stream of incoming information. Neural oscillations are intimately connected with this selection process, as they can be used by the brain to rhythmically amplify or attenuate input and therefore represent an optimal tool for stimulus selection. In this paper, we focus on oscillatory processes for stimulus selection in the visual and auditory systems. We point out both commonalities and differences between the two systems and develop several hypotheses, inspired by recently published findings: (1) The rhythmic component of its input is crucial for the auditory, but not for the visual system. The alignment between oscillatory phase and rhythmic input (phase entrainment) is therefore an integral part of stimulus selection in the auditory system, whereas the visual system merely adjusts its phase to upcoming events, without the need for any rhythmic component. (2) When input is unpredictable, the visual system can maintain its oscillatory sampling, whereas the auditory system switches to a different, potentially internally oriented, “mode” of processing that might be characterized by alpha oscillations. (3) Visual alpha can be divided into a faster occipital alpha (10 Hz) and a slower frontal alpha (7 Hz) that critically depends on attention.

  6. Development of auditory sensory memory from 2 to 6 years: an MMN study.

    Science.gov (United States)

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-08-01

    Short-term storage of auditory information is thought to be a precondition for cognitive development, and deficits in short-term memory are believed to underlie learning disabilities and specific language disorders. We examined the development of the duration of auditory sensory memory in normally developing children between the ages of 2 and 6 years. To probe the lifetime of auditory sensory memory we elicited the mismatch negativity (MMN), a component of the late auditory evoked potential, with tone stimuli of two different frequencies presented with various interstimulus intervals between 500 and 5,000 ms. Our findings suggest that memory traces for tone characteristics have a duration of 1-2 s in 2- and 3-year-old children, more than 2 s in 4-year-olds and 3-5 s in 6-year-olds. The results provide insights into the maturational processes involved in auditory sensory memory during the sensitive period of cognitive development.

  7. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen eKaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  8. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, from 7 to 17 years old, divided into two groups: Stuttering Group with Auditory Processing Disorders (SGAPD): 10 individuals with central auditory processing disorders, and Stuttering Group (SG): 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to produce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the analysis of the Stuttering Severity Instrument, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups, as fluency improved only in individuals without auditory processing disorder.
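
    The 100 ms manipulation above is, in essence, a delay line: the speaker's own voice is played back over headphones 100 ms late. The abstract names Phono Tools but does not describe its internals, so the sketch below is only a minimal offline illustration of the effect (a real-time system would use a ring buffer inside the audio callback); the function name delayed_feedback is hypothetical.

    ```python
    # Minimal offline sketch of a 100 ms delayed-auditory-feedback signal (see caveats above).
    import numpy as np

    def delayed_feedback(signal: np.ndarray, fs: int, delay_ms: float = 100.0) -> np.ndarray:
        """Return the signal delayed by delay_ms, padded with silence at the start."""
        n_delay = int(round(fs * delay_ms / 1000.0))
        return np.concatenate([np.zeros(n_delay), signal])[: signal.size]

    if __name__ == "__main__":
        fs = 44100
        t = np.arange(fs) / fs
        mic = np.sin(2 * np.pi * 220 * t)           # stand-in for the microphone signal
        headphones = delayed_feedback(mic, fs)      # what the speaker would hear under DAF
        print(headphones[: int(0.1 * fs)].max())    # ~0.0: the first 100 ms are silent
    ```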

  9. Auditory-Perceptual Evaluation of Dysphonia: A Comparison Between Narrow and Broad Terminology Systems

    DEFF Research Database (Denmark)

    Iwarsson, Jenny

    2017-01-01

    of the terminology used in the multiparameter Danish Dysphonia Assessment (DDA) approach into the five-parameter GRBAS system. Methods. Voice samples illustrating type and grade of the voice qualities included in DDA were rated by five speech language pathologists using the GRBAS system with the aim of estimating...... terms and antagonists, reflecting muscular hypo- and hyperfunction. Key Words: Auditory-perceptual voice analysis–Dysphonia–GRBAS–Listening test–Voice ratings....

  10. Influences of multiple memory systems on auditory mental image acuity.

    Science.gov (United States)

    Navarro Cebrian, Ana; Janata, Petr

    2010-05-01

    The influence of different memory systems and associated attentional processes on the acuity of auditory images, formed for the purpose of making intonation judgments, was examined across three experiments using three different task types (cued-attention, imagery, and two-tone discrimination). In experiment 1 the influence of implicit long-term memory for musical scale structure was manipulated by varying the scale degree (leading tone versus tonic) of the probe note about which a judgment had to be made. In experiments 2 and 3 the ability of short-term absolute pitch knowledge to develop was manipulated by presenting blocks of trials in the same key or in seven different keys. The acuity of auditory images depended on all of these manipulations. Within individual listeners, thresholds in the two-tone discrimination and cued-attention conditions were closely related. In many listeners, cued-attention thresholds were similar to thresholds in the imagery condition, and depended on the amount of training individual listeners had in playing a musical instrument. The results indicate that mental images formed at a sensory/cognitive interface for the purpose of making perceptual decisions are highly malleable.

  11. Prediction of consonant recognition in quiet for listeners with normal and impaired hearing using an auditory model.

    Science.gov (United States)

    Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas

    2014-03-01

    Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
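
    The "linearization of cochlear compression" discussed above can be pictured with a broken-stick input-output function whose mid-level slope is raised toward 1 dB/dB. The exact compression stage of the model in the abstract is not given, so the curve below is a generic textbook-style illustration with assumed knee point and slopes, not the authors' implementation.

    ```python
    # Illustrative broken-stick cochlear input-output function and its partial linearization.
    import numpy as np

    def cochlear_io(level_db, knee_db=30.0, slope=0.25):
        """Output level (dB) vs. input level (dB): linear below the knee, compressive above."""
        level_db = np.asarray(level_db, dtype=float)
        return np.where(level_db <= knee_db,
                        level_db,
                        knee_db + slope * (level_db - knee_db))

    levels = np.arange(0, 101, 10)
    normal = cochlear_io(levels, slope=0.25)        # strongly compressive (normal-like)
    linearized = cochlear_io(levels, slope=0.75)    # partially linearized (impaired-like)
    for lvl, n, h in zip(levels, normal, linearized):
        print(f"{lvl:3.0f} dB in -> {n:5.1f} dB (compressive) / {h:5.1f} dB (linearized)")
    ```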

  12. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners With Normal Hearing

    OpenAIRE

    Millman, Rebecca E.; Mattys, Sven L.

    2017-01-01

    Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise rati...

  13. Microscopic prediction of speech recognition for listeners with normal hearing in noise using an auditory model.

    Science.gov (United States)

    Jürgens, Tim; Brand, Thomas

    2009-11-01

    This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined in terms of this model twofold. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human's auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing and a simple dynamic-time-warp speech recognizer. The model is evaluated while presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a-priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is yielded by distance measures which focus mainly on small perceptual distances and neglect outliers.
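
    The recognizer stage described above is template matching with dynamic time warping: each test utterance is compared against stored feature sequences and assigned the label of the nearest template. The sketch below shows that step only, on random stand-in features; the psychoacoustically and physiologically motivated preprocessing of the actual model is not reproduced, and the function names are illustrative.

    ```python
    # Minimal dynamic-time-warping template matcher (illustrative sketch, see caveats above).
    import numpy as np

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """DTW distance between two feature sequences a (m x d) and b (n x d)."""
        m, n = len(a), len(b)
        cost = np.full((m + 1, n + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])   # local (Euclidean) distance
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return float(cost[m, n])

    def recognize(test: np.ndarray, templates: dict) -> str:
        """Pick the template label with the smallest DTW distance to the test token."""
        return min(templates, key=lambda label: dtw_distance(test, templates[label]))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        templates = {"ba": rng.normal(size=(40, 13)), "da": rng.normal(size=(45, 13))}
        test_token = templates["ba"] + 0.1 * rng.normal(size=(40, 13))  # noisy copy of "ba"
        print(recognize(test_token, templates))                          # expected: "ba"
    ```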

  14. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. The human brain maintains contradictory and redundant auditory sensory predictions.

    Directory of Open Access Journals (Sweden)

    Marika Pieszek

    Full Text Available Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). Particular error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

  16. The perception of prosody and associated auditory cues in early-implanted children: the role of auditory working memory and musical activities.

    Science.gov (United States)

    Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti

    2014-03-01

    To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years) participated. Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception and intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination and forward digit span in implanted children exposed to music were equivalent to those of the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination; sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.

  17. Entrainment to an auditory signal: Is attention involved?

    NARCIS (Netherlands)

    Kunert, R.; Jongman, S.R.

    2017-01-01

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of

  18. Performance on Paced Auditory Serial Addition Test and cerebral blood flow in multiple sclerosis

    NARCIS (Netherlands)

    D'haeseleer, M.; Steen, C.; Hoogduin, J. M.; van Osch, M. J. P.; Fierens, Y.; Cambron, M.; Koch, M. W.; De Keyser, J.

    Background: To assess the relationship between performance on the Paced Auditory Serial Addition Test (PASAT) and both cerebral blood flow (CBF) and axonal metabolic integrity in normal appearing white matter (NAWM) of the centrum semiovale in patients with multiple sclerosis (MS). Methods: Normal

  19. Effect of hearing loss on semantic access by auditory and audiovisual speech in children.

    Science.gov (United States)

    Jerger, Susan; Tye-Murray, Nancy; Damian, Markus F; Abdi, Hervé

    2013-01-01

    This research studied whether the mode of input (auditory versus audiovisual) influenced semantic access by speech in children with sensorineural hearing impairment (HI). Participants, 31 children with HI and 62 children with normal hearing (NH), were tested with the authors' new multimodal picture word task. Children were instructed to name pictures displayed on a monitor and ignore auditory or audiovisual speech distractors. The semantic content of the distractors was varied to be related versus unrelated to the pictures (e.g., picture distractor of dog-bear versus dog-cheese, respectively). In children with NH, picture-naming times were slower in the presence of semantically related distractors. This slowing, called semantic interference, is attributed to the meaning-related picture-distractor entries competing for selection and control of the response (the lexical selection by competition hypothesis). Recently, a modification of the lexical selection by competition hypothesis, called the competition threshold (CT) hypothesis, proposed that (1) the competition between the picture-distractor entries is determined by a threshold, and (2) distractors with experimentally reduced fidelity cannot reach the CT. Thus, semantically related distractors with reduced fidelity do not produce the normal interference effect, but instead no effect or semantic facilitation (faster picture naming times for semantically related versus unrelated distractors). Facilitation occurs because the activation level of the semantically related distractor with reduced fidelity (1) is not sufficient to exceed the CT and produce interference but (2) is sufficient to activate its concept, which then strengthens the activation of the picture and facilitates naming. This research investigated whether the proposals of the CT hypothesis generalize to the auditory domain, to the natural degradation of speech due to HI, and to participants who are children. Our multimodal picture word task allowed us

  20. Sleep extension normalizes ERP of waking auditory sensory gating in healthy habitually short sleeping individuals.

    Science.gov (United States)

    Gumenyuk, Valentina; Korzyukov, Oleg; Roth, Thomas; Bowyer, Susan M; Drake, Christopher L

    2013-01-01

    Chronic sleep loss has been associated with increased daytime sleepiness, as well as impairments in memory and attentional processes. In the present study, we evaluated the neuronal changes of a pre-attentive process of wake auditory sensory gating, measured by brain event-related potential (ERP)--P50 in eight normal sleepers (NS) (habitual total sleep time (TST) 7 h 32 m) vs. eight chronic short sleeping individuals (SS) (habitual TST ≤6 h). To evaluate the effect of sleep extension on sensory gating, the extended sleep condition was performed in chronic short sleeping individuals. Thus, one week of time in bed (6 h 11 m) corresponding to habitual short sleep (hSS), and one week of extended time (∼ 8 h 25 m) in bed corresponding to extended sleep (eSS), were counterbalanced in the SS group. The gating ERP assessment was performed on the last day after each sleep condition week (normal sleep and habitual short and extended sleep), and was separated by one week with habitual total sleep time and monitored by a sleep diary. We found that amplitude of gating was lower in SS group compared to that in NS group (0.3 µV vs. 1.2 µV, at Cz electrode respectively). The results of the group × laterality interaction showed that the reduction of gating amplitude in the SS group was due to lower amplitude over the left hemisphere and central-midline sites relative to that in the NS group. After sleep extension the amplitude of gating increased in chronic short sleeping individuals relative to their habitual short sleep condition. The sleep condition × frontality interaction analysis confirmed that sleep extension significantly increased the amplitude of gating over frontal and central brain areas compared to parietal brain areas.
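
    The abstract reports a "gating amplitude" at Cz but does not restate its definition; P50 sensory gating is conventionally summarized as the S1-S2 amplitude difference or the S2/S1 ratio of the responses to paired stimuli. A minimal sketch of both summaries with invented amplitudes:

    ```python
    import numpy as np

    # Hypothetical P50 amplitudes (microvolts) at Cz for the first (S1) and
    # second (S2) stimulus of each pair; values are invented for illustration.
    s1 = np.array([2.1, 1.8, 2.4, 2.0])
    s2 = np.array([0.9, 0.7, 1.3, 0.8])

    gating_difference = (s1 - s2).mean()   # larger difference = stronger gating
    gating_ratio = (s2 / s1).mean()        # smaller ratio = stronger gating
    print(f"S1-S2 difference: {gating_difference:.2f} uV, S2/S1 ratio: {gating_ratio:.2f}")
    ```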

  1. Sleep extension normalizes ERP of waking auditory sensory gating in healthy habitually short sleeping individuals.

    Directory of Open Access Journals (Sweden)

    Valentina Gumenyuk

    Full Text Available Chronic sleep loss has been associated with increased daytime sleepiness, as well as impairments in memory and attentional processes. In the present study, we evaluated the neuronal changes of a pre-attentive process of wake auditory sensory gating, measured by brain event-related potential (ERP)--P50, in eight normal sleepers (NS) (habitual total sleep time (TST) 7 h 32 m) vs. eight chronic short sleeping individuals (SS) (habitual TST ≤6 h). To evaluate the effect of sleep extension on sensory gating, the extended sleep condition was performed in chronic short sleeping individuals. Thus, one week of time in bed (6 h 11 m) corresponding to habitual short sleep (hSS), and one week of extended time (∼ 8 h 25 m) in bed corresponding to extended sleep (eSS), were counterbalanced in the SS group. The gating ERP assessment was performed on the last day after each sleep condition week (normal sleep and habitual short and extended sleep), and was separated by one week with habitual total sleep time and monitored by a sleep diary. We found that amplitude of gating was lower in the SS group compared to that in the NS group (0.3 µV vs. 1.2 µV, at Cz electrode respectively). The results of the group × laterality interaction showed that the reduction of gating amplitude in the SS group was due to lower amplitude over the left hemisphere and central-midline sites relative to that in the NS group. After sleep extension the amplitude of gating increased in chronic short sleeping individuals relative to their habitual short sleep condition. The sleep condition × frontality interaction analysis confirmed that sleep extension significantly increased the amplitude of gating over frontal and central brain areas compared to parietal brain areas.

  2. Usage of drip drops as stimuli in an auditory P300 BCI paradigm.

    Science.gov (United States)

    Huang, Minqiang; Jin, Jing; Zhang, Yu; Hu, Dewen; Wang, Xingyu

    2018-02-01

    Recently, many auditory BCIs have used beeps as auditory stimuli, but beeps sound unnatural and unpleasant to some people. Natural sounds have been shown to make people feel comfortable, decrease fatigue, and improve the performance of auditory BCI systems. The sound of dripping water (drip drops) is a natural sound that makes humans feel relaxed and comfortable. In this work, three kinds of drip-drop sounds were used as stimuli in an auditory BCI system to improve the user-friendliness of the system. This study explored whether drip drops could be used as stimuli in an auditory BCI system. The auditory BCI paradigm with drip-drop stimuli, called the drip-drop paradigm (DP), was compared with the auditory paradigm with beep stimuli, the beep paradigm (BP), in terms of event-related potential amplitudes, online accuracies, and scores on likability and difficulty, to demonstrate the advantages of DP. DP obtained significantly higher online accuracy and information transfer rate than BP (p < 0.05, Wilcoxon signed-rank test; p < 0.05, Wilcoxon signed-rank test). In addition, DP obtained higher scores on likability, with no significant difference in difficulty (p < 0.05, Wilcoxon signed-rank test). The results showed that drip drops are reliable acoustic materials for use as stimuli in an auditory BCI system.
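
    The paradigm comparisons rely on paired non-parametric tests. A minimal sketch of such a comparison with scipy's Wilcoxon signed-rank test; the per-subject accuracies are invented:

    ```python
    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical per-subject online accuracies (%) for the drip-drop (DP)
    # and beep (BP) paradigms; numbers are invented for illustration.
    dp = np.array([92, 88, 95, 90, 85, 93, 89, 91, 94, 87])
    bp = np.array([85, 84, 90, 86, 80, 88, 83, 85, 89, 82])

    stat, p = wilcoxon(dp, bp)   # paired, non-parametric comparison
    print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.3f}")
    ```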

  3. Auditory Brain Stem Processing in Reptiles and Amphibians: Roles of Coupled Ears

    DEFF Research Database (Denmark)

    Willis, Katie L.; Christensen-Dalsgaard, Jakob; Carr, Catherine

    2014-01-01

    Comparative approaches to the auditory system have yielded great insight into the evolution of sound localization circuits, particularly within the nonmammalian tetrapods. The fossil record demonstrates multiple appearances of tympanic hearing, and examination of the auditory brain stem of various...... groups can reveal the organizing effects of the ear across taxa. If the peripheral structures have a strongly organizing influence on the neural structures, then homologous neural structures should be observed only in groups with a homologous tympanic ear. Therefore, the central auditory systems...... of anurans (frogs), reptiles (including birds), and mammals should all be more similar within each group than among the groups. Although there is large variation in the peripheral auditory system, there is evidence that auditory brain stem nuclei in tetrapods are homologous and have similar functions among...

  4. Auditory memory can be object based.

    Science.gov (United States)

    Dyson, Benjamin J; Ishfaq, Feraz

    2008-04-01

    Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.

  5. Specialization of the auditory system for the processing of bio-sonar information in the frequency domain: Mustached bats.

    Science.gov (United States)

    Suga, Nobuo

    2018-04-01

    For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1-4). They show unique behavior called Doppler-shift compensation for Doppler-shifted echoes and hunting behavior for frequency- and amplitude-modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and for detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens V-shaped frequency-tuning curves at the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3; that is, they are tuned to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area has the velocity map for Doppler imaging. The DIF area is particularly for Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment

    Directory of Open Access Journals (Sweden)

    Jana B. Frtusova

    2016-04-01

    Full Text Available This study examined the effect of auditory-visual (AV) speech stimuli on working memory in hearing-impaired participants (HIP) in comparison to age- and education-matched normal elderly controls (NEC). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioural results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the HIP group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the HIP group showed a more robust AV benefit; however, the NECs showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the HIP to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed.

  7. Cardiac autonomic regulation during exposure to auditory stimulation with classical baroque or heavy metal music of different intensities.

    Science.gov (United States)

    Amaral, Joice A T; Nogueira, Marcela L; Roque, Adriano L; Guida, Heraldo L; De Abreu, Luiz Carlos; Raimundo, Rodrigo Daminello; Vanderlei, Luiz Carlos M; Ribeiro, Vivian L; Ferreira, Celso; Valenti, Vitor E

    2014-03-01

    The effects of chronic music auditory stimulation on the cardiovascular system have been investigated in the literature. However, data regarding the acute effects of different styles of music on cardiac autonomic regulation are lacking. The literature has indicated that auditory stimulation with white noise above 50 dB induces cardiac responses. We aimed to evaluate the acute effects of classical baroque and heavy metal music of different intensities on cardiac autonomic regulation. The study was performed in 16 healthy men aged 18-25 years. All procedures were performed in the same soundproof room. We analyzed heart rate variability (HRV) in time (standard deviation of normal-to-normal R-R intervals [SDNN], root-mean square of differences [RMSSD] and percentage of adjacent NN intervals with a difference of duration greater than 50 ms [pNN50]) and frequency (low frequency [LF], high frequency [HF] and LF/HF ratio) domains. HRV was recorded at rest for 10 minutes. Subsequently, the volunteers were exposed to one of the two musical styles (classical baroque or heavy metal music) for five minutes through an earphone, followed by a five-minute period of rest, and then they were exposed to the other style for another five minutes. The subjects were exposed to three equivalent sound levels (60-70 dB, 70-80 dB and 80-90 dB). The sequence of songs was randomized for each individual. Auditory stimulation with heavy metal music did not influence HRV indices in the time and frequency domains in the three equivalent sound level ranges. The same was observed with classical baroque musical auditory stimulation with the three equivalent sound level ranges. Musical auditory stimulation of different intensities did not influence cardiac autonomic regulation in men.
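
    The time-domain HRV indices listed here (SDNN, RMSSD, pNN50) have standard definitions. A minimal sketch computing them from a series of normal-to-normal R-R intervals; the interval values are synthetic:

    ```python
    import numpy as np

    # Hypothetical normal-to-normal (NN) R-R intervals in milliseconds.
    nn = np.array([812, 830, 845, 790, 805, 820, 850, 835, 800, 815], dtype=float)

    sdnn  = nn.std(ddof=1)                         # SDNN: SD of the NN intervals
    diffs = np.diff(nn)                            # successive interval differences
    rmssd = np.sqrt(np.mean(diffs ** 2))           # RMSSD: root mean square of differences
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50.0)  # pNN50: % of |differences| > 50 ms

    print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, pNN50 = {pnn50:.1f} %")
    ```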

  8. Prevalence of Auditory Neuropathy in a Population of Children with Severe to Profound Hearing Loss

    Directory of Open Access Journals (Sweden)

    Nader Saki

    2013-04-01

    Full Text Available Background: The purpose of this investigation was to determine the prevalence of auditory neuropathy in students with severe to profound hearing loss in Ahwaz. Materials and Methods: In this cross-sectional study, 212 children aged 7-11 years with severe to profound hearing loss underwent standard audiometric evaluations as well as ABR and OAE testing. Patients with normal DPOAE who had no recordable acoustic reflexes or normal ABR were considered to have auditory neuropathy. Results: Auditory neuropathy was found in 14 children; it was unilateral in 8 (57.14%) and bilateral in 6 (42.86%). Of the patients diagnosed, 68% had a very low Speech Discrimination Score (SDS). Conclusion: We must be very vigilant in diagnosing auditory neuropathy in order to provide appropriate treatment for severe to profound hearing losses.

  9. Experiments on Auditory-Visual Perception of Sentences by Users of Unilateral, Bimodal, and Bilateral Cochlear Implants

    Science.gov (United States)

    Dorman, Michael F.; Liss, Julie; Wang, Shuai; Berisha, Visar; Ludwig, Cimarron; Natale, Sarah Cook

    2016-01-01

    Purpose: Five experiments probed auditory-visual (AV) understanding of sentences by users of cochlear implants (CIs). Method: Sentence material was presented in auditory (A), visual (V), and AV test conditions to listeners with normal hearing and CI users. Results: (a) Most CI users report that most of the time, they have access to both A and V…

  10. Multiple benefits of personal FM system use by children with auditory processing disorder (APD).

    Science.gov (United States)

    Johnston, Kristin N; John, Andrew B; Kreisman, Nicole V; Hall, James W; Crandell, Carl C

    2009-01-01

    Children with auditory processing disorders (APD) were fitted with Phonak EduLink FM devices for home and classroom use. Baseline measures of the children with APD, prior to FM use, documented significantly lower speech-perception scores, evidence of decreased academic performance, and psychosocial problems in comparison to an age- and gender-matched control group. Repeated measures during the school year demonstrated speech-perception improvement in noisy classroom environments as well as significant academic and psychosocial benefits. Compared with the control group, the children with APD showed greater speech-perception advantage with FM technology. Notably, after prolonged FM use, even unaided (no FM device) speech-perception performance was improved in the children with APD, suggesting the possibility of fundamentally enhanced auditory system function.

  11. The effect of music on auditory perception in cochlear-implant users and normal-hearing listeners

    NARCIS (Netherlands)

    Fuller, Christina Diechina

    2016-01-01

    Cochlear implants (CIs) are auditory prostheses for severely deaf people that do not benefit from conventional hearing aids. Speech perception is reasonably good with CIs; other signals such as music perception are challenging. First, the perception of music and music related perception in CI users

  12. Auditory alert systems with enhanced detectability

    Science.gov (United States)

    Begault, Durand R. (Inventor)

    2008-01-01

    Methods and systems for distinguishing an auditory alert signal from a background of one or more non-alert signals. In a first embodiment, a prefix signal, associated with an existing alert signal, is provided that has a signal component in each of three or more selected frequency ranges, with each signal component at a level at least 3-10 dB above an estimated background (non-alert) level in that frequency range. The alert signal may be chirped within one or more frequency bands. In another embodiment, an alert signal moves, continuously or discontinuously, from one location to another over a short time interval, introducing a perceived spatial modulation or jitter. In another embodiment, a weighted sum of background signals adjacent to each ear is formed, and the weighted sum is delivered to each ear as a uniform background; a distinguishable alert signal is presented on top of this weighted sum signal at one ear, or distinguishable first and second alert signals are presented at the two ears of a subject.
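
    As a rough illustration of the first embodiment (a prefix whose components sit a fixed margin above the estimated background level in each selected band), the sketch below synthesizes one tone per band at 6 dB above a crudely estimated band RMS. The band centres, margin, duration and filtering method are illustrative choices, not taken from the patent:

    ```python
    import numpy as np

    fs = 16000                       # sample rate (Hz)
    bands = [500, 1600, 3150]        # illustrative centre frequencies (Hz)
    margin_db = 6.0                  # illustrative margin within the 3-10 dB range

    def band_rms(x: np.ndarray, fc: float, width: float = 200.0) -> float:
        """RMS of x restricted to a band around fc (crude FFT brick-wall filter)."""
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        spec[(freqs < fc - width / 2) | (freqs > fc + width / 2)] = 0.0
        return float(np.sqrt(np.mean(np.fft.irfft(spec, len(x)) ** 2)))

    rng = np.random.default_rng(0)
    background = 0.05 * rng.standard_normal(fs)   # 1 s of stand-in background noise
    t = np.arange(int(0.25 * fs)) / fs            # 250 ms prefix
    prefix = np.zeros_like(t)
    for fc in bands:
        target_rms = band_rms(background, fc) * 10 ** (margin_db / 20.0)
        prefix += np.sqrt(2.0) * target_rms * np.sin(2 * np.pi * fc * t)  # one tone per band
    ```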

  13. Auditory changes in acromegaly.

    Science.gov (United States)

    Tabur, S; Korkmaz, H; Baysal, E; Hatipoglu, E; Aytac, I; Akarsu, E

    2017-06-01

    The aim of this study is to determine the changes involving the auditory system in cases with acromegaly. Otological examinations of 41 cases with acromegaly (uncontrolled n = 22, controlled n = 19) were compared with those of 24 age- and gender-matched healthy subjects. Whereas the cases with acromegaly underwent examination with pure tone audiometry (PTA), speech audiometry for speech discrimination (SD), tympanometry, stapedius reflex evaluation and otoacoustic emission tests, the control group only had otological examination and PTA. Additionally, previously performed paranasal sinus computed tomography scans of all cases with acromegaly and control subjects were obtained to measure the length of the internal acoustic canal (IAC). PTA values were higher (p acromegaly group was narrower compared to that in the control group (p = 0.03 for right ears and p = 0.02 for left ears). When only cases with acromegaly were taken into consideration, PTA values in left ears had a positive correlation with growth hormone and insulin-like growth factor-1 levels (r = 0.4, p = 0.02 and r = 0.3, p = 0.03). Of all cases with acromegaly, 13 (32%) had hearing loss in at least one ear; 7 (54%) had sensorineural and 6 (46%) had conductive hearing loss. Acromegaly may cause certain changes in the auditory system. The changes in the auditory system may be multifactorial, causing both conductive and sensorineural defects.
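
    The PTA values here come from pure tone audiometry; the pure-tone average conventionally used to summarize such thresholds is the mean of the air-conduction thresholds at 500, 1000 and 2000 Hz, often with 4000 Hz added. A minimal sketch with invented thresholds:

    ```python
    import numpy as np

    # Hypothetical air-conduction thresholds (dB HL) at 500, 1000, 2000, 4000 Hz.
    thresholds = {500: 20, 1000: 25, 2000: 30, 4000: 40}

    pta3 = np.mean([thresholds[f] for f in (500, 1000, 2000)])        # classic 3-frequency PTA
    pta4 = np.mean([thresholds[f] for f in (500, 1000, 2000, 4000)])  # 4-frequency variant
    print(f"PTA (3-freq) = {pta3:.1f} dB HL, PTA (4-freq) = {pta4:.1f} dB HL")
    ```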

  14. Behavioral Signs of (Central) Auditory Processing Disorder in Children With Nonsyndromic Cleft Lip and/or Palate: A Parental Questionnaire Approach.

    Science.gov (United States)

    Ma, Xiaoran; McPherson, Bradley; Ma, Lian

    2016-03-01

    Objective: Children with nonsyndromic cleft lip and/or palate often have a high prevalence of middle ear dysfunction. However, there are also indications that they may have a higher prevalence of (central) auditory processing disorder. This study used Fisher's Auditory Problems Checklist for caregivers to determine whether children with nonsyndromic cleft lip and/or palate have potentially more auditory processing difficulties compared with craniofacially normal children. Methods: Caregivers of 147 school-aged children with nonsyndromic cleft lip and/or palate were recruited for the study. This group was divided into three subgroups: cleft lip, cleft palate, and cleft lip and palate. Caregivers of 60 craniofacially normal children were recruited as a control group. Hearing health tests were conducted to evaluate peripheral hearing. Caregivers of children who passed this assessment battery completed Fisher's Auditory Problems Checklist, which contains 25 questions related to behaviors linked to (central) auditory processing disorder. Results: Children with cleft palate showed the lowest scores on the Fisher's Auditory Problems Checklist questionnaire, consistent with a higher index of suspicion for (central) auditory processing disorder. There was a significant difference in the manifestation of (central) auditory processing disorder-linked behaviors between the cleft palate and the control groups. The most common behaviors reported in the nonsyndromic cleft lip and/or palate group were short attention span and reduced learning motivation, along with hearing difficulties in noise. Conclusion: A higher occurrence of (central) auditory processing disorder-linked behaviors was found in children with nonsyndromic cleft lip and/or palate, particularly cleft palate. Auditory processing abilities should not be ignored in children with nonsyndromic cleft lip and/or palate, and it is necessary to consider assessment tests for (central) auditory processing disorder when an

  15. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS.

    Science.gov (United States)

    Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Bleichner, Martin G; Debener, Stefan

    2016-01-01

    Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition, by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  16. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS

    Directory of Open Access Journals (Sweden)

    Ling-Chia Chen

    2016-01-01

    Full Text Available Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition, by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  17. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing

    Science.gov (United States)

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Objectives: Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design: Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, the Parents Evaluation of Oral/Aural Performance in Children Rating Scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, the Early Speech Perception Test, the Lexical Neighborhood Test, and Phonetically-balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level and nonverbal intelligence. The effects of aided audibility, HA use and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Results: Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater

  18. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria eMedalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  19. Effectiveness of auditory and tactile crossmodal cues in a dual-task visual and auditory scenario.

    Science.gov (United States)

    Hopkins, Kevin; Kass, Steven J; Blalock, Lisa Durrance; Brill, J Christopher

    2017-05-01

    In this study, we examined how spatially informative auditory and tactile cues affected participants' performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual-auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality. Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

  20. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearingimpaired listener, but can only partially reveal the mechanisms...... that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing impaired perception for sound textures – temporally...

  1. Quantifying the impact on navigation performance in visually impaired: Auditory information loss versus information gain enabled through electronic travel aids.

    Directory of Open Access Journals (Sweden)

    Alex Kreilinger

    Full Text Available This study's purpose was to analyze and quantify the impact of auditory information loss versus information gain provided by electronic travel aids (ETAs) on navigation performance in people with low vision. Navigation performance of ten subjects (age: 54.9±11.2 years) with visual acuities >1.0 LogMAR was assessed via the Graz Mobility Test (GMT). Subjects passed through a maze in three different modalities: 'Normal' with visual and auditory information available, 'Auditory Information Loss' with artificially reduced hearing (leaving only visual information), and 'ETA' with a vibrating ETA based on ultrasonic waves, thereby facilitating visual, auditory, and tactile information. Main performance measures comprised passage time and number of contacts. Additionally, head tracking was used to relate head movements to motion direction. When comparing 'Auditory Information Loss' to 'Normal', subjects needed significantly more time (p<0.001), made more contacts (p<0.001), had higher relative viewing angles (p = 0.002), and a higher percentage of orientation losses (p = 0.011). The only significant difference when comparing 'ETA' to 'Normal' was a reduced number of contacts (p<0.001). Our study provides objective, quantifiable measures of the impact of reduced hearing on the navigation performance in low vision subjects. Significant effects of 'Auditory Information Loss' were found for all measures; for example, passage time increased by 17.4%. These findings show that low vision subjects rely on auditory information for navigation. In contrast, the impact of the ETA was not significant, but further analysis of head movements revealed two different coping strategies: half of the subjects used the ETA to increase speed, whereas the other half aimed at avoiding contacts.
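
    The 'relative viewing angle' relates head orientation to walking direction; the GMT's actual computation is not given in the abstract. A minimal sketch of one plausible definition, the angle between two 2-D direction vectors in the horizontal plane:

    ```python
    import numpy as np

    def relative_viewing_angle(head_dir, motion_dir) -> float:
        """Angle in degrees between the head-facing direction and the walking
        direction, both given as 2-D vectors in the horizontal plane."""
        u = np.asarray(head_dir, dtype=float)
        v = np.asarray(motion_dir, dtype=float)
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

    print(relative_viewing_angle([1.0, 0.2], [1.0, 0.0]))  # ~11.3 degrees
    ```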

  2. Modification of sudden onset auditory ERP by involuntary attention to visual stimuli.

    Science.gov (United States)

    Oray, Serkan; Lu, Zhong-Lin; Dawson, Michael E

    2002-03-01

    To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by sudden onset of events in the auditory modality. Relatively loud auditory white noise bursts were presented to subjects with random and long inter-trial intervals. The noise bursts were either presented alone, or paired with a visual stimulus with a visual to auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions, auditory alone, visual alone, and paired visual/auditory, were randomly inter-mixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout every experimental session. Compared to ERPs in the auditory alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onset can be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.

  3. A Time-Frequency Auditory Model Using Wavelet Packets

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    1996-01-01

    A time-frequency auditory model is presented. The model uses the wavelet packet analysis as the preprocessor. The auditory filters are modelled by the rounded exponential filters, and the excitation is smoothed by a window function. By comparing time-frequency excitation patterns it is shown...... that the change in the time-frequency excitation pattern introduced when a test tone at masked threshold is added to the masker is approximately equal to 7 dB for all types of maskers. The classic detection ratio therefore overrates the detection efficiency of the auditory system....
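
    The rounded-exponential (roex) filter mentioned here has a standard single-parameter form, W(g) = (1 + pg)·exp(-pg), with g the normalized deviation from the centre frequency and p set from the filter's equivalent rectangular bandwidth. A minimal sketch using the Glasberg-Moore ERB approximation (which may differ from the exact parameterization used in the paper):

    ```python
    import numpy as np

    def roex_weight(f: np.ndarray, fc: float) -> np.ndarray:
        """Rounded-exponential roex(p) filter weight at frequencies f (Hz)
        for an auditory filter centred at fc (Hz)."""
        erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)   # Glasberg & Moore (1990) ERB
        p = 4.0 * fc / erb                        # sets the passband width
        g = np.abs(f - fc) / fc                   # normalized frequency deviation
        return (1.0 + p * g) * np.exp(-p * g)

    f = np.linspace(500, 1500, 5)
    print(roex_weight(f, 1000.0))                 # weights fall off away from fc
    ```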

  4. Auditory adaptation testing as a tool for investigating tinnitus origin: two patients with vestibular schwannoma.

    Science.gov (United States)

    Silverman, Carol A; Silman, Shlomo; Emmer, Michele B

    2017-06-01

    To enhance the understanding of tinnitus origin by disseminating two case studies of vestibular schwannoma (VS) involving behavioural auditory adaptation testing (AAT). Retrospective case study. Two adults who presented with unilateral, non-pulsatile subjective tinnitus and bilateral normal-hearing sensitivity. At the initial evaluation, the otolaryngologic and audiologic findings were unremarkable, bilaterally. Upon retest, years later, VS was identified. At retest, the tinnitus disappeared in one patient and was slightly attenuated in the other patient. In the former, the results of AAT were positive for left retrocochlear pathology; in the latter, the results were negative for the left ear although a moderate degree of auditory adaptation was present despite bilateral normal-hearing sensitivity. Imaging revealed a small VS in both patients, confirmed surgically. Behavioural AAT in patients with tinnitus furnishes a useful tool for exploring tinnitus origin. Decrease or disappearance of tinnitus in patients with auditory adaptation suggests that the tinnitus generator is the cochlea or the cochlear nerve adjacent to the cochlea. Patients with unilateral tinnitus and bilateral, symmetric, normal-hearing thresholds, absent other audiovestibular symptoms, should be routinely monitored through otolaryngologic and audiologic re-evaluations. Tinnitus decrease or disappearance may constitute a red flag for retrocochlear pathology.

  5. The Effect of Noise on the Relationship between Auditory Working Memory and Comprehension in School-Age Children

    Science.gov (United States)

    Sullivan, Jessica R.; Osman, Homira; Schafer, Erin C.

    2015-01-01

    Purpose: The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Method: Children with normal hearing between the ages…

  6. Prevalence of auditory changes in newborns in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Guimarães, Valeriana de Castro

    2012-01-01

    Full Text Available Introduction: Early diagnosis of and intervention for hearing loss are of fundamental importance for child development. Hearing loss is more prevalent than other disorders detected at birth. Objective: To estimate the prevalence of auditory alterations in newborns at a teaching hospital. Method: Prospective cross-sectional study that evaluated 226 newborns delivered in a public hospital between May 2008 and May 2009. Results: Of the 226 newborns screened, 46 (20.4%) showed absent emissions and were referred for a second emission test. Of the 26 (56.5%) children who attended the retest, 8 (30.8%) still showed absent emissions and were referred to the otolaryngologist. Five (55.5%) attended and were examined by the physician. Of these, 3 (75.0%) had normal otoscopy and were referred for brainstem auditory evoked potential (PEATE) evaluation. Of the total number of children studied, 198 (87.6%) had emissions present in one of the tests, and 2 (0.9%) were diagnosed with deafness. Conclusion: The prevalence of auditory alterations in the studied population was 0.9%. The study offers relevant epidemiological data and presents the first report on the subject, providing preliminary results for the future implementation and development of a neonatal hearing screening program.

  7. Deriving cochlear delays in humans using otoacoustic emissions and auditory evoked potentials

    DEFF Research Database (Denmark)

    Pigasse, Gilles

    A great deal of the processing of incoming sounds to the auditory system occurs within the cochlea. The organ of Corti within the cochlea has differing mechanical properties along its length that broadly give rise to frequency selectivity. Its stiffness is at maximum at the base and decreases...... relation between frequency and travel time in the cochlea defines the cochlear delay. This delay is directly associated with the signal analysis occurring in the inner ear and is therefore of primary interest to get a better knowledge of this organ. It is possible to estimate the cochlear delay by direct...... and invasive techniques, but these disrupt the normal functioning of the cochlea and are usually conducted in animals. In order to obtain an estimate of the cochlear delay that is closer to the normally functioning human cochlea, the present project investigates non-invasive methods in normal hearing adults...

  8. Auditory processing efficiency deficits in children with developmental language impairments

    Science.gov (United States)

    Hartley, Douglas E. H.; Moore, David R.

    2002-12-01

    The "temporal processing hypothesis" suggests that individuals with specific language impairments (SLIs) and dyslexia have severe deficits in processing rapidly presented or brief sensory information, both within the auditory and visual domains. This hypothesis has been supported through evidence that language-impaired individuals have excess auditory backward masking. This paper presents an analysis of masking results from several studies in terms of a model of temporal resolution. Results from this modeling suggest that the masking results can be better explained by an "auditory efficiency" hypothesis. If impaired or immature listeners have a normal temporal window, but require a higher signal-to-noise level (poor processing efficiency), this hypothesis predicts the observed small deficits in the simultaneous masking task, and the much larger deficits in backward and forward masking tasks amongst those listeners. The difference in performance on these masking tasks is predictable from the compressive nonlinearity of the basilar membrane. The model also correctly predicts that backward masking (i) is more prone to training effects, (ii) has greater inter- and intrasubject variability, and (iii) increases less with masker level than do other masking tasks. These findings provide a new perspective on the mechanisms underlying communication disorders and auditory masking.
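
    The argument leans on the compressive nonlinearity of the basilar membrane. A broken-stick input-output function of the kind often used in such models illustrates the idea; the breakpoint and compression exponent below are illustrative values, not taken from the paper:

    ```python
    import numpy as np

    def bm_output_db(input_db, breakpoint_db: float = 35.0, exponent: float = 0.2):
        """Illustrative broken-stick basilar-membrane I/O function: linear growth
        below the breakpoint, compressive growth (slope = exponent) above it."""
        x = np.asarray(input_db, dtype=float)
        return np.where(x <= breakpoint_db,
                        x,
                        breakpoint_db + exponent * (x - breakpoint_db))

    levels = np.array([20.0, 40.0, 60.0, 80.0])
    print(bm_output_db(levels))   # above the knee, a 20 dB input change gives ~4 dB output change
    ```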

  9. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    Science.gov (United States)

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face many hardships with shopping, reading, finding objects, and so on. We therefore developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through the earphone. The user is able to recognize the type, motion state and location of the objects of interest with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.
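
    SoundView's auditory coding algorithms are not specified in the abstract. As a generic illustration of mapping an object's horizontal location onto a stereo signal, the sketch below applies a constant-power pan to a mono buffer; this is illustrative only and not the system's actual encoding:

    ```python
    import numpy as np

    def pan_stereo(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
        """Constant-power pan of a mono signal; azimuth -90 (left) .. +90 (right).
        Returns an array of shape (n_samples, 2)."""
        theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2.0)  # map to 0..pi/2
        left, right = np.cos(theta), np.sin(theta)
        return np.column_stack((left * mono, right * mono))

    fs = 16000
    t = np.arange(fs) / fs
    beep = 0.3 * np.sin(2 * np.pi * 880 * t)     # stand-in for a synthesized word
    stereo = pan_stereo(beep, azimuth_deg=40.0)  # object detected to the right
    ```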

  10. Evaluation of peripheral compression and auditory nerve fiber intensity coding using auditory steady-state responses

    DEFF Research Database (Denmark)

    Encina Llamas, Gerard; M. Harte, James; Epp, Bastian

    2015-01-01

    . Evaluation of these properties provides information about the health state of the system. It has been shown that a loss of outer hair cells leads to a reduction in peripheral compression. It has also recently been shown in animal studies that noise over-exposure, producing temporary threshold shifts, can....... The results indicate that the slope of the ASSR level growth function can be used to estimate peripheral compression simultaneously at four frequencies below 60 dB SPL, while the slope above 60 dB SPL may provide information about the integrity of intensity coding of low-SR fibers.......The compressive nonlinearity of the auditory system is assumed to be an epiphenomenon of a healthy cochlea and, particularly, of outer-hair cell function. Another ability of the healthy auditory system is to enable communication in acoustical environments with high-level background noises...
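
    The proposed read-out is the slope of the ASSR amplitude-versus-level (level growth) function, fitted separately below and above roughly 60 dB SPL. A minimal sketch of such a two-segment linear fit; the amplitude values are invented:

    ```python
    import numpy as np

    # Hypothetical ASSR amplitudes (nV) as a function of stimulus level (dB SPL).
    levels = np.array([30, 40, 50, 60, 70, 80], dtype=float)
    amps   = np.array([12, 18, 22, 25, 38, 52], dtype=float)

    low  = levels <= 60
    high = levels >= 60
    slope_low,  _ = np.polyfit(levels[low],  amps[low],  1)   # shallow slope: compressive region
    slope_high, _ = np.polyfit(levels[high], amps[high], 1)   # steeper slope above 60 dB SPL
    print(f"slope below 60 dB SPL: {slope_low:.2f} nV/dB, above: {slope_high:.2f} nV/dB")
    ```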

  11. Modifying Directionality through Auditory System Scaling in a Robotic Lizard

    DEFF Research Database (Denmark)

    Shaikh, Danish; Hallam, John; Christensen-Dalsgaard, Jakob

    2010-01-01

    The peripheral auditory system of a lizard is strongly directional. This directionality is created by acoustical coupling of the two eardrums and is strongly dependent on characteristics of the middle ear, such as interaural distance, resonance frequency of the middle ear cavity and of the tympanum....... Therefore, directionality should be strongly influenced by their scaling. In the present study, we have exploited an FPGA–based mobile robot based on a model of the lizard ear to investigate the influence of scaling on the directional response, in terms of the robot’s performance in a phonotaxis task...

  12. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    Science.gov (United States)

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2017-01-01

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.

  13. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation.

    Directory of Open Access Journals (Sweden)

    Andrew J Kolarik

    Full Text Available Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.

  14. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error), and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources even though the origin of the prediction stems from different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  16. Assessment of auditory cortical function in cochlear implant patients using 15O PET

    International Nuclear Information System (INIS)

    Young, J.P.; O'Sullivan, B.T.; Gibson, W.P.; Sefton, A.E.; Mitchell, T.E.; Sanli, H.; Cervantes, R.; Withall, A.; Royal Prince Alfred Hospital, Sydney,

    1998-01-01

    Full text: Cochlear implantation has been an extraordinarily successful method of restoring hearing and the potential for full language development in pre-lingually and post-lingually deaf individuals (Gibson 1996). Post-lingually deaf patients, who develop their hearing loss later in life, respond best to cochlear implantation within the first few years of their deafness, but are less responsive to implantation after several years of deafness (Gibson 1996). In pre-lingually deaf children, cochlear implantation is most effective in allowing the full development of language skills when performed within a critical period, in the first 8 years of life. These clinical observations suggest considerable neural plasticity of the human auditory cortex in acquiring and retaining language skills (Gibson 1996, Buchwald 1990). Currently, electrocochleography is used to determine the integrity of the auditory pathways to the auditory cortex. However, the functional integrity of the auditory cortex cannot be determined by this method. We have defined the extent of activation of the auditory cortex and auditory association cortex in 6 normal controls and 6 cochlear implant patients using 15O PET functional brain imaging methods. Preliminary results have indicated the potential clinical utility of 15O PET cortical mapping in the pre-surgical assessment and post-surgical follow up of cochlear implant patients. Copyright (1998) Australian Neuroscience Society

  17. Plastic changes in the central auditory system after hearing loss, restoration of function, and during learning

    Czech Academy of Sciences Publication Activity Database

    Syka, Josef

    2002-01-01

    Vol. 82, - (2002), pp. 601-636 ISSN 0031-9333 R&D Projects: GA MZd NK6454 Institutional research plan: CEZ:AV0Z5039906 Keywords: auditory system Subject RIV: FH - Neurology Impact factor: 26.533, year: 2002

  18. Auditory sensory memory in 2-year-old children: an event-related potential study.

    Science.gov (United States)

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-03-26

    Auditory sensory memory is assumed to play an important role in cognitive development, but little is known about it in young children. The aim of this study was to estimate the duration of auditory sensory memory in 2-year-old children. We recorded the mismatch negativity in response to tone stimuli presented with different interstimulus intervals. Our findings suggest that in 2-year-old children the memory representation of the standard tone remains in the sensory memory store for at least 1 s but for less than 2 s. Recording the mismatch negativity with stimuli presented at various interstimulus intervals seems to be a useful method for studying the relationship between auditory sensory memory and normal and disturbed cognitive development.

  19. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS).

    Science.gov (United States)

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  20. Altered auditory BOLD response to conspecific birdsong in zebra finches with stuttered syllables.

    Directory of Open Access Journals (Sweden)

    Henning U Voss

    2010-12-01

    Full Text Available How well a songbird learns a song appears to depend on the formation of a robust auditory template of its tutor's song. Using functional magnetic resonance neuroimaging we examine auditory responses in two groups of zebra finches that differ in the type of song they sing after being tutored by birds producing stuttering-like syllable repetitions in their songs. We find that birds that learn to produce the stuttered syntax show attenuated blood oxygenation level-dependent (BOLD) responses to tutor's song, and more pronounced responses to conspecific song primarily in the auditory area field L of the avian forebrain, when compared to birds that produce normal song. These findings are consistent with the presence of a sensory song template critical for song learning in auditory areas of the zebra finch forebrain. In addition, they suggest a relationship between an altered response related to familiarity and/or saliency of song stimuli and the production of variant songs with stuttered syllables.

  1. Expressive vocabulary and auditory processing in children with deviant speech acquisition.

    Science.gov (United States)

    Quintas, Victor Gandra; Mezzomo, Carolina Lisbôa; Keske-Soares, Márcia; Dias, Roberta Freitas

    2010-01-01

    This study examined expressive vocabulary and auditory processing in children with phonological disorder. The aim was to compare the performance of children with phonological disorder on a vocabulary test with the normative parameters of that test, and to verify a possible relationship between this performance and auditory processing deficits. Participants were 12 children diagnosed with phonological disorders, of both genders, aged 5 to 7 years. Vocabulary was assessed using the ABFW language test, and auditory processing was assessed with the simplified auditory processing evaluation (sorting), the Alternate Dichotic Dissyllable - Staggered Spondaic Word (SSW) test, the Pitch Pattern Sequence (PPS) test and the Binaural Fusion (BF) test. In the vocabulary test, the children's results did not differ significantly from the test parameters. In the auditory processing assessment, the children generally performed better than expected; the only exception was the sorting process test, where the mean accuracy score was 8.25. In the other auditory processing tests, the mean accuracy scores were 6.50 for the SSW, 10.74 for the PPS and 7.10 for the BF. When the performance on the two assessments was correlated (p > 0.05), the results indicated that, despite the overall normality, the lower the score obtained in the auditory processing assessment, the lower the accuracy presented in the vocabulary test. A trend was observed for the semantic fields of "means of transportation" and "professions". Considering the classification categories of the vocabulary test, substitution processes (SP) showed the most significant increase in all semantic fields. There is thus a correlation between auditory processing and the lexicon, whereby vocabulary can be influenced in children with deviant speech acquisition.

  2. Auditory Short-Term Memory Capacity Correlates with Gray Matter Density in the Left Posterior STS in Cognitively Normal and Dyslexic Adults

    Science.gov (United States)

    Richardson, Fiona M.; Ramsden, Sue; Ellis, Caroline; Burnett, Stephanie; Megnin, Odette; Catmur, Caroline; Schofield, Tom M.; Leff, Alex P.; Price, Cathy J.

    2011-01-01

    A central feature of auditory STM is its item-limited processing capacity. We investigated whether auditory STM capacity correlated with regional gray and white matter in the structural MRI images from 74 healthy adults, 40 of whom had a prior diagnosis of developmental dyslexia whereas 34 had no history of any cognitive impairment. Using…

  3. Presbycusis and auditory brainstem responses: a review

    Directory of Open Access Journals (Sweden)

    Shilpa Khullar

    2011-06-01

    Full Text Available Age-related hearing loss or presbycusis is a complex phenomenon consisting of elevation of hearing levels as well as changes in auditory processing. It is commonly classified into four categories depending on the cause. Auditory brainstem responses (ABRs) are a type of early evoked potentials recorded within the first 10 ms of stimulation. They represent the synchronized activity of the auditory nerve and the brainstem. Some of the changes that occur in the aging auditory system may significantly influence the interpretation of the ABRs in comparison with the ABRs of young adults. The waves of ABRs are described in terms of amplitude, latencies and interpeak latency of the different waves. There is a tendency for the amplitude to decrease and the absolute latencies to increase with advancing age, but these trends are not always clear because the increase in hearing threshold with advancing age acts as a major confounding factor in the interpretation of ABRs.

  4. Improvement of auditory hallucinations and reduction of primary auditory area's activation following TMS

    International Nuclear Information System (INIS)

    Giesel, Frederik L.; Mehndiratta, Amit; Hempel, Albrecht; Hempel, Eckhard; Kress, Kai R.; Essig, Marco; Schröder, Johannes

    2012-01-01

    Background: In the present case study, improvement of auditory hallucinations following transcranial magnetic stimulation (TMS) therapy was investigated with respect to activation changes of the auditory cortices. Methods: Using functional magnetic resonance imaging (fMRI), activation of the auditory cortices was assessed prior to and after a 4-week TMS series of the left superior temporal gyrus in a schizophrenic patient with medication-resistant auditory hallucinations. Results: Hallucinations decreased slightly after the third and profoundly after the fourth week of TMS. Activation in the primary auditory area decreased, whereas activation in the operculum and insula remained stable. Conclusions: Combination of TMS and repetitive fMRI is promising to elucidate the physiological changes induced by TMS.

  5. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in the EEG data using MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in the CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Audiovisual sentence repetition as a clinical criterion for auditory development in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Rahimi, Zahra; Mayahi, Anis

    2017-02-01

    It is important for clinicians such as speech-language pathologists and audiologists to develop more efficient procedures to assess the development of auditory, speech and language skills in children using hearing aids and/or cochlear implants compared to their peers with normal hearing. Thus, the aim of this study was to compare the performance of 5-to-7-year-old Persian-language children with and without hearing loss on visual-only, auditory-only, and audiovisual presentations of a sentence repetition task. The research was administered as a cross-sectional study. The sample comprised 92 Persian 5-7 year old children: 60 with normal hearing and 32 with hearing loss. The children with hearing loss were recruited from the Soroush rehabilitation center for Persian-language children with hearing loss in Shiraz, Iran, through a consecutive sampling method. All the children had a unilateral cochlear implant or bilateral hearing aids. The assessment tool was the Sentence Repetition Test. The study included three computer-based experiments: visual-only, auditory-only, and audiovisual presentation. The scores were compared within and among the three groups through statistical tests at α = 0.05. Sentence repetition scores differed significantly between V-only, A-only, and AV presentation in the three groups; in other words, the highest to lowest scores belonged, respectively, to the audiovisual, auditory-only, and visual-only formats in the children with normal hearing. Audiovisual sentence repetition scores were only weakly correlated with visual-only scores in all the 5-to-7-year-old children (r = 0.179, n = 92, P = 0.088), but were strongly correlated with auditory-only scores in all the 5-to-7-year-old children (r = 0.943, n = 92, P = 0.000). According to the study's findings, audiovisual integration occurs in 5-to-7-year-old Persian children using hearing aids or cochlear implants during sentence repetition, similar to their peers with normal hearing.

  7. Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.

    Science.gov (United States)

    Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C

    2015-11-04

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition

  8. Developmental profiles of the intrinsic properties and synaptic function of auditory neurons in preterm and term baboon neonates.

    Science.gov (United States)

    Kim, Sei Eun; Lee, Seul Yi; Blanco, Cynthia L; Kim, Jun Hee

    2014-08-20

    The human fetus starts to hear and undergoes major developmental changes in the auditory system during the third trimester of pregnancy. Although there are significant data regarding development of the auditory system in rodents, changes in the intrinsic properties and synaptic function of auditory neurons in the developing primate brain at hearing onset are poorly understood. We performed whole-cell patch-clamp recordings of principal neurons in the medial nucleus of the trapezoid body (MNTB) in preterm and term baboon brainstem slices to study the structural and functional maturation of auditory synapses. Each MNTB principal neuron received an excitatory input from a single calyx of Held terminal, and this one-to-one pattern of innervation was already formed in preterm baboons delivered at 67% of normal gestation. There was no difference in frequency or amplitude of spontaneous excitatory postsynaptic currents between preterm and term MNTB neurons. In contrast, the frequency of spontaneous GABA(A)/glycine receptor-mediated inhibitory postsynaptic currents, which were prevalent in preterm MNTB neurons, was significantly reduced in term MNTB neurons. Preterm MNTB neurons had a higher input resistance than term neurons and fired in bursts, whereas term MNTB neurons fired a single action potential in response to suprathreshold current injection. The maturation of intrinsic properties and dominance of excitatory inputs in the primate MNTB allow it to take on its mature role as a fast and reliable relay synapse. Copyright © 2014 the authors.

  9. Dichotic and dichoptic digit perception in normal adults.

    Science.gov (United States)

    Lawfield, Angela; McFarland, Dennis J; Cacace, Anthony T

    2011-06-01

    Verbally based dichotic-listening experiments and reproduction-mediated response-selection strategies have been used for over four decades to study perceptual/cognitive aspects of auditory information processing and make inferences about hemispheric asymmetries and language lateralization in the brain. Test procedures using dichotic digits have also been used to assess for disorders of auditory processing. However, with this application, limitations exist and paradigms need to be developed to improve specificity of the diagnosis. Use of matched tasks in multiple sensory modalities is a logical approach to address this issue. Herein, we use dichotic listening and dichoptic viewing of visually presented digits for making this comparison. To evaluate methodological issues involved in using matched tasks of dichotic listening and dichoptic viewing in normal adults. A multivariate assessment of the effects of modality (auditory vs. visual), digit-span length (1-3 pairs), response selection (recognition vs. reproduction), and ear/visual hemifield of presentation (left vs. right) on dichotic and dichoptic digit perception. Thirty adults (12 males, 18 females) ranging in age from 18 to 30 yr with normal hearing sensitivity and normal or corrected-to-normal visual acuity. A computerized, custom-designed program was used for all data collection and analysis. A four-way repeated measures analysis of variance (ANOVA) evaluated the effects of modality, digit-span length, response selection, and ear/visual field of presentation. The ANOVA revealed that performances on dichotic listening and dichoptic viewing tasks were dependent on complex interactions between modality, digit-span length, response selection, and ear/visual hemifield of presentation. Correlation analysis suggested a common effect on overall accuracy of performance but isolated only an auditory factor for a laterality index. The variables used in this experiment affected performances in the auditory modality to a

  10. Auditory verbal memory and psychosocial symptoms are related in children with idiopathic epilepsy.

    Science.gov (United States)

    Schaffer, Yael; Ben Zeev, Bruria; Cohen, Roni; Shuper, Avinoam; Geva, Ronny

    2015-07-01

    Idiopathic epilepsies are considered to have relatively good prognoses and normal or near normal developmental outcomes. Nevertheless, accumulating studies demonstrate memory and psychosocial deficits in this population, and the prevalence, severity and relationships between these domains are still not well defined. We aimed to assess memory, psychosocial function, and the relationships between these two domains among children with idiopathic epilepsy syndromes using an extended neuropsychological battery and psychosocial questionnaires. Cognitive abilities, neuropsychological performance, and socioemotional behavior of 33 early adolescent children diagnosed with idiopathic epilepsy, ages 9-14 years, were assessed and compared with 27 age- and education-matched healthy controls. Compared to controls, patients with stabilized idiopathic epilepsy exhibited higher risks for short-term memory deficits (auditory verbal and visual) as well as long-term memory deficits. The extent of memory deficits was related to the severity of psychosocial symptoms among the children with epilepsy but not in the healthy controls. Results suggest that deficient auditory verbal memory may be compromising psychosocial functioning in children with idiopathic epilepsy, possibly underscoring that cognitive variables, such as auditory verbal memory, should be assessed and treated in this population to prevent secondary symptoms. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Comprehensive evaluation of a child with an auditory brainstem implant.

    Science.gov (United States)

    Eisenberg, Laurie S; Johnson, Karen C; Martinez, Amy S; DesJardin, Jean L; Stika, Carren J; Dzubak, Danielle; Mahalak, Mandy Lutz; Rector, Emily P

    2008-02-01

    We had an opportunity to evaluate an American child whose family traveled to Italy to receive an auditory brainstem implant (ABI). The goal of this evaluation was to obtain insight into possible benefits derived from the ABI and to begin developing assessment protocols for pediatric clinical trials. Case study. Tertiary referral center. Pediatric ABI Patient 1 was born with auditory nerve agenesis. Auditory brainstem implant surgery was performed in December, 2005, in Verona, Italy. The child was assessed at the House Ear Institute, Los Angeles, in July 2006 at the age of 3 years 11 months. Follow-up assessment has continued at the HEAR Center in Birmingham, Alabama. Auditory brainstem implant. Performance was assessed for the domains of audition, speech and language, intelligence and behavior, quality of life, and parental factors. Patient 1 demonstrated detection of sound, speech pattern perception with visual cues, and inconsistent auditory-only vowel discrimination. Language age with signs was approximately 2 years, and vocalizations were increasing. Of normal intelligence, he exhibited attention deficits with difficulty completing structured tasks. Twelve months later, this child was able to identify speech patterns consistently; closed-set word identification was emerging. These results were within the range of performance for a small sample of similarly aged pediatric cochlear implant users. Pediatric ABI assessment with a group of well-selected children is needed to examine risk versus benefit in this population and to analyze whether open-set speech recognition is achievable.

  12. Testing resonating vector strength: Auditory system, electric fish, and noise

    Science.gov (United States)

    Leo van Hemmen, J.; Longtin, André; Vollmayr, Andreas N.

    2011-12-01

    Quite often a response to some input with a specific frequency ν₀ can be described through a sequence of discrete events. Here, we study the synchrony vector, whose length stands for the vector strength, and in doing so focus on neuronal response in terms of spike times. The latter are supposed to be given by experiment. Instead of singling out the stimulus frequency ν₀ we study the synchrony vector as a function of the real frequency variable ν. Its length turns out to be a resonating vector strength in that it shows clear maxima in the neighborhood of ν₀ and multiples thereof, hence, allowing an easy way of determining response frequencies. We study this "resonating" vector strength for two concrete but rather different cases, viz., a specific midbrain neuron in the auditory system of cat and a primary detector neuron belonging to the electric sense of the wave-type electric fish Apteronotus leptorhynchus. We show that the resonating vector strength always performs a clear resonance correlated with the phase locking that it quantifies. We analyze the influence of noise and demonstrate how well the resonance associated with maximal vector strength indicates the dominant stimulus frequency. Furthermore, we exhibit how one can obtain a specific phase associated with, for instance, a delay in auditory analysis.
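
    The synchrony-vector computation described in this record follows directly from the definition of vector strength, and a minimal NumPy sketch is given below. The function name, toy stimulus frequency, spike-time jitter, and frequency grid are illustrative assumptions rather than values from the paper; the sketch simply evaluates the length of the synchrony vector over a range of analysis frequencies ν, so that peaks near ν₀ and its multiples indicate phase locking.

      import numpy as np

      def resonating_vector_strength(spike_times, freqs):
          """Vector strength as a function of the analysis frequency.

          spike_times : 1-D array of event (spike) times in seconds.
          freqs       : 1-D array of candidate frequencies in Hz.
          Returns r(nu) in [0, 1]; values near 1 indicate tight phase locking.
          """
          spike_times = np.asarray(spike_times, dtype=float)
          freqs = np.asarray(freqs, dtype=float)
          # Phase of every spike with respect to every candidate frequency.
          phases = 2.0 * np.pi * np.outer(freqs, spike_times)   # shape (F, N)
          # Length of the mean resultant (synchrony) vector per frequency.
          return np.abs(np.exp(1j * phases).mean(axis=1))

      # Toy usage (assumed values): spikes locked to a 200 Hz stimulus with jitter.
      rng = np.random.default_rng(0)
      nu0 = 200.0
      spikes = np.arange(0.0, 0.5, 1.0 / nu0) + rng.normal(0.0, 4e-4, 100)
      freqs = np.linspace(50.0, 500.0, 451)
      r = resonating_vector_strength(spikes, freqs)
      print(f"peak at ~{freqs[np.argmax(r)]:.0f} Hz, vector strength {r.max():.2f}")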

  13. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

    .... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and insure that what is said is intelligible...

  14. Pip and pop : Non-spatial auditory signals improve spatial visual search

    NARCIS (Netherlands)

    Burg, E. van der; Olivers, C.N.L.; Bronkhorst, A.W.; Theeuwes, J.

    2008-01-01

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though

  15. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Background: In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results: We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion: The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  16. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  17. Differential Recruitment of Auditory Cortices in the Consolidation of Recent Auditory Fearful Memories.

    Science.gov (United States)

    Cambiaghi, Marco; Grosso, Anna; Renna, Annamaria; Sacchetti, Benedetto

    2016-08-17

    Memories of frightening events require a protracted consolidation process. Sensory cortex, such as the auditory cortex, is involved in the formation of fearful memories with a more complex sensory stimulus pattern. It remains controversial, however, whether the auditory cortex is also required for fearful memories related to simple sensory stimuli. In the present study, we found that, 1 d after training, the temporary inactivation of either the most anterior region of the auditory cortex, including the primary (Te1) cortex, or the most posterior region, which included the secondary (Te2) component, did not affect the retention of recent memories, which is consistent with the current literature. However, at this time point, the inactivation of the entire auditory cortices completely prevented the formation of new memories. Amnesia was site specific and was not due to auditory stimuli perception or processing and strictly related to the interference with memory consolidation processes. Strikingly, at a late time interval 4 d after training, blocking the posterior part (encompassing the Te2) alone impaired memory retention, whereas the inactivation of the anterior part (encompassing the Te1) left memory unaffected. Together, these data show that the auditory cortex is necessary for the consolidation of auditory fearful memories related to simple tones in rats. Moreover, these results suggest that, at early time intervals, memory information is processed in a distributed network composed of both the anterior and the posterior auditory cortical regions, whereas, at late time intervals, memory processing is concentrated in the most posterior part containing the Te2 region. Memories of threatening experiences undergo a prolonged process of "consolidation" to be maintained for a long time. The dynamic of fearful memory consolidation is poorly understood. Here, we show that 1 d after learning, memory is processed in a distributed network composed of both primary Te1 and

  18. Measuring the performance of visual to auditory information conversion.

    Directory of Open Access Journals (Sweden)

    Shern Shiou Tan

    Full Text Available BACKGROUND: Visual to auditory conversion systems have been in existence for several decades. Besides being among the front runners in providing visual capabilities to blind users, the auditory cues generated from image sonification systems are still easier to learn and adapt to compared to other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure the performance of such systems. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank the systems accordingly. METHODOLOGY: Performance is measured by both the interpretability and also the information preservation of visual to auditory conversions. Interpretability is measured by computing the correlation of inter image distance (IID) and inter sound distance (ISD), whereas the information preservation is computed by applying Information Theory to measure the entropy of both visual and corresponding auditory signals. These measurements provide a basis and some insights on how the systems work. CONCLUSIONS: With an automated interpretability measure as a standard, more image sonification systems can be developed, compared, and then improved. Even though the measure does not test systems as thoroughly as carefully designed psychological experiments, a quantitative measurement like the one proposed here can compare systems to a certain degree without incurring much cost. Underlying this research is the hope that a major breakthrough in image sonification systems will allow blind users to cost effectively regain enough visual functions to allow them to lead secure and productive lives.
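
    As a rough illustration of the two measurements described in this record, the sketch below correlates inter image distances (IID) with inter sound distances (ISD) over the same set of items and estimates entropy from an amplitude histogram. The use of Euclidean distance, Pearson correlation, a 64-bin histogram, and the toy random data are assumptions made for illustration; the published system may define these quantities differently.

      import numpy as np
      from itertools import combinations
      from scipy.stats import pearsonr

      def pairwise_distances(vectors):
          """Euclidean distance for every unordered pair of feature vectors."""
          return np.array([np.linalg.norm(a - b) for a, b in combinations(vectors, 2)])

      def interpretability(image_features, sound_features):
          """Correlation of inter image distances (IID) with inter sound distances
          (ISD); higher values mean that similar images map to similar sounds."""
          iid = pairwise_distances(image_features)
          isd = pairwise_distances(sound_features)
          r, _ = pearsonr(iid, isd)
          return r

      def histogram_entropy(signal, bins=64):
          """Shannon entropy (bits) of a signal's amplitude histogram, used as a
          crude proxy for information content."""
          counts, _ = np.histogram(signal, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return float(-(p * np.log2(p)).sum())

      # Toy usage: 10 images (flattened pixel vectors) and a stand-in sonification.
      rng = np.random.default_rng(1)
      images = rng.random((10, 64))
      sounds = images @ rng.random((64, 128))   # placeholder for a real image-to-sound mapping
      print(interpretability(images, sounds), histogram_entropy(sounds[0]))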

  19. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    Science.gov (United States)

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine the direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible.

  20. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    Directory of Open Access Journals (Sweden)

    Scott A Stone

    Full Text Available Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine the direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible.
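
    The kind of event-to-sound mapping such a system performs can be sketched as follows. This is a hypothetical mapping, not the one used by the authors: horizontal pixel position drives stereo panning and vertical position drives pitch; only the 240 x 180 sensor resolution matches the DAVIS 240B, while the tone duration, frequency range, and panning law are illustrative assumptions.

      import numpy as np

      SAMPLE_RATE = 44_100
      WIDTH, HEIGHT = 240, 180          # DAVIS 240B sensor resolution

      def event_to_tone(x, y, duration=0.05, f_low=300.0, f_high=3000.0):
          """Render one brightness-change event as a short spatialized tone.

          x, y : pixel coordinates of the event.
          Horizontal position controls stereo panning, vertical position
          controls pitch (higher on the sensor = higher pitch).
          Returns an (n_samples, 2) stereo buffer.
          """
          t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
          # Map the row to a log-spaced frequency so pitch steps sound even.
          frac = 1.0 - y / (HEIGHT - 1)
          freq = f_low * (f_high / f_low) ** frac
          tone = np.sin(2.0 * np.pi * freq * t) * np.hanning(t.size)
          # Constant-power pan derived from the column position.
          pan = x / (WIDTH - 1)                     # 0 = far left, 1 = far right
          left, right = np.cos(pan * np.pi / 2), np.sin(pan * np.pi / 2)
          return np.column_stack([tone * left, tone * right])

      # Example: an event near the top-left of the sensor becomes a high tone panned left.
      buffer = event_to_tone(x=20, y=15)
      print(buffer.shape)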

  1. Auditory Brainstem Response Wave Amplitude Characteristics as a Diagnostic Tool in Children with Speech Delay with Unknown Causes

    Directory of Open Access Journals (Sweden)

    Susan Abadi

    2016-09-01

    Full Text Available Speech delay with an unknown cause is a problem among children. This diagnosis is the last differential diagnosis after observing normal findings in routine hearing tests. The present study was undertaken to determine whether auditory brainstem responses to click stimuli are different between normally developing children and children suffering from delayed speech with unknown causes. In this cross-sectional study, we compared click auditory brainstem responses between 261 children who were clinically diagnosed with delayed speech with unknown causes based on normal routine auditory test findings and neurological examinations and had >12 months of speech delay (case group) and 261 age- and sex-matched normally developing children (control group). Our results indicated that the case group exhibited significantly higher wave amplitude responses to click stimuli (waves I, III, and V) than did the control group (P=0.001). These amplitudes were significantly reduced after 1 year (P=0.001); however, they were still significantly higher than those of the control group (P=0.001). The significant differences were seen regardless of the age and the sex of the participants. There were no statistically significant differences between the 2 groups considering the latency of waves I, III, and V. In conclusion, the higher amplitudes of waves I, III, and V, which were observed in the auditory brainstem responses to click stimuli among the patients with speech delay with unknown causes, might be used as a diagnostic tool to track patients’ improvement after treatment.

  2. Auditory Peripheral Processing of Degraded Speech

    National Research Council Canada - National Science Library

    Ghitza, Oded

    2003-01-01

    ...". The underlying thesis is that the auditory periphery contributes to the robust performance of humans in speech reception in noise through a concerted contribution of the efferent feedback system...

  3. Tinnitus and other auditory problems - occupational noise exposure below risk limits may cause inner ear dysfunction.

    Science.gov (United States)

    Lindblad, Ann-Cathrine; Rosenhall, Ulf; Olofsson, Åke; Hagerman, Björn

    2014-01-01

    The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found and possibly explain hearing problems in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds, who could be included in one of three subgroups: teachers, Education; people working with music, Music; and people with moderate or negligible noise exposure, Other. A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise, Industry. Ntotal = 193. The following hearing tests were used: - pure tone audiometry with Békésy technique, - transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise; - psychoacoustical modulation transfer function, - forward masking, - speech recognition in noise, - tinnitus matching. A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was addressed to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in the normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems, combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group exposed to noise

  4. Tinnitus and other auditory problems - occupational noise exposure below risk limits may cause inner ear dysfunction.

    Directory of Open Access Journals (Sweden)

    Ann-Cathrine Lindblad

    Full Text Available The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found and possibly explain hearing problems in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds, who could be included in one of three subgroups: teachers, Education; people working with music, Music; and people with moderate or negligible noise exposure, Other. A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise, Industry. Ntotal = 193. The following hearing tests were used: - pure tone audiometry with Békésy technique, - transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise; - psychoacoustical modulation transfer function, - forward masking, - speech recognition in noise, - tinnitus matching. A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was addressed to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in the normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems, combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group

  5. The maturational process of the auditory system in the first year of life characterized by brainstem auditory evoked potentials

    Directory of Open Access Journals (Sweden)

    Raquel Beltrão Amorim

    2009-01-01

    Full Text Available The study of brainstem auditory evoked potentials (BAEP) allows obtaining the electrophysiological activity generated from the cochlear nerve to the inferior colliculus. In the first months of life, a period of greater neuronal plasticity, important changes are observed in the absolute latency and inter-peak intervals of the BAEP, which occur up to the completion of the maturational process, around 18 months of life in full-term newborns, when the response is similar to that of adults. OBJECTIVE: The goal of this study was to establish normal values of absolute latencies for waves I, III and V and inter-peak intervals I-III, III-V and I-V of the BAEP performed in full-term infants attending the Infant Hearing Health Program of the Speech-Language Pathology and Audiology Course at Bauru School of Dentistry, Brazil, with no risk history for hearing impairment. MATERIAL AND METHODS: The stimulation parameters were: rarefaction click stimulus presented through a 3A insert earphone, intensity of 80 dBnHL, a rate of 21.1 c/s, a band-pass filter of 30 to 3,000 Hz and an average of 2,000 stimuli. A sample of 86 infants was first divided according to gestational age into preterm (n=12) and full-term (n=74) groups, and then according to chronological age into three periods: P1: 0 to 29 days (n=46), P2: 30 days to 5 months 29 days (n=28) and P3: above 6 months (n=12). RESULTS: The absolute latency of wave I was similar to that of adults, generally by the 1st month of life, demonstrating complete maturation of the auditory nerve. For waves III and V, there was a gradual decrease of absolute latencies with age, characterizing the maturation of axons and synaptic mechanisms at the brainstem level. CONCLUSION: Age proved to be a determining factor in the absolute latency of the BAEP components, especially those generated in the brainstem, in the first year of life.

  6. Psychometric properties of Persian version of the Sustained Auditory Attention Capacity Test in children with attention deficit-hyperactivity disorder.

    Science.gov (United States)

    Soltanparast, Sanaz; Jafari, Zahra; Sameni, Seyed Jalal; Salehi, Masoud

    2014-01-01

    The purpose of the present study was to evaluate the psychometric properties (validity and reliability) of the Persian version of the Sustained Auditory Attention Capacity Test in children with attention deficit hyperactivity disorder. The Persian version of the Sustained Auditory Attention Capacity Test was constructed to assess sustained auditory attention using the method provided by Feniman and colleagues (2007). The test yields indicators of the child's attentional deficit, namely inattention and impulsiveness errors, the total score of the sustained auditory attention capacity test, and an attention span reduction index. To determine validity and reliability, 46 normal children and 41 children with attention deficit hyperactivity disorder (ADHD), all right-handed, aged between 7 and 11 years, and of both genders, were evaluated with both the Rey Auditory Verbal Learning Test and the Persian version of the Sustained Auditory Attention Capacity Test (SAACT). In determining convergent validity, a negative significant correlation was found between the three parts of the Rey Auditory Verbal Learning Test (first, fifth, and immediate recall) and all indicators of the SAACT except attention span reduction. By comparing the test scores between the normal and ADHD groups, discriminant validity analysis showed significant differences in all indicators of the test except for attention span reduction. The Persian version of the Sustained Auditory Attention Capacity Test therefore has good validity and reliability, matches other reliable tests, and can be used for the identification of children with attention deficits and children suspected of having attention deficit hyperactivity disorder.

  7. Systemic inhibition of mTOR kinase via rapamycin disrupts consolidation and reconsolidation of auditory fear memory.

    Science.gov (United States)

    Mac Callum, Phillip E; Hebert, Mark; Adamec, Robert E; Blundell, Jacqueline

    2014-07-01

    The mammalian target of rapamycin (mTOR) kinase is a critical regulator of mRNA translation and is known to be involved in various long lasting forms of synaptic and behavioural plasticity. However, information concerning the temporal pattern of mTOR activation and susceptibility to pharmacological intervention during both consolidation and reconsolidation of long-term memory (LTM) remains scant. Male C57BL/6 mice were injected systemically with rapamycin at various time points following conditioning or retrieval in an auditory fear conditioning paradigm, and compared to vehicle (and/or anisomycin) controls for subsequent memory recall. Systemic blockade of mTOR with rapamycin immediately or 12h after training or reactivation impairs both consolidation and reconsolidation of an auditory fear memory. Further behavioural analysis revealed that the enduring effects of rapamycin on reconsolidation are dependent upon reactivation of the memory trace. Rapamycin, however, has no effect on short-term memory or the ability to retrieve an established fear memory. Collectively, our data suggest that biphasic mTOR signalling is essential for both consolidation and reconsolidation-like activities that contribute to the formation, re-stabilization, and persistence of long term auditory-fear memories, while not influencing other aspects of the memory trace. These findings also provide evidence for a cogent treatment model for reducing the emotional strength of established, traumatic memories analogous to those observed in acquired anxiety disorders such as posttraumatic stress disorder (PTSD) and specific phobias, through pharmacologic blockade of mTOR using systemic rapamycin following reactivation. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Tae, Woo Suk [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Yakunina, Natalia; Nam, Eui-Cheol [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Otolaryngology, School of Medicine, Chuncheon, Kangwon-do (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kim, Sam Soo [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Radiology, School of Medicine, Chuncheon (Korea, Republic of)

    2014-07-15

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)

  9. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Tae, Woo Suk; Yakunina, Natalia; Nam, Eui-Cheol; Kim, Tae Su; Kim, Sam Soo

    2014-01-01

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)
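
    To make the acquisition scheme in this record concrete, the snippet below lays out one run of a sparse-sampling block design with TR = 8 s and four sound-on/off alternations; the number of volumes acquired per block is not stated in the record, so the value used here is purely an illustrative assumption.

      # Sketch of a sparse-sampling block design: one volume every TR seconds,
      # alternating sound-on and sound-off blocks within a single run.
      TR = 8.0                 # seconds between volume acquisitions (sparse sampling)
      VOLS_PER_BLOCK = 4       # assumed number of volumes per on/off block
      N_ALTERNATIONS = 4       # sound-on/off alternations per run (from the record)

      schedule = []            # (acquisition time in s, condition) for each volume
      t = 0.0
      for _ in range(N_ALTERNATIONS):
          for condition in ("sound_on", "sound_off"):
              for _ in range(VOLS_PER_BLOCK):
                  schedule.append((t, condition))
                  t += TR

      print(f"{len(schedule)} volumes, run length {t:.0f} s")
      for acq_time, condition in schedule[:4]:
          print(f"{acq_time:6.1f} s  {condition}")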

  10. Modulation of mGlu2 Receptors, but Not PDE10A Inhibition Normalizes Pharmacologically-Induced Deviance in Auditory Evoked Potentials and Oscillations in Conscious Rats.

    Directory of Open Access Journals (Sweden)

    Abdallah Ahnaou

    Full Text Available Improvement of cognitive impairments represents a high medical need in the development of new antipsychotics. Aberrant EEG gamma oscillations and reductions in the P1/N1 complex peak amplitude of the auditory evoked potential (AEP) are neurophysiological biomarkers for schizophrenia that indicate disruption in sensory information processing. Inhibition of phosphodiesterase (i.e., PDE10A) and activation of metabotropic glutamate receptor (mGluR2) signaling are believed to provide antipsychotic efficacy in schizophrenia, but it is unclear whether this occurs with cognition-enhancing potential. The present study used the auditory paired-click paradigm in passive awake Sprague Dawley rats to (1) model disruption of AEP waveforms and oscillations as observed in schizophrenia by peripheral administration of amphetamine and the N-methyl-D-aspartate (NMDA) antagonist phencyclidine (PCP); (2) confirm the potential of the antipsychotics risperidone and olanzapine to attenuate these disruptions; and (3) evaluate the potential of the mGluR2 agonist LY404039 and the PDE10 inhibitor PQ-10 to improve AEP deficits in both the amphetamine and PCP models. PCP and amphetamine disrupted auditory information processing to the first click, associated with suppression of the P1/N1 complex peak amplitude, and increased cortical gamma oscillations. Risperidone and olanzapine normalized PCP- and amphetamine-induced abnormalities in AEP waveforms and aberrant gamma/alpha oscillations, respectively. LY404039 increased P1/N1 complex peak amplitudes and potently attenuated the disruptive effects of both PCP and amphetamine on AEP amplitudes and oscillations. However, PQ-10 failed to show such an effect in either model. These outcomes indicate that modulation of mGluR2 results in effective restoration of abnormalities in AEP components in two widely used animal models of psychosis, whereas PDE10A inhibition does not.
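
    As a rough illustration of the oscillation measure mentioned above, the sketch below band-pass filters a synthetic EEG epoch in the gamma range and computes a simple power index. The sampling rate, band edges, and data are assumptions for demonstration and do not reproduce the study's analysis pipeline.

    # Minimal sketch: gamma-band power of one EEG epoch (synthetic data).
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 1000.0                           # Hz, assumed sampling rate
    epoch = np.random.randn(int(fs))      # 1-s synthetic epoch standing in for EEG

    b, a = butter(4, [30.0, 80.0], btype="bandpass", fs=fs)  # assumed gamma band
    gamma = filtfilt(b, a, epoch)
    gamma_power = np.mean(gamma ** 2)     # mean squared amplitude as a power index
    print(f"gamma-band power index: {gamma_power:.4f}")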

  11. For Better or Worse: The Effect of Prismatic Adaptation on Auditory Neglect

    Directory of Open Access Journals (Sweden)

    Isabel Tissieres

    2017-01-01

    Full Text Available Patients with auditory neglect attend less to auditory stimuli on their left and/or make systematic directional errors when indicating sound positions. Rightward prismatic adaptation (R-PA) was repeatedly shown to alleviate symptoms of visuospatial neglect and once to partially restore spatial bias in dichotic listening. It is currently unknown whether R-PA affects only this ear-related symptom or also other aspects of auditory neglect. We have investigated the effect of R-PA on left ear extinction in dichotic listening, space-related inattention assessed by diotic listening, and directional errors in auditory localization in patients with auditory neglect. The most striking effect of R-PA was the alleviation of left ear extinction in dichotic listening, which occurred in half of the patients with an initial deficit. In contrast to nonresponders, their lesions spared the right dorsal attentional system and posterior temporal cortex. The beneficial effect of R-PA on an ear-related performance contrasted with detrimental effects on diotic listening and auditory localization. The former can be parsimoniously explained by the SHD-VAS model (shift in hemispheric dominance within the ventral attentional system; Clarke and Crottaz-Herbette 2016), which is based on the R-PA-induced shift of the right-dominant ventral attentional system to the left hemisphere. The negative effects in space-related tasks may be due to the complex nature of auditory space encoding at a cortical level.

  12. Nontumorous enlargement of the internal auditory canal. A risk factor for sensorineural hearing loss? A high resolution CT-study

    Energy Technology Data Exchange (ETDEWEB)

    Stimmer, H.; Rummeny, E.J. [Technical University Munich, Klinikum rechts der Isar (Germany). Dept. of Radiology; Niedermeyer, H.P. [Technical University Munich, Klinikum rechts der Isar (Germany). ENT-Clinic; Kehl, V. [Technical University Munich, Klinikum rechts der Isar (Germany). Inst. for Medical Statistics and Epidemiology

    2015-06-15

    The first aim of the study was to define the normal shape and diameter of the internal auditory canal (IAC). In the second part, the clinical relevance of IAC enlargement was analyzed, also considering lesions of the subtle structures at the fundus of the internal auditory canal. 440 high-resolution CT scans of the temporal bone were used for retrospective analysis of the internal auditory canal and its fundus region. The mean value of the IAC diameter in the axial and coronal planes was determined. In 20 of 440 patients, IAC enlargement was found. In the group with pronounced enlargement (3-fold SD), nearly all patients suffered from hearing impairment. In some of them we found structural abnormalities near the IAC fundus in the CSF/perilymph border zone. A new CT-based definition of the normal shape and diameter of the internal auditory canal is presented. There is some evidence that pathologic transmission of CSF pressure in cases of IAC enlargement and/or abnormal fistulous communications could play an important role in the pathophysiology of hearing loss.

  13. Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation

    Directory of Open Access Journals (Sweden)

    Enrique A Lopez-Poveda

    2013-07-01

    Full Text Available Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.
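
    The vocoder logic described above lends itself to a compact numerical sketch: split the signal into ten adjacent bands, generate several stochastically subsampled copies of each band with a sampling probability tied to instantaneous intensity, and aggregate the copies. The Python code below is only a schematic rendering of that idea; the band edges, number of samplers, and probability rule are assumptions rather than the authors' parameters.

    # Schematic sketch of stochastic undersampling across frequency bands.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def stochastic_undersample(band_signal, n_samplers, rng):
        envelope = np.abs(band_signal)
        keep_prob = envelope / (envelope.max() + 1e-12)      # louder -> more "spikes"
        copies = []
        for _ in range(n_samplers):
            mask = rng.random(band_signal.size) < keep_prob  # one stochastic sampler
            copies.append(band_signal * mask)
        return np.mean(copies, axis=0)                       # aggregate the samplers

    fs = 16000
    t = np.arange(0, 0.5, 1 / fs)
    signal = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
    edges = np.geomspace(100, 7000, 11)                      # ten adjacent bands (assumed)
    rng = np.random.default_rng(0)

    processed = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        processed += stochastic_undersample(band, n_samplers=5, rng=rng)

    Lowering n_samplers in this sketch crudely mimics a loss of afferents and degrades the low-intensity portions of the reconstructed waveform most, in line with the hypothesis stated in the abstract.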

  14. Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation

    Science.gov (United States)

    Lopez-Poveda, Enrique A.; Barrios, Pablo

    2013-01-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176

  15. Structural changes in the adult rat auditory system induced by brief postnatal noise exposure

    Czech Academy of Sciences Publication Activity Database

    Ouda, Ladislav; Burianová, Jana; Balogová, Zuzana; Lu, H. P.; Syka, Josef

    2016-01-01

    Roč. 221, č. 1 (2016), s. 617-629 ISSN 1863-2653 R&D Projects: GA ČR(CZ) GCP303/11/J005; GA ČR(CZ) GAP303/12/1347; GA ČR(CZ) GBP304/12/G069 Institutional support: RVO:68378041 Keywords : noise exposure * critical period * central auditory system Subject RIV: FH - Neurology Impact factor: 4.698, year: 2016

  16. Manipulation of Auditory Inputs as Rehabilitation Therapy for Maladaptive Auditory Cortical Reorganization

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2018-01-01

    Full Text Available Neurophysiological and neuroimaging data suggest that the brains of not only children but also adults are reorganized based on sensory inputs and behaviors. Plastic changes in the brain are generally beneficial; however, maladaptive cortical reorganization in the auditory cortex may lead to hearing disorders such as tinnitus and hyperacusis. Recent studies have attempted to noninvasively visualize pathological neural activity in the living human brain and to reverse maladaptive cortical reorganization by the suitable manipulation of auditory inputs in order to alleviate detrimental auditory symptoms. The effects of the manipulation of auditory inputs on the maladaptively reorganized brain are reviewed herein. The findings obtained indicate that rehabilitation therapy based on the manipulation of auditory inputs is an effective and safe approach for hearing disorders. The appropriate manipulation of sensory inputs, guided by the visualization of pathological brain activities using recent neuroimaging techniques, may contribute to the establishment of new clinical applications for affected individuals.

  17. A comparative study of simple auditory reaction time in blind (congenitally) and sighted subjects.

    Science.gov (United States)

    Gandhi, Pritesh Hariprasad; Gokhale, Pradnya A; Mehta, H B; Shah, C J

    2013-07-01

    Reaction time is the time interval between the application of a stimulus and the appearance of an appropriate voluntary response by a subject. It involves stimulus processing, decision making, and response programming. Reaction time studies have been popular due to their implications in sports physiology. Reaction time has been widely studied as its practical implications may be of great consequence; for example, a slower than normal reaction time while driving can have grave results. The aims were to study simple auditory reaction time in congenitally blind subjects and in age- and sex-matched sighted subjects, and to compare simple auditory reaction time between congenitally blind subjects and healthy control subjects. The study was carried out in two groups: the first comprised 50 congenitally blind subjects and the second 50 healthy controls. It was carried out on a Multiple Choice Reaction Time Apparatus, Inco Ambala Ltd. (accuracy ±0.001 s), in a sitting position at Government Medical College and Hospital, Bhavnagar, and at a Blind School, PNR campus, Bhavnagar, Gujarat, India. Simple auditory reaction time responses to four different types of sound (horn, bell, ring, and whistle) were recorded in both groups. According to our study, there is no significant difference in reaction time between congenitally blind and normal healthy persons. Blind individuals commonly utilize tactual and auditory cues for information and orientation, and their reliance on touch and audition, together with more practice in using these modalities to guide behavior, is often reflected in better performance of blind relative to sighted participants in tactile or auditory discrimination tasks, but there is no difference in reaction time between congenitally blind and sighted people.

  18. Effects of chronic exposure to electromagnetic waves on the auditory system.

    Science.gov (United States)

    Özgür, Abdulkadir; Tümkaya, Levent; Terzi, Suat; Kalkan, Yıldıray; Erdivanlı, Özlem Çelebi; Dursun, Engin

    2015-08-01

    The results support that chronic electromagnetic field exposure may cause damage by leading to neuronal degeneration of the auditory system. Numerous studies have investigated the risks of exposure to the electromagnetic fields that occur during the use of these devices, especially their effects on hearing. The aim of this study was to evaluate the effects of the electromagnetic waves emitted by mobile phones using electrophysiological and histological methods. Twelve adult Wistar albino rats were included in the study. The rats were divided into two groups of six rats. The study group was exposed to electromagnetic waves over a period of 30 days. The control group was not exposed to any electromagnetic fields. After the completion of the electromagnetic wave application, the auditory brainstem responses of both groups were recorded under anesthesia. The degeneration of the cochlear nuclei was graded by two different histologists, both of whom were blinded to group information. The histopathologic and immunohistochemical analyses showed signs of neuronal degeneration, such as increased vacuolization in the cochlear nucleus, pyknotic cell appearance, and edema, in the group exposed to the electromagnetic fields compared with the control group. The average latencies of the auditory brainstem response waves were similar in both groups (p > 0.05).

  19. Transcriptional maturation of the mouse auditory forebrain.

    Science.gov (United States)

    Hackett, Troy A; Guo, Yan; Clause, Amanda; Hackett, Nicholas J; Garbett, Krassimira; Zhang, Pan; Polley, Daniel B; Mirnics, Karoly

    2015-08-14

    The maturation of the brain involves the coordinated expression of thousands of genes, proteins and regulatory elements over time. In sensory pathways, gene expression profiles are modified by age and sensory experience in a manner that differs between brain regions and cell types. In the auditory system of altricial animals, neuronal activity increases markedly after the opening of the ear canals, initiating events that culminate in the maturation of auditory circuitry in the brain. This window provides a unique opportunity to study how gene expression patterns are modified by the onset of sensory experience through maturity. As a tool for capturing these features, next-generation sequencing of total RNA (RNAseq) has tremendous utility, because the entire transcriptome can be screened to index expression of any gene. To date, whole transcriptome profiles have not been generated for any central auditory structure in any species at any age. In the present study, RNAseq was used to profile two regions of the mouse auditory forebrain (A1, primary auditory cortex; MG, medial geniculate) at key stages of postnatal development (P7, P14, P21, adult) before and after the onset of hearing (~P12). Hierarchical clustering, differential expression, and functional geneset enrichment analyses (GSEA) were used to profile the expression patterns of all genes. Selected genesets related to neurotransmission, developmental plasticity, critical periods and brain structure were highlighted. An accessible repository of the entire dataset was also constructed that permits extraction and screening of all data from the global through single-gene levels. To our knowledge, this is the first whole transcriptome sequencing study of the forebrain of any mammalian sensory system. Although the data are most relevant for the auditory system, they are generally applicable to forebrain structures in the visual and somatosensory systems, as well. The main findings were: (1) Global gene expression
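
    As a toy illustration of the hierarchical clustering step mentioned above, the sketch below clusters synthetic gene-by-age expression profiles; the data, preprocessing, and cluster count are assumptions and do not reproduce the study's RNAseq analysis.

    # Toy hierarchical clustering of expression profiles (synthetic values).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(1)
    # rows = genes, columns = ages (P7, P14, P21, adult); values stand in for log counts
    expression = rng.normal(size=(200, 4))

    Z = linkage(expression, method="ward")             # agglomerative clustering of genes
    clusters = fcluster(Z, t=5, criterion="maxclust")  # cut the tree into 5 clusters (assumed)
    print("genes per cluster:", np.bincount(clusters)[1:])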

  20. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS)

    Directory of Open Access Journals (Sweden)

    Mohamad Issa

    2016-01-01

    Full Text Available Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  1. Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.

    Science.gov (United States)

    Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S

    2013-02-01

    Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Subsequently, we investigated this response in patients with mild cognitive impairment (MCI) and probable Alzheimer's disease (pAD). Using functional magnetic resonance imaging, 31 healthy participants were studied. The response in 18 MCI and 18 pAD patients was then determined, and compared to 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. The healthy subjects demonstrated that suppression of auditory cortical responses was related to greater success in encoding heard sentences; and that this was also associated with greater activity in the semantic system. In contrast, there was reduced auditory cortical suppression in patients with MCI, and absence of suppression in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing. Copyright © 2012 American Neurological Association.

  2. Influence of memory, attention, IQ and age on auditory temporal processing tests: preliminary study.

    Science.gov (United States)

    Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane

    2014-01-01

    To investigate the existence of correlations between the performance of children in auditory temporal tests (Frequency Pattern and Gaps in Noise--GIN) and IQ, attention, memory, and age measurements. Fifteen typically developing children between the ages of 7 and 12 years, all with normal hearing, participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern), as well as a memory test (Digit Span), attention tests (auditory and visual modality), and intelligence tests (RAVEN test of Progressive Matrices) were applied. A significant positive correlation, considered good, was found between the Frequency Pattern test and age (p<0.01, 75.6%). There were no significant correlations between the GIN test and the variables tested. Auditory temporal skills seem to be influenced by different factors: while performance in the temporal ordering skill seems to be influenced by maturational processes, performance in temporal resolution was not influenced by any of the aspects investigated.

  3. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    Science.gov (United States)

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (c) 2015 APA, all rights reserved.

  4. Stimulator with arbitrary waveform for auditory evoked potentials

    International Nuclear Information System (INIS)

    Martins, H R; Romao, M; Placido, D; Provenzano, F; Tierra-Criollo, C J

    2007-01-01

    Technological improvements benefit many areas of medicine. Audiometric exams involving auditory evoked potentials allow better diagnosis of auditory disorders. This paper proposes the development of a stimulator based on a digital signal processor. This stimulator is the first step of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). The stimulator can generate arbitrary waveforms such as sine waves, amplitude-modulated signals, pulses, bursts, and pips. The waveforms are generated through a graphical interface programmed in C++ in which the user can define the parameters of the waveform. Furthermore, the user can set exam parameters such as the number of stimuli, the time with stimulation (Time ON), and the time without stimulation (Time OFF). In future work, the remaining parts of the system will be implemented, including the acquisition of the electroencephalogram and the signal processing needed to estimate and analyze the evoked potential

  5. Stimulator with arbitrary waveform for auditory evoked potentials

    Energy Technology Data Exchange (ETDEWEB)

    Martins, H R; Romao, M; Placido, D; Provenzano, F; Tierra-Criollo, C J [Universidade Federal de Minas Gerais (UFMG), Departamento de Engenharia Eletrica (DEE), Nucleo de Estudos e Pesquisa em Engenharia Biomedica NEPEB, Av. Ant. Carlos, 6627, sala 2206, Pampulha, Belo Horizonte, MG, 31.270-901 (Brazil)

    2007-11-15

    Technological improvements benefit many areas of medicine. Audiometric exams involving auditory evoked potentials allow better diagnosis of auditory disorders. This paper proposes the development of a stimulator based on a digital signal processor. This stimulator is the first step of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). The stimulator can generate arbitrary waveforms such as sine waves, amplitude-modulated signals, pulses, bursts, and pips. The waveforms are generated through a graphical interface programmed in C++ in which the user can define the parameters of the waveform. Furthermore, the user can set exam parameters such as the number of stimuli, the time with stimulation (Time ON), and the time without stimulation (Time OFF). In future work, the remaining parts of the system will be implemented, including the acquisition of the electroencephalogram and the signal processing needed to estimate and analyze the evoked potential.
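
    To make the stimulus types and exam parameters listed above concrete, the sketch below generates a pure tone, an amplitude-modulated tone, and a gated tone pip, and repeats a stimulus with Time ON/Time OFF gaps. It is an illustrative Python rendering, not the DSP firmware described in the paper; the sampling rate, frequencies, and durations are assumed values.

    # Illustrative stimulus generation (assumed parameters, not the device firmware).
    import numpy as np

    FS = 48000  # Hz, assumed output sampling rate

    def pure_tone(freq, dur):
        t = np.arange(int(FS * dur)) / FS
        return np.sin(2 * np.pi * freq * t)

    def am_tone(carrier, mod_rate, depth, dur):
        t = np.arange(int(FS * dur)) / FS
        return (1 + depth * np.sin(2 * np.pi * mod_rate * t)) * np.sin(2 * np.pi * carrier * t)

    def tone_pip(freq, dur, ramp):
        """Tone gated with linear onset/offset ramps (a simple 'pip')."""
        pip = pure_tone(freq, dur)
        n_ramp = int(FS * ramp)
        window = np.ones(pip.size)
        window[:n_ramp] = np.linspace(0, 1, n_ramp)
        window[-n_ramp:] = np.linspace(1, 0, n_ramp)
        return pip * window

    def stimulus_train(stim, n_stimuli, time_on, time_off):
        """Repeat a stimulus with silent gaps (Time ON / Time OFF parameters)."""
        silence = np.zeros(int(FS * time_off))
        cycle = np.concatenate([stim[: int(FS * time_on)], silence])
        return np.tile(cycle, n_stimuli)

    am = am_tone(1000, 40, 0.8, 1.0)
    train = stimulus_train(tone_pip(1000, 0.05, 0.005), n_stimuli=10, time_on=0.05, time_off=0.1)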

  6. Mobile phones: influence on auditory and vestibular systems.

    Science.gov (United States)

    Balbani, Aracy Pereira Silveira; Montovani, Jair Cortez

    2008-01-01

    Telecommunications systems emit radiofrequency energy, an invisible electromagnetic radiation. Mobile phones operate with microwaves (450-900 MHz in the analog service and 1.8-2.2 GHz in the digital service) very close to the user's ear. The skin, inner ear, cochlear nerve, and the temporal lobe surface absorb the radiofrequency energy. The aim was to review the literature on the influence of cellular phones on hearing and balance; the study design was a systematic review. We reviewed papers on the influence of mobile phones on the auditory and vestibular systems from the Lilacs and Medline databases, published from 2000 to 2005, as well as materials available on the Internet. Studies concerning mobile phone radiation and the risk of developing an acoustic neuroma have controversial results. Some authors did not see evidence of a higher risk of tumor development in mobile phone users, while others report that usage of analog cellular phones for ten or more years increases the risk of developing the tumor. Acute exposure to mobile phone microwaves does not influence the function of the cochlear outer hair cells in vivo and in vitro, the electrical properties of the cochlear nerve, or the physiology of the vestibular system in humans. Analog hearing aids are more susceptible to the electromagnetic interference caused by digital mobile phones. In conclusion, there is no evidence of cochleo-vestibular lesions caused by cellular phones.

  7. Evolutionary conservation and neuronal mechanisms of auditory perceptual restoration.

    Science.gov (United States)

    Petkov, Christopher I; Sutter, Mitchell L

    2011-01-01

    Auditory perceptual 'restoration' occurs when the auditory system restores an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago using it to model a noisy environmental scene with competing sounds. It has become clear that not only humans experience auditory restoration: restoration has been broadly conserved in many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The aggregate of data resulting from multiple approaches across species has begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, supportive of multiple mechanisms working within a species. Yet a general principle has emerged that responses correlated with restoration mimic the response that would have been given to the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of 'auditory scene analysis' to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings. © 2010 Elsevier B.V. All rights reserved.

  8. Neural responses to silent lipreading in normal hearing male and female subjects

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Albers, Frans; van Dijk, Pim; Wit, Hero; Willemsen, Antoon

    In the past, researchers investigated silent lipreading in normal hearing subjects with functional neuroimaging tools and showed how the brain processes visual stimuli that are normally accompanied by an auditory counterpart. Previously, we showed activation differences between males and females in

  9. The cochlear nerve canal and internal auditory canal in children with normal cochlea but cochlear nerve deficiency

    International Nuclear Information System (INIS)

    Yan, Fei; Li, Jianhong; Xian, Junfang; Wang, Zhenchang; Mo, Lingyan

    2013-01-01

    Background: There is an increasing frequency of requests for cochlear implantation (CI) in deaf children, and more detailed image information is necessary for selecting appropriate candidates. Cochlear nerve deficiency (CND) is a contraindication to CI. Magnetic resonance imaging (MRI) has been used to evaluate the integrity of the cochlear nerve. Abnormalities of the cochlear nerve canal (CNC) and internal auditory canal (IAC) have been reported to be associated with CND. Purpose: To correlate CNC manifestation, size, and IAC diameter on high-resolution CT (HRCT) with CND diagnosed by MRI in children. Material and Methods: HRCT images from 35 sensorineurally deaf children who had normal cochleae but bilateral or unilateral CND diagnosed by MRI were studied retrospectively. The CNC and IAC manifestation and size were assessed and correlated with CND. Results: CND was diagnosed by MRI in 54/70 ears (77.1%). Thirty-two ears had an absent cochlear nerve (59.3%), while 22 ears had a small cochlear nerve (40.7%). The CNC diameter was 2.0 mm in 11 ears (20.4%). The IAC diameter was 3.0 mm in 29 ears (53.7%). Conclusion: A hypoplastic CNC might be more indicative of CND than a narrow IAC.

  10. Brainstem auditory evoked potentials with the use of acoustic clicks and complex verbal sounds in young adults with learning disabilities.

    Science.gov (United States)

    Kouni, Sophia N; Giannopoulos, Sotirios; Ziavra, Nausika; Koutsojannis, Constantinos

    2013-01-01

    Acoustic signals are transmitted through the external and middle ear mechanically to the cochlea, where they are transduced into electrical impulses for further transmission via the auditory nerve. The auditory nerve encodes the acoustic sounds that are conveyed to the auditory brainstem. Multiple brainstem nuclei, the cochlea, the midbrain, the thalamus, and the cortex constitute the central auditory system. In clinical practice, auditory brainstem responses (ABRs) to simple stimuli such as clicks or tones are widely used. Recently, complex stimuli or complex auditory brain responses (cABRs), such as monosyllabic speech stimuli and music, are being used as a tool to study the brainstem processing of speech sounds. We have used the classic 'click' as well as, for the first time, the artificial successive complex stimulus 'ba', which constitutes the Greek word 'baba' corresponding to the English 'daddy'. Twenty young adults institutionally diagnosed as dyslexic (10 subjects) or mildly dyslexic (10 subjects) comprised the diseased group. Twenty sex-, age-, education-, hearing sensitivity-, and IQ-matched normal subjects comprised the control group. Measurements included the absolute latencies of waves I through V and the interpeak latencies elicited by the classical acoustic click, as well as the negative peak latencies of the A and C waves and the interpeak latencies of A-C elicited by the verbal stimulus 'baba' created on a digital speech synthesizer. The absolute peak latencies of waves I, III, and V in response to monaural rarefaction clicks, as well as the interpeak latencies I-III, III-V, and I-V in the dyslexic subjects, although increased in comparison with normal subjects, did not reach the level of a significant difference. In contrast, the negative peak latency of wave C and the interpeak latencies of A-C elicited by verbal stimuli were found to be increased in the dyslexic group in comparison with the control group (p=0.0004 and p=0.045, respectively). In the subgroup consisting of 10 patients suffering from

  11. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  12. Trait aspects of auditory mismatch negativity predict response to auditory training in individuals with early illness schizophrenia.

    Science.gov (United States)

    Biagianti, Bruno; Roach, Brian J; Fisher, Melissa; Loewy, Rachel; Ford, Judith M; Vinogradov, Sophia; Mathalon, Daniel H

    2017-01-01

    Individuals with schizophrenia have heterogeneous impairments of the auditory processing system that likely mediate differences in the cognitive gains induced by auditory training (AT). Mismatch negativity (MMN) is an event-related potential component reflecting auditory echoic memory, and its amplitude reduction in schizophrenia has been linked to cognitive deficits. Therefore, MMN may predict response to AT and identify individuals with schizophrenia who have the most to gain from AT. Furthermore, to the extent that AT strengthens auditory deviance processing, MMN may also serve as a readout of the underlying changes in the auditory system induced by AT. Fifty-six individuals early in the course of a schizophrenia-spectrum illness (ESZ) were randomly assigned to 40 h of AT or Computer Games (CG). Cognitive assessments and EEG recordings during a multi-deviant MMN paradigm were obtained before and after AT and CG. Changes in these measures were compared between the treatment groups. Baseline and trait-like MMN data were evaluated as predictors of treatment response. MMN data collected with the same paradigm from a sample of Healthy Controls (HC; n = 105) were compared to baseline MMN data from the ESZ group. Compared to HC, ESZ individuals showed significant MMN reductions at baseline (p = .003). Reduced Double-Deviant MMN was associated with greater general cognitive impairment in ESZ individuals (p = .020). Neither ESZ intervention group showed significant change in MMN. We found high correlations in all MMN deviant types (rs = .59-.68, all ps < .001) between baseline and post-intervention amplitudes irrespective of treatment group, suggesting trait-like stability of the MMN signal. Greater deficits in trait-like Double-Deviant MMN predicted greater cognitive improvements in the AT group (p = .02), but not in the CG group. In this sample of ESZ individuals, AT had no effect on auditory deviance processing as assessed by MMN. In ESZ individuals, baseline MMN
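
    For readers unfamiliar with how an MMN amplitude such as those reported above is typically derived, the sketch below subtracts the average standard ERP from the average deviant ERP and picks the most negative deflection in a post-stimulus window. The epoch counts, sampling rate, and analysis window are assumptions, and the synthetic data stand in for real recordings.

    # Simplified MMN derivation: deviant-minus-standard difference wave (synthetic data).
    import numpy as np

    fs = 500                                   # Hz, assumed EEG sampling rate
    n_samples = int(0.5 * fs)                  # epochs spanning 0-500 ms
    rng = np.random.default_rng(2)

    standard_epochs = rng.normal(0, 1, (200, n_samples))
    deviant_epochs = rng.normal(0, 1, (40, n_samples))

    difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

    window = slice(int(0.1 * fs), int(0.25 * fs))        # assumed 100-250 ms MMN window
    mmn_amplitude = difference_wave[window].min()        # MMN is a negative deflection
    mmn_latency_ms = (window.start + np.argmin(difference_wave[window])) / fs * 1000
    print(f"MMN amplitude {mmn_amplitude:.2f} (a.u.) at {mmn_latency_ms:.0f} ms")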

  13. The Effect of Working Memory Training on Auditory Stream Segregation in Auditory Processing Disorders Children

    OpenAIRE

    Abdollah Moossavi; Saeideh Mehrkian; Yones Lotfi; Soghrat Faghih zadeh; Hamed Adjedi

    2015-01-01

    Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in children with auditory processing disorders. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder, participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...

  14. Inhibition of histone deacetylase 3 via RGFP966 facilitates cortical plasticity underlying unusually accurate auditory associative cue memory for excitatory and inhibitory cue-reward associations.

    Science.gov (United States)

    Shang, Andrea; Bylipudi, Sooraz; Bieszczad, Kasia M

    2018-05-31

    Epigenetic mechanisms are key for regulating long-term memory (LTM) and are known to exert control on memory formation in multiple systems of the adult brain, including the sensory cortex. One epigenetic mechanism is chromatin modification by histone acetylation. Blocking the action of histone deacetylases (HDACs), which normally negatively regulate LTM by repressing transcription, has been shown to enable memory formation. Indeed, HDAC inhibition appears to facilitate memory by altering the dynamics of gene expression events important for memory consolidation. However, less understood are the ways in which molecular-level consolidation processes alter subsequent memory to enhance storage or facilitate retrieval. Here we used a sensory perspective to investigate whether the characteristics of memory formed with HDAC inhibitors are different from naturally formed memory. One possibility is that HDAC inhibition enables memory to form with greater sensory detail than normal. Because the auditory system undergoes learning-induced remodeling that provides substrates for sound-specific LTM, we aimed to identify behavioral effects of HDAC inhibition on memory for specific sound features using a standard model of auditory associative cue-reward learning, memory, and cortical plasticity. We found that three systemic post-training treatments of an HDAC3 inhibitor (RGFP966, Abcam Inc.) in rats in the early phase of training facilitated auditory discriminative learning, changed auditory cortical tuning, and increased the specificity for acoustic frequency formed in memory of both excitatory (S+) and inhibitory (S-) associations for at least 2 weeks. The findings support that epigenetic mechanisms act on neural and behavioral sensory acuity to increase the precision of associative cue memory, which can be revealed by studying the sensory characteristics of long-term associative memory formation with HDAC inhibitors. Published by Elsevier B.V.

  15. Ventilatory response to induced auditory arousals during NREM sleep.

    Science.gov (United States)

    Badr, M S; Morgan, B J; Finn, L; Toiber, F S; Crabtree, D C; Puleo, D S; Skatrud, J B

    1997-09-01

    Sleep state instability is a potential mechanism of central apnea/hypopnea during non-rapid eye movement (NREM) sleep. To investigate this postulate, we induced brief arousals by delivering transient (0.5 second) auditory stimuli during stable NREM sleep in eight normal subjects. Arousal was determined according to American Sleep Disorders Association (ASDA) criteria. A total of 96 trials were conducted; 59 resulted in cortical arousal and 37 did not result in arousal. In trials associated with arousal, minute ventilation (VE) increased from 5.1 +/- 1.24 l/minute to 7.5 +/- 2.24 l/minute on the first posttone breath (p = 0.001). However, no subsequent hypopnea or apnea occurred as VE decreased gradually to 4.8 +/- 1.5 l/minute (p > 0.05) on the fifth posttone breath. Trials without arousal did not result in hyperpnea on the first breath or in subsequent hypopnea. We conclude that 1) auditory stimulation resulted in transient hyperpnea only if associated with cortical arousal; 2) hypopnea or apnea did not occur following arousal-induced hyperpnea in normal subjects; 3) interaction with fluctuating chemical stimuli or upper airway resistance may be required for arousals to cause sleep-disordered breathing.

  16. The effect of noise exposure during the developmental period on the function of the auditory system

    Czech Academy of Sciences Publication Activity Database

    Bureš, Zbyněk; Popelář, Jiří; Syka, Josef

    2017-01-01

    Roč. 352, sep (2017), s. 1-11 ISSN 0378-5955 R&D Projects: GA ČR(CZ) GAP303/12/1347 Institutional support: RVO:68378041 Keywords : auditory system * development * noise exposure Subject RIV: FH - Neurology OBOR OECD: Other medical science Impact factor: 2.906, year: 2016

  17. Three-dimensional Acoustic Localisation via Directed Movements of a Two-dimensional Model of the Lizard Peripheral Auditory System

    DEFF Research Database (Denmark)

    Shaikh, Danish; Kjær Schmidt, Michael

    2017-01-01

    of the acoustic target with respect to one plane of rotation. A multi-layer perceptron neural network is trained via supervised learning to translate the combination of the two measurements into an estimate of the relative location of the acoustic target in terms of its azimuth and elevation. The acoustic...... localisation performance of the system is evaluated in simulation for noiseless as well as noisy sinusoidal auditory signals with a 20 dB signal-to-noise ratio for four different sound frequencies of 1450 Hz, 1650 Hz, 1850 Hz and 2050 Hz that span the response frequency range of the peripheral auditory model...
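
    The supervised mapping described above, from two direction-dependent measurements to an azimuth and elevation estimate, can be sketched with an off-the-shelf multi-layer perceptron regressor. The example below uses synthetic features and toy targets; it does not reproduce the lizard peripheral auditory model or the authors' training data.

    # Toy multi-layer perceptron regression from two cues to (azimuth, elevation).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(2000, 2))        # two per-plane direction cues (synthetic)
    y = np.column_stack([                         # toy azimuth / elevation targets
        90 * X[:, 0] + 5 * rng.normal(size=2000),
        45 * X[:, 1] + 5 * rng.normal(size=2000),
    ])

    mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    mlp.fit(X, y)
    print("example prediction (azimuth, elevation):", mlp.predict(X[:1])[0])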

  18. Auditory interfaces in automated driving: an international survey

    Directory of Open Access Journals (Sweden)

    Pavlo Bazilinskyy

    2015-08-01

    Full Text Available This study investigated people's opinions on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The respondents stated their attitudes towards two existing auditory driver assistance systems, a parking assistant (PA) and a forward collision warning system (FCWS), as well as towards a futuristic augmented sound system (FS) proposed for fully automated driving. The respondents were positive towards the PA and FCWS, and rated the willingness to have automated versions of these systems as 3.87 and 3.77, respectively (on a scale from 1 = disagree strongly to 5 = agree strongly). The respondents tolerated the FS (the mean willingness to use it was 3.00 on the same scale). The results showed that among the available response options, the female voice was the most preferred feedback type for takeover requests in highly automated driving, regardless of whether the respondents' country was English speaking or not. The present results could be useful for designers of automated vehicles and other stakeholders.

  19. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  20. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

    Full Text Available Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady-state distractor repetitions compared to auditory sequences with expected changing-state endings, even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: disruption decreased markedly over the course of the experiments, indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

  1. Feature conjunctions and auditory sensory memory.

    Science.gov (United States)

    Sussman, E; Gomes, H; Nousak, J M; Ritter, W; Vaughan, H G

    1998-05-18

    This study sought to obtain additional evidence that transient auditory memory stores information about conjunctions of features on an automatic basis. The mismatch negativity of event-related potentials was employed because its operations are based on information that is stored in transient auditory memory. The mismatch negativity was found to be elicited by a tone that differed from standard tones in a combination of its perceived location and frequency. The result lends further support to the hypothesis that the system upon which the mismatch negativity relies processes stimuli in an holistic manner. Copyright 1998 Elsevier Science B.V.

  2. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi Tateno

    2013-11-01

    Full Text Available To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.
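
    The sensor-to-nerve chain described above can be caricatured with a textbook-style inner-hair-cell stage (half-wave rectification followed by low-pass filtering) and a probabilistic spike generator. The sketch below is a minimal illustration under those assumptions; the cut-off frequency, maximum firing rate, and input signal are not taken from the paper.

    # Minimal inner-hair-cell / auditory-nerve caricature (assumed parameters).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 20000
    t = np.arange(0, 0.1, 1 / fs)
    band_output = np.sin(2 * np.pi * 2000 * t)        # stands in for one sensor channel

    rectified = np.maximum(band_output, 0.0)          # half-wave rectification
    sos = butter(2, 1000, btype="lowpass", fs=fs, output="sos")
    ihc_potential = sosfiltfilt(sos, rectified)       # crude IHC membrane response

    max_rate = 300.0                                  # spikes/s, assumed saturation rate
    rate = np.clip(max_rate * ihc_potential / (ihc_potential.max() + 1e-12), 0, None)
    rng = np.random.default_rng(4)
    spikes = rng.random(rate.size) < rate / fs        # Bernoulli approximation of Poisson firing
    print("spike count in 100 ms:", int(spikes.sum()))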

  3. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed the effects of aging on auditory-verbal memory performance.

  5. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    Science.gov (United States)

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2, and P300 waves and a psychoacoustic test of central auditory function, the frequency pattern test (FPT). Next, the children took part in regular auditory training and attended speech therapy. Speech assessment followed the treatment and therapy; psychoacoustic tests were performed again and P300 cortical potentials were recorded. After that, statistical analyses were performed. The analyses revealed that the application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  6. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  7. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  8. Auditory evoked potentials: predicting speech therapy outcomes in children with phonological disorders

    Directory of Open Access Journals (Sweden)

    Renata Aparecida Leite

    2014-03-01

    Full Text Available OBJECTIVES: This study investigated whether neurophysiologic responses (auditory evoked potentials) differ between typically developing children and children with phonological disorders and whether these responses are modified in children with phonological disorders after speech therapy. METHODS: The participants included 24 typically developing children (Control Group, mean age: eight years and ten months) and 23 children clinically diagnosed with phonological disorders (Study Group, mean age: eight years and eleven months). Additionally, 12 study group children were enrolled in speech therapy (Study Group 1), and 11 were not enrolled in speech therapy (Study Group 2). The subjects were submitted to the following procedures: conventional audiological, auditory brainstem response, auditory middle-latency response, and P300 assessments. All participants presented with normal hearing thresholds. The study group 1 subjects were reassessed after 12 speech therapy sessions, and the study group 2 subjects were reassessed 3 months after the initial assessment. Electrophysiological results were compared between the groups. RESULTS: Latency differences were observed between the groups (the control and study groups) regarding the auditory brainstem response and the P300 tests. Additionally, the P300 responses improved in the study group 1 children after speech therapy. CONCLUSION: The findings suggest that children with phonological disorders have impaired auditory brainstem and cortical region pathways that may benefit from speech therapy.

  9. Speech processing: from peripheral to hemispheric asymmetry of the auditory system.

    Science.gov (United States)

    Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier

    2012-01-01

    Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to propose to nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then exposes behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows--a fast one on the left and a slower one on the right--modeled through the asymmetric sampling in time theory or a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging that mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  10. Loud Music Exposure and Cochlear Synaptopathy in Young Adults: Isolated Auditory Brainstem Response Effects but No Perceptual Consequences.

    Science.gov (United States)

    Grose, John H; Buss, Emily; Hall, Joseph W

    2017-01-01

    The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did ( n = 31) or did not ( n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.

  11. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  12. Effects of a Combined 3-D Auditory/visual Cueing System on Visual Target Detection Using a Helmet-Mounted Display

    National Research Council Canada - National Science Library

    Pinedo, Carlos; Young, Laurence; Esken, Robert

    2005-01-01

    ..., and the development and evaluation of the NDFR symbology for on/off-boresight viewing. The localized auditory research includes looking at the benefits of augmenting the Terrain Collision Avoidance System (TCAS...

  13. Acceptance of background noise, working memory capacity, and auditory evoked potentials in subjects with normal hearing.

    Science.gov (United States)

    Brännström, K Jonas; Zunic, Edita; Borovac, Aida; Ibertsson, Tina

    2012-01-01

    The acceptable noise level (ANL) test is a method for quantifying the amount of background noise that subjects accept when listening to speech. Large variations in ANL have been seen between normal-hearing subjects and between studies of normal-hearing subjects, but few explanatory variables have been identified. To explore a possible relationship between a Swedish version of the ANL test, working memory capacity (WMC), and auditory evoked potentials (AEPs). ANL, WMC, and AEP were tested in a counterbalanced order across subjects. Twenty-one normal-hearing subjects participated in the study (14 females and 7 males; aged 20-39 yr with an average of 25.7 yr). Reported data consists of age, pure-tone average (PTA), most comfortable level (MCL), background noise level (BNL), ANL (i.e., MCL - BNL), AEP latencies, AEP amplitudes, and WMC. Spearman's rank correlation coefficient was calculated between the collected variables to investigate associations. A principal component analysis (PCA) with Varimax rotation was conducted on the collected variables to explore underlying factors and estimate interactions between the tested variables. Subjects were also pooled into two groups depending on their results on the WMC test, one group with a score lower than the average and one with a score higher than the average. Comparisons between these two groups were made using the Mann-Whitney U-test with Bonferroni correction for multiple comparisons. A negative association was found between ANL and WMC but not between AEP and ANL or WMC. Furthermore, ANL is derived from MCL and BNL, and a significant positive association was found between BNL and WMC. However, no significant associations were seen between AEP latencies and amplitudes and the demographic variables, MCL, and BNL. The PCA identified two underlying factors: One that contained MCL, BNL, ANL, and WMC and another that contained latency for wave Na and amplitudes for waves V and Na-Pa. Using the variables in the first factor
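    The ANL metric and the association reported above reduce to a simple computation. The sketch below, in Python, shows the derivation ANL = MCL - BNL and a Spearman rank correlation against a working-memory score; all variable names and values are hypothetical illustrations, not data from the study.

```python
# Minimal sketch: derive ANL from MCL and BNL and correlate it with WMC scores.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-subject levels in dB: most comfortable level (MCL) and
# background noise level (BNL); WMC is a hypothetical reading-span-style score.
mcl = np.array([52.0, 48.0, 55.0, 60.0, 47.0])
bnl = np.array([45.0, 44.0, 43.0, 50.0, 42.0])
wmc = np.array([38, 41, 30, 27, 44])

anl = mcl - bnl                      # ANL = MCL - BNL, as defined in the study
rho, p = spearmanr(anl, wmc)         # rank correlation, matching the paper's analysis
print(f"ANL (dB): {anl}")
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```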

  14. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  15. Age at implantation and auditory memory in cochlear implanted children.

    Science.gov (United States)

    Mikic, B; Miric, D; Nikolic-Mikic, M; Ostojic, S; Asanovic, M

    2014-05-01

    Early cochlear implantation, before the age of 3 years, provides the best outcomes for listening, speech, cognition, and memory due to maximal central nervous system plasticity. Intensive postoperative training improves not only auditory performance and language but auditory memory as well. The aim of this study was to determine whether the age at implantation affects auditory memory function in cochlear implanted children. A total of 50 cochlear implanted children aged 4 to 8 years were enrolled in this study: early implanted (1-3 y), n = 27, and late implanted (4-6 y), n = 23. Two types of memory tests were used: the Immediate Verbal Memory Test and the Forward and Backward Digit Span Test. Early implanted children performed better on both the verbal and numeric tasks of auditory memory. The difference was statistically significant, especially on the complex tasks. Early cochlear implantation, before the age of 3 years, significantly improves auditory memory and contributes to better cognitive and educational outcomes.

  16. The role of auditory feedback in music-supported stroke rehabilitation: A single-blinded randomised controlled intervention.

    Science.gov (United States)

    van Vugt, F T; Kafczyk, T; Kuhn, W; Rollnik, J D; Tillmann, B; Altenmüller, E

    2016-01-01

    Learning to play musical instruments such as the piano was previously shown to benefit post-stroke motor rehabilitation. Previous work hypothesised that the mechanism of this rehabilitation is that patients use auditory feedback to correct their movements and therefore show motor learning. We tested this hypothesis by manipulating the auditory feedback timing in a way that should disrupt such error-based learning. We contrasted a patient group undergoing music-supported therapy on a piano that emits sounds immediately (as in previous studies) with a group whose sounds were presented after a jittered delay. The delay was not noticeable to patients. Thirty-four patients in early stroke rehabilitation with moderate motor impairment and no previous musical background learned to play the piano using simple finger exercises and familiar children's songs. Rehabilitation outcome was not impaired in the jitter group relative to the normal group. Conversely, some clinical tests suggest that the jitter group outperformed the normal group. Auditory feedback-based motor learning is therefore not the beneficial mechanism of music-supported therapy. Immediate auditory feedback therapy may be suboptimal. A jittered delay may increase the efficacy of the proposed therapy and allow patients to fully benefit from the motivational factors of music training. Our study shows a novel way to test hypotheses concerning music training in a single-blinded way, which is an important improvement over existing unblinded tests of music interventions.

  17. Diffusion tensor imaging and MR morphometry of the central auditory pathway and auditory cortex in aging.

    Science.gov (United States)

    Profant, O; Škoch, A; Balogová, Z; Tintěra, J; Hlinka, J; Syka, J

    2014-02-28

    Age-related hearing loss (presbycusis) is caused mainly by hypofunction of the inner ear, but recent findings point also toward a central component of presbycusis. We used MR morphometry and diffusion tensor imaging (DTI) with a 3T MR system with the aim of studying the state of the central auditory system in a group of elderly subjects (>65 years) with mild presbycusis, in a group of elderly subjects with expressed presbycusis, and in young controls. Cortical reconstruction, volumetric segmentation and auditory pathway tractography were performed. Three parameters were evaluated by morphometry: the volume of the gray matter, the surface area of the gyrus and the thickness of the cortex. In all experimental groups the surface area and gray matter volume were larger on the left side in Heschl's gyrus and the planum temporale and slightly larger in the gyrus frontalis superior, whereas they were larger on the right side in the primary visual cortex. Almost all of the measured parameters were significantly smaller in the elderly subjects in Heschl's gyrus, the planum temporale and the gyrus frontalis superior. Aging did not change the side asymmetry (laterality) of the gyri. In the central part of the auditory pathway above the inferior colliculus, a trend toward an effect of aging was present in the axial vector of diffusion (L1) of the DTI, with increased values observed in elderly subjects. A trend toward a decrease of L1 on the left side, which was more pronounced in the elderly groups, was observed. The effect of hearing loss was present in subjects with expressed presbycusis as a trend toward an increase of the radial vectors (L2L3) in the white matter under Heschl's gyrus. These results suggest that in addition to peripheral changes, changes in the central part of the auditory system are also present in elderly subjects; however, the extent of hearing loss does not play a significant role in the central changes. Copyright © 2013 IBRO. Published by Elsevier Ltd

  18. Investigation of Auditory Brain Stem Responses (ABRs In Children with Down Syndrome

    Directory of Open Access Journals (Sweden)

    Mohsen Monadi

    2013-04-01

    Full Text Available Objective: The aim of this study was to compare ABRs in normal children and children with Down syndrome. Materials & Methods: This study was performed between 1388 and 1391 at the Akhavan rehabilitation center of the University of Social Welfare and Rehabilitation Sciences, Tehran, and Babol Amir Kola hospital. Forty-five 3-6-year-old boys with Down's syndrome and forty-five normal children were selected from the available population. After case history, otoscopy and basic hearing tests, the ABR test was performed. In the ABR, absolute latencies, interpeak latencies and the V/I amplitude ratio were analyzed. Data were analyzed with a parametric independent t-test. Results: Latencies and interpeak latencies of I-III and I-V (P-value<0.001), III-V (P-value=0.01), and the V/I amplitude ratio (P-value<0.001) were shorter than normal. Children with Down syndrome had significantly higher thresholds than normal children (P-value<0.001). Conclusion: Peripheral auditory system development is delayed and brainstem function in children with Down's syndrome is abnormal. Early diagnosis of hearing impairment and intervention in these children is very important because it affects communication skills.

  19. The impact of auditory feedback on neuronavigation

    NARCIS (Netherlands)

    Willems, PWA; Noordmans, HJ; van Overbeeke, JJ; Viergever, MA; Tulleken, CAF; van der Sprenkel, JWB

    Object. We aimed to develop an auditory feedback system to be used in addition to regular neuronavigation, in an attempt to improve the usefulness of the information offered by neuronavigation systems. Instrumentation. Using a serial connection, instrument co-ordinates determined by a commercially

  20. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    Science.gov (United States)

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  1. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    Science.gov (United States)

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
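    The redundancy-gain analysis mentioned above is commonly evaluated against Miller's race-model inequality, which bounds the redundant-target reaction-time (RT) distribution by the sum of the unimodal distributions. The Python sketch below illustrates that test on synthetic RTs; the RT parameters are assumptions for illustration only, not values from the study.

```python
# Minimal sketch: test Miller's race-model inequality on reaction times (ms).
import numpy as np

def ecdf(rt, t):
    """Empirical cumulative distribution of RTs evaluated at times t."""
    rt = np.sort(np.asarray(rt))
    return np.searchsorted(rt, t, side="right") / rt.size

rt_a  = np.random.default_rng(0).normal(420, 60, 200)   # auditory-only trials
rt_v  = np.random.default_rng(1).normal(450, 60, 200)   # visual-only trials
rt_av = np.random.default_rng(2).normal(370, 55, 200)   # redundant audiovisual trials

t = np.linspace(200, 600, 81)
violation = ecdf(rt_av, t) - np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
# Positive values mean the redundant-target CDF exceeds the race-model bound,
# i.e., the redundancy gain cannot be explained by statistical facilitation alone.
print("max violation:", violation.max())
```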

  2. Impact of an auditory stimulus on baseline cortisol concentrations in clinically normal dogs.

    Science.gov (United States)

    Gin, T E; Puchot, M L; Cook, A K

    2018-03-19

    Baseline cortisol concentrations are routinely used to screen dogs for hypoadrenocorticism (HOC); this diagnosis must then be confirmed with an ACTH stimulation test. A baseline cortisol concentration less than 55 nmol/L (2 μg/dL) is highly sensitive for HOC but lacks specificity, with a false positive rate >20%. Many dogs with nonadrenal disease are therefore subjected to unnecessary additional testing. It was hypothesized that exposure to an unpleasant auditory stimulus before sample collection would improve the specificity of baseline cortisol measurements in dogs with nonadrenal disease by triggering cortisol production. Twenty-eight healthy client-owned dogs were included in the study, with a median age of 4 yr (range 2-9 yr) and a median weight of 20 kg (range 10-27 kg). Dogs were ineligible for inclusion if they had received short- or long-acting glucocorticoids within the previous 30 and 90 d, respectively. Dogs were randomly assigned to group 1 (control; no noise; n = 7), group 2 (brief noise: n = 10), or group 3 (long noise: n = 11). Each dog and owner were directed to a secluded area for approximately 15 min. Group 1 sat in relative quiet, exposed only to the background sounds of a veterinary hospital. Group 2 was exposed to the sound of a wet-dry vacuum in an adjacent hallway during the first 3 min of this period. Group 3 was exposed to random bursts of wet-dry vacuum noise during this period. At the end of the test interval, each dog was escorted to an adjacent examination room for blood collection. Samples were processed within 15 min; serum was frozen at -80°C before measurement of cortisol concentrations. Median serum cortisol concentrations and the proportion of dogs with results below the screening cutoff were similar across groups; the hypothesis that an unpleasant auditory stimulus would trigger cortisol production in dogs with apparently normal adrenal function was therefore rejected. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Evaluation of peripheral auditory pathways and brainstem in obstructive sleep apnea

    Directory of Open Access Journals (Sweden)

    Erika Matsumura

    Full Text Available Abstract Introduction Obstructive sleep apnea causes changes in normal sleep architecture, fragmenting it chronically with intermittent hypoxia, leading to serious health consequences in the long term. It is believed that the occurrence of respiratory events during sleep, such as apnea and hypopnea, can impair the transmission of nerve impulses along the auditory pathway that are highly dependent on the supply of oxygen. However, this association is not well established in the literature. Objective To compare the evaluation of the peripheral auditory pathway and brainstem among individuals with and without obstructive sleep apnea. Methods The sample consisted of 38 adult males, mean age of 35.8 (±7.2), divided into four groups matched for age and Body Mass Index. The groups were classified based on polysomnography as: control (n = 10), mild obstructive sleep apnea (n = 11), moderate obstructive sleep apnea (n = 8), and severe obstructive sleep apnea (n = 9). All study subjects denied a history of risk for hearing loss and underwent audiometry, tympanometry, acoustic reflex testing, and Brainstem Auditory Evoked Response. Statistical analyses were performed using three-factor ANOVA, two-factor ANOVA, the chi-square test, and Fisher's exact test. The significance level for all tests was 5%. Results There was no difference between the groups for hearing thresholds, tympanometry, and the evaluated Brainstem Auditory Evoked Response parameters. An association was observed between the presence of obstructive sleep apnea and changes in the absolute latency of wave V (p = 0.03). There was an association between moderate obstructive sleep apnea and a change in the latency of wave V (p = 0.01). Conclusion The presence of obstructive sleep apnea is associated with changes in nerve conduction of acoustic stimuli in the auditory pathway in the brainstem. The increase in obstructive sleep apnea severity does not promote worsening of responses assessed by audiometry, tympanometry and Brainstem

  4. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-07-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.
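    As a rough illustration of relating suprathreshold spectral measures to speech scores, the Python sketch below fits a plain ordinary-least-squares model (not the stepwise procedure or commonality analysis used in the paper) to entirely synthetic data; all predictor names and values are assumptions.

```python
# Minimal sketch: regress speech scores on spectral-processing measures (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 25
smd_2cpo = rng.normal(10, 3, n)      # spectral modulation detection at 2.0 cpo (dB)
srd = rng.normal(2.5, 0.8, n)        # spectral ripple discrimination threshold
erb = rng.normal(1.5, 0.4, n)        # auditory filter bandwidth (normalized ERB)
speech = 0.6 * smd_2cpo - 4.0 * erb + rng.normal(0, 2, n)  # simulated speech score

X = sm.add_constant(np.column_stack([smd_2cpo, srd, erb]))
fit = sm.OLS(speech, X).fit()
print(fit.summary())                 # inspect which predictors carry the variance
```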

  5. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-01-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047

  6. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active "working memory" bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  7. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  8. Episodic epileptic verbal auditory agnosia in Landau Kleffner syndrome treated with combination diazepam and corticosteroids.

    Science.gov (United States)

    Devinsky, Orrin; Goldberg, Rina; Miles, Daniel; Bojko, Aviva; Riviello, James

    2014-10-01

    We report 2 pediatric patients who presented initially with seizures followed by subacute language regression characterized by a verbal auditory agnosia. These previously normal children had no evidence of expressive aphasia during their symptomatic periods. Further, in both cases, auditory agnosia was associated with sleep-activated electroencephalographic (EEG) epileptiform activity, consistent with Landau-Kleffner syndrome. However, both cases are unique since the episodic auditory agnosia and sleep-activated EEG epileptiform activity rapidly responded to combination therapy with pulse benzodiazepine and corticosteroids. Further, in each case, recurrences were characterized by similar symptoms, EEG findings, and beneficial responses to the pulse benzodiazepine and corticosteroid therapy. These observations suggest that pulse combination high-dose corticosteroid and benzodiazepine therapy may be especially effective in Landau-Kleffner syndrome. © The Author(s) 2014.

  9. Amelioration of Auditory Response by DA9801 in Diabetic Mouse

    Directory of Open Access Journals (Sweden)

    Yeong Ro Lee

    2015-01-01

    Full Text Available Diabetes mellitus (DM) is a metabolic disease that involves disorders such as diabetic retinopathy, diabetic neuropathy, and diabetic hearing loss. Recently, neurotrophin has become a treatment target that has been shown to be an attractive alternative for recovering auditory function altered by DM. The aim of this study was to evaluate the effect of DA9801, a mixture of Dioscorea nipponica and Dioscorea japonica extracts, on the auditory function damage produced in an STZ-induced diabetic model and to provide evidence of the mechanisms involved in these protective effects. We found a potential application of DA9801 to hearing impairment in the STZ-induced diabetic model, demonstrated by a reduction of the DM-induced deterioration of the ABR threshold in response to clicks and a normalization of wave I–IV latencies and Pa latencies in the AMLR. We also show evidence that these effects might be elicited by inducing NGF through Nr3c1 and Akt. Therefore, this result suggests that the neuroprotective effects of DA9801 on the auditory damage produced by DM may be mediated by an NGF increase resulting from Nr3c1 via Akt.

  10. Word learning in deaf children with cochlear implants: effects of early auditory experience.

    Science.gov (United States)

    Houston, Derek M; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T

    2012-05-01

    Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: Children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children who may be at high risk for poor vocabulary development. © 2012 Blackwell Publishing Ltd.

  11. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf

    2016-01-01

    Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means of ... Outcomes were simulated using the auditory processing model of Jepsen et al. (2008) with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners.
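    For readers unfamiliar with dynamic-range compression, the Python sketch below shows a static input/output characteristic of the kind applied per channel in multi-channel hearing aids: linear below a knee point, compressive above it. The threshold and ratio are illustrative assumptions, not parameters from the study or from the Jepsen et al. (2008) model.

```python
# Minimal sketch: static dynamic-range compression (DRC) input/output curve.
import numpy as np

def drc_output_level(input_db, threshold_db=50.0, ratio=3.0):
    """Linear below the compression threshold, compressive (slope 1/ratio) above it."""
    input_db = np.asarray(input_db, dtype=float)
    above = np.maximum(input_db - threshold_db, 0.0)
    return np.where(input_db <= threshold_db, input_db, threshold_db + above / ratio)

levels_db = np.arange(20, 101, 10)
print(np.column_stack([levels_db, drc_output_level(levels_db)]))  # input vs. output (dB)
```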

  12. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding.

    Science.gov (United States)

    Atilgan, Huriye; Town, Stephen M; Wood, Katherine C; Jones, Gareth P; Maddox, Ross K; Lee, Adrian K C; Bizley, Jennifer K

    2018-02-07

    How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  13. Formant compensation for auditory feedback with English vowels

    DEFF Research Database (Denmark)

    Mitsuya, Takashi; MacDonald, Ewen N; Munhall, Kevin G

    2015-01-01

    Past studies have shown that speakers spontaneously adjust their speech acoustics in response to their auditory feedback perturbed in real time. In the case of formant perturbation, the majority of studies have examined speakers' compensatory production using the English vowel /ɛ/ as in the word "head." Consistent behavioral observations have been reported, and there is lively discussion as to how the production system integrates auditory versus somatosensory feedback to control vowel production. However, different vowels have different oral sensation and proprioceptive information due to differences in the degree of lingual contact or jaw openness. This may in turn influence the ways in which speakers compensate for auditory feedback. The aim of the current study was to examine speakers' compensatory behavior with six English monophthongs. Specifically, the current study tested to see...

  14. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations

    NARCIS (Netherlands)

    Curcic-Blake, Branislava; Ford, Judith M.; Hubl, Daniela; Orlov, Natasza D.; Sommer, Iris E.; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W.; David, Olivier; Mulert, Christoph; Woodward, Todd S.; Aleman, Andre

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of

  15. Effects of background music on objective and subjective performance measures in an auditory BCI

    Directory of Open Access Journals (Sweden)

    Sijie Zhou

    2016-10-01

    Full Text Available Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.

  16. Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children.

    Science.gov (United States)

    Badcock, Nicholas A; Preece, Kathryn A; de Wit, Bianca; Glenn, Katharine; Fieder, Nora; Thie, Johnson; McArthur, Genevieve

    2015-01-01

    Background. Previous work has demonstrated that a commercial gaming electroencephalography (EEG) system, Emotiv EPOC, can be adjusted to provide valid auditory event-related potentials (ERPs) in adults that are comparable to ERPs recorded by a research-grade EEG system, Neuroscan. The aim of the current study was to determine if the same was true for children. Method. An adapted Emotiv EPOC system and Neuroscan system were used to make simultaneous EEG recordings in nineteen 6- to 12-year-old children under "passive" and "active" listening conditions. In the passive condition, children were instructed to watch a silent DVD and ignore 566 standard (1,000 Hz) and 100 deviant (1,200 Hz) tones. In the active condition, they listened to the same stimuli, and were asked to count the number of 'high' (i.e., deviant) tones. Results. Intraclass correlations (ICCs) indicated that the ERP morphology recorded with the two systems was very similar for the P1, N1, P2, N2, and P3 ERP peaks (r = .82 to .95) in both passive and active conditions, and less so, though still strong, for mismatch negativity ERP component (MMN; r = .67 to .74). There were few differences between peak amplitude and latency estimates for the two systems. Conclusions. An adapted EPOC EEG system can be used to index children's late auditory ERP peaks (i.e., P1, N1, P2, N2, P3) and their MMN ERP component.
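    The intraclass correlations used to compare the two systems' ERP waveforms can be computed directly from the paired waveform samples. The Python sketch below implements a consistency ICC(3,1) from a two-way mixed model on synthetic waveforms; the ERP shapes and noise level are assumptions, and the exact ICC form used in the paper may differ.

```python
# Minimal sketch: ICC(3,1) between two simultaneously recorded ERP waveforms.
import numpy as np

def icc_3_1(y):
    """y: (n_samples, k_raters) matrix; consistency ICC from a two-way mixed model."""
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

t = np.linspace(0, 0.5, 250)                                   # 500 ms epoch
research_grade = np.sin(2 * np.pi * 4 * t) * np.exp(-3 * t)    # idealized ERP waveform
gaming_system = research_grade + np.random.default_rng(0).normal(0, 0.05, t.size)
print("ICC(3,1):", round(icc_3_1(np.column_stack([research_grade, gaming_system])), 3))
```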

  17. Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children

    Directory of Open Access Journals (Sweden)

    Nicholas A. Badcock

    2015-04-01

    Full Text Available Background. Previous work has demonstrated that a commercial gaming electroencephalography (EEG) system, Emotiv EPOC, can be adjusted to provide valid auditory event-related potentials (ERPs) in adults that are comparable to ERPs recorded by a research-grade EEG system, Neuroscan. The aim of the current study was to determine if the same was true for children. Method. An adapted Emotiv EPOC system and Neuroscan system were used to make simultaneous EEG recordings in nineteen 6- to 12-year-old children under "passive" and "active" listening conditions. In the passive condition, children were instructed to watch a silent DVD and ignore 566 standard (1,000 Hz) and 100 deviant (1,200 Hz) tones. In the active condition, they listened to the same stimuli, and were asked to count the number of 'high' (i.e., deviant) tones. Results. Intraclass correlations (ICCs) indicated that the ERP morphology recorded with the two systems was very similar for the P1, N1, P2, N2, and P3 ERP peaks (r = .82 to .95) in both passive and active conditions, and less so, though still strong, for the mismatch negativity ERP component (MMN; r = .67 to .74). There were few differences between peak amplitude and latency estimates for the two systems. Conclusions. An adapted EPOC EEG system can be used to index children's late auditory ERP peaks (i.e., P1, N1, P2, N2, P3) and their MMN ERP component.

  18. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    Science.gov (United States)

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
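    Summarizing hardware-measured audio-visual onset lags per browser/computer combination is straightforward once the lag series are available. The Python sketch below uses synthetic lag distributions; the setup names and timing values are purely hypothetical and do not reproduce the paper's measurements.

```python
# Minimal sketch: summarize audio-visual onset lags (ms) per browser/computer setup.
import numpy as np

lags_ms = {
    "browser_a_pc1": np.random.default_rng(0).normal(35, 8, 100),
    "browser_b_pc1": np.random.default_rng(1).normal(55, 20, 100),
    "browser_a_pc2": np.random.default_rng(2).normal(20, 5, 100),
}
for setup, lags in lags_ms.items():
    print(f"{setup:14s} mean={lags.mean():6.1f} ms  sd={lags.std(ddof=1):5.1f} ms  "
          f"range=({lags.min():.0f}, {lags.max():.0f}) ms")
```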

  19. Effect of background music on auditory-verbal memory performance

    OpenAIRE

    Sona Matloubi; Ali Mohammadzadeh; Zahra Jafari; Alireza Akbarzade Baghban

    2014-01-01

    Background and Aim: Music exists in all cultures; many scientists are seeking to understand how music affects cognitive development, such as comprehension, memory, and reading skills. More recently, a considerable number of neuroscience studies on music have been conducted. This study aimed to investigate the effects of null and positive background music, in comparison with silence, on auditory-verbal memory performance. Methods: Forty young adults (male and female) with normal hearing, aged betw...

  20. Effects of auditory stimulation with music of different intensities on heart period

    Directory of Open Access Journals (Sweden)

    Joice A.T. do Amaral

    2016-01-01

    Full Text Available Various studies have indicated that music therapy with relaxing music improves the cardiac function of patients treated with cardiotoxic medication and that heavy-metal music acutely reduces heart rate variability (HRV). There is also evidence that white-noise auditory stimulation above 50 dB causes cardiac autonomic responses. In this study, we aimed to evaluate the acute effects of musical auditory stimulation of different intensities on cardiac autonomic regulation. This study was performed on 24 healthy women between 18 and 25 years of age. We analyzed HRV in the time domain [standard deviation of normal-to-normal RR intervals (SDNN), percentage of adjacent RR intervals with a difference of duration >50 ms (pNN50), and root-mean square of differences between adjacent normal RR intervals in a time interval (RMSSD)] and in the frequency domain [low frequency (LF), high frequency (HF), and LF/HF ratio]. HRV was recorded at rest for 10 minutes. Subsequently, the volunteers were exposed to baroque or heavy-metal music for 5 minutes through an earphone. The volunteers were exposed to three equivalent sound levels (60–70, 70–80, and 80–90 dB). After the first baroque or heavy-metal piece, they remained at rest for 5 minutes and were then exposed to the other type of music. The sequence of songs was randomized for each individual. Heavy-metal auditory stimulation at 80–90 dB reduced the SDNN index compared with control (44.39 ± 14.40 ms vs. 34.88 ± 8.69 ms), and stimulation at 60–70 dB decreased the LF (ms2) index compared with control (668.83 ± 648.74 ms2 vs. 392.5 ± 179.94 ms2). Baroque music at 60–70 dB reduced the LF (ms2) index (587.75 ± 318.44 ms2 vs. 376.21 ± 178.85 ms2). In conclusion, heavy-metal and baroque musical auditory stimulation at lower intensities acutely reduced global modulation of the heart, and only heavy-metal music reduced HRV at higher intensities.
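    The time-domain HRV indices named above are simple functions of the normal-to-normal RR-interval series. The Python sketch below computes SDNN, RMSSD, and pNN50 from a short hypothetical RR series; a real analysis would use several minutes of artifact-free intervals.

```python
# Minimal sketch: time-domain HRV indices from RR intervals (ms).
import numpy as np

rr = np.array([812, 790, 805, 846, 823, 798, 870, 835, 801, 828], dtype=float)

sdnn = rr.std(ddof=1)                         # overall RR variability (SDNN)
diff = np.diff(rr)
rmssd = np.sqrt(np.mean(diff ** 2))           # beat-to-beat variability (RMSSD)
pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)  # % successive differences > 50 ms
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, pNN50 = {pnn50:.1f}%")
```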

  1. Psycho-physiological assessment of a prosthetic hand sensory feedback system based on an auditory display: a preliminary study.

    Science.gov (United States)

    Gonzalez, Jose; Soma, Hirokazu; Sekine, Masashi; Yu, Wenwei

    2012-06-09

    Prosthetic hand users have to rely extensively on visual feedback, which seems to lead to a high conscious burden for the users, in order to manipulate their prosthetic devices. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at the performance results, not taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the usage of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. 10 male subjects (26+/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day the experiment objective, tasks, and experiment setting were explained. Then, they completed a 30-minute guided training. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. The performance improvements when using auditory cues, along with vision

  2. Psycho-physiological assessment of a prosthetic hand sensory feedback system based on an auditory display: a preliminary study

    Directory of Open Access Journals (Sweden)

    Gonzalez Jose

    2012-06-01

    Full Text Available Abstract Background Prosthetic hand users have to rely extensively on visual feedback, which seems to lead to a high conscious burden for the users, in order to manipulate their prosthetic devices. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at the performance results, not taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the usage of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Methods 10 male subjects (26+/- years old) participated in this study and were asked to come for 2 consecutive days. On the first day the experiment objective, tasks, and experiment setting were explained. Then, they completed a 30-minute guided training. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. Results The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. Conclusions The performance

  3. Auditory prediction during speaking and listening.

    Science.gov (United States)

    Sato, Marc; Shiller, Douglas M

    2018-02-02

    In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually-cued conditions, reduced N1/P2 amplitude was observed during speaking vs. listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not during active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  5. Auditory Steady-State Response Thresholds in Adults With Conductive and Mild to Moderate Sensorineural Hearing Loss

    OpenAIRE

    Hosseinabadi, Reza; Jafarzadeh, Sadegh

    2014-01-01

    Background: The auditory steady-state response (ASSR) provides a frequency-specific and automatic assessment of hearing sensitivity and is used in infants and difficult-to-test adults. Objectives: The aim of this study was to compare ASSR thresholds among various types (normal, conductive, and sensorineural), degrees (normal, mild, and moderate), and configurations (flat and sloping) of hearing sensitivity, and to measure the cutoff point between normal condition and hearing loss for differe...

  6. MODELING SPECTRAL AND TEMPORAL MASKING IN THE HUMAN AUDITORY SYSTEM

    DEFF Research Database (Denmark)

    Dau, Torsten; Jepsen, Morten Løve; Ewert, Stephan D.

    2007-01-01

    An auditory signal processing model is presented that simulates psychoacoustical data from a large variety of experimental conditions related to spectral and temporal masking. The model is based on the modulation filterbank model by Dau et al. [J. Acoust. Soc. Am. 102, 2892-2905 (1997)] but inclu... The model was tested in conditions of tone-in-noise masking, intensity discrimination, spectral masking with tones and narrowband noises, forward masking with (on- and off-frequency) noise- and pure-tone maskers, and amplitude modulation detection using different noise carrier bandwidths. One of the key properties...

  7. Acoustic Trauma Changes the Parvalbumin-Positive Neurons in Rat Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Congli Liu

    2018-01-01

    Full Text Available Acoustic trauma has been reported to damage the auditory periphery and central auditory system, and compromised cortical inhibition is involved in auditory disorders such as hyperacusis and tinnitus. Parvalbumin-containing neurons (PV neurons), a subset of GABAergic neurons, greatly shape and synchronize neural network activities. However, the changes in PV neurons following acoustic trauma remain to be elucidated. The present study investigated how auditory cortical PV neurons change following unilateral 1-hour noise exposure (left ear, one-octave band noise centered at 16 kHz, 116 dB SPL). Noise exposure elevated the auditory brainstem response threshold of the exposed ear when examined 7 days later. More detectable PV neurons were observed in both sides of the auditory cortex of noise-exposed rats when compared to controls. The detectable PV neurons of the left auditory cortex (ipsilateral to the noise-exposed ear) outnumbered those of the right auditory cortex (contralateral to the exposed ear). Quantification of Western blotted bands revealed a higher expression level of PV protein in the left cortex. These findings of more active PV neurons in noise-exposed rats suggest that a compensatory mechanism might be initiated to maintain a stable state of the brain.

  8. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input.

    Directory of Open Access Journals (Sweden)

    Max F K Happel

    Full Text Available Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex.

  9. Auditory white noise reduces postural fluctuations even in the absence of vision.

    Science.gov (United States)

    Ross, Jessica Marie; Balasubramaniam, Ramesh

    2015-08-01

    The contributions of somatosensory, vestibular, and visual feedback to balance control are well documented, but the influence of auditory information, especially acoustic noise, on balance is less clear. Because somatosensory noise has been shown to reduce postural sway, we hypothesized that noise from the auditory modality might have a similar effect. Given that the nervous system uses noise to optimize signal transfer, adding mechanical or auditory noise should lead to increased feedback about sensory frames of reference used in balance control. In the present experiment, postural sway was analyzed in healthy young adults while they were presented with continuous white noise, in the presence and absence of visual information. Our results show reduced postural sway variability (as indexed by the body's center of pressure) in the presence of auditory noise, even when visual information was not present. Nonlinear time series analysis revealed that auditory noise has an additive effect, independent of vision, on postural stability. Further analysis revealed that auditory noise reduced postural sway variability in both low- and high-frequency regimes. Our results support the idea that auditory white noise reduces postural sway, suggesting that auditory noise might be used for therapeutic and rehabilitation purposes in older individuals and those with balance disorders.
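
    As an illustration of how postural sway variability can be quantified from center-of-pressure (COP) recordings, the sketch below computes two generic measures (RMS radial displacement and sway path length) on simulated COP traces for a "silence" and an "auditory noise" condition. The data, sampling rate, and measures are assumptions made for illustration; they are not the study's recordings or its nonlinear time-series analyses.

      # Illustrative center-of-pressure (COP) sway measures on simulated data
      # (generic RMS and path-length measures; not the study's recordings or analyses).
      import numpy as np

      rng = np.random.default_rng(0)
      fs = 100                                     # COP sampling rate in Hz (assumed)

      def simulate_cop(seconds, step_scale):
          """Random-walk-like COP trace (anterior-posterior, medio-lateral)."""
          steps = rng.normal(scale=step_scale, size=(seconds * fs, 2))
          return np.cumsum(steps, axis=0)

      def sway_measures(cop):
          centered = cop - cop.mean(axis=0)
          rms = np.sqrt(np.mean(np.sum(centered ** 2, axis=1)))        # RMS radial displacement
          path = np.sum(np.linalg.norm(np.diff(cop, axis=0), axis=1))  # total sway path length
          return rms, path

      conditions = {"silence": simulate_cop(30, 0.05),
                    "auditory noise": simulate_cop(30, 0.04)}
      for label, cop in conditions.items():
          rms, path = sway_measures(cop)
          print(f"{label}: RMS = {rms:.2f}, path length = {path:.2f}")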

  10. Normal people working in normal organizations with normal equipment: system safety and cognition in a mid-air collision.

    Science.gov (United States)

    de Carvalho, Paulo Victor Rodrigues; Gomes, José Orlando; Huber, Gilbert Jacob; Vidal, Mario Cesar

    2009-05-01

    A fundamental challenge in improving the safety of complex systems is to understand how accidents emerge in normal working situations, with equipment functioning normally in normally structured organizations. We present a field study, illustrating one response to this challenge, of the en route mid-air collision between a commercial carrier and an executive jet in the clear afternoon Amazon sky, in which 154 people lost their lives. Our focus was on how and why the several safety barriers of a well structured air traffic system melted down, enabling the occurrence of this tragedy without any catastrophic component failure and in a situation where everything was functioning normally. We identify strong consistencies and feedbacks regarding factors of system day-to-day functioning that made monitoring and awareness difficult, and the cognitive strategies that operators have developed to deal with overall system behavior. These findings emphasize the active problem-solving behavior needed in air traffic control work, and highlight how the day-to-day functioning of the system can jeopardize such behavior. An immediate consequence is that safety managers and engineers should review their traditional safety approach and accident models based on equipment failure probability, linear combinations of failures, rules and procedures, and human errors, to deal with complex patterns of coincidence possibilities, unexpected links, resonance among system functions and activities, and system cognition.

  11. Inhalation of Hydrocarbon Jet Fuel Suppress Central Auditory Nervous System Function.

    Science.gov (United States)

    Guthrie, O'neil W; Wong, Brian A; McInturf, Shawn M; Reboulet, James E; Ortiz, Pedro A; Mattie, David R

    2015-01-01

    More than 800 million L/d of hydrocarbon fuels is used to power cars, boats, and jet airplanes. The weekly consumption of these fuels necessarily puts the public at risk for repeated inhalation exposure. Recent studies showed that exposure to hydrocarbon jet fuel produces lethality in presynaptic sensory cells, leading to hearing loss, especially in the presence of noise. However, the effects of hydrocarbon jet fuel on the central auditory nervous system (CANS) have not received much attention. It is important to investigate the effects of hydrocarbons on the CANS in order to complete current knowledge regarding the ototoxic profile of such exposures. The objective of the current study was to determine whether inhalation exposure to hydrocarbon jet fuel might affect the functions of the CANS. Male Fischer 344 rats were randomly divided into four groups (control, noise, fuel, and fuel + noise). The structural and functional integrity of presynaptic sensory cells was determined in each group. Neurotransmission in both peripheral and central auditory pathways was simultaneously evaluated in order to identify and differentiate between peripheral and central dysfunctions. There were no detectable effects on pre- and postsynaptic peripheral functions. However, the responsiveness of the brain was significantly depressed and neural transmission time was markedly delayed. The development of CANS dysfunctions in the general public and the military due to cumulative exposure to hydrocarbon fuels may represent a significant but currently unrecognized public health issue.

  12. The relationship of phonological ability, speech perception and auditory perception in adults with dyslexia.

    Directory of Open Access Journals (Sweden)

    Jeremy eLaw

    2014-07-01

    Full Text Available This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or independently contribute to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal-reading adults. Phonological skills were tested by the typical threefold tasks, i.e. rapid automatic naming, verbal short-term memory, and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) task and an amplitude rise time (RT) task; an intensity discrimination task (ID) was included as a non-dynamic control task. Speech perception was assessed by means of sentences-in-noise and words-in-noise tasks. Group analysis revealed significant group differences in the auditory tasks (i.e. RT and ID) and in the phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech in noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet each individual dyslexic reader does not display a clear pattern of deficiencies across the levels of processing skills. Although our results support phonological and slow-rate dynamic auditory deficits that relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors is expressed and interacts with environmental and higher-order cognitive influences.

  13. Superior pre-attentive auditory processing in musicians.

    Science.gov (United States)

    Koelsch, S; Schröger, E; Tervaniemi, M

    1999-04-26

    The present study focuses on influences of long-term experience on auditory processing, providing the first evidence for pre-attentively superior auditory processing in musicians. This was revealed by the brain's automatic change-detection response, which is reflected electrically as the mismatch negativity (MMN) and generated by the operation of sensory (echoic) memory, the earliest cognitive memory system. Major chords and single tones were presented to both professional violinists and non-musicians under ignore and attend conditions. Slightly impure chords, presented among perfect major chords, elicited a distinct MMN in professional musicians, but not in non-musicians. This demonstrates that, compared to non-musicians, musicians are superior in pre-attentively extracting more information out of musically relevant stimuli. Since effects of long-term experience on pre-attentive auditory processing have so far been reported for language-specific phonemes only, the results indicate that sensory memory mechanisms can be modulated by training on a more general level.

  14. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude-modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results on twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
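
    The directed coherence (DCOH) measure mentioned above is commonly derived from a multivariate autoregressive (MVAR) model of the multichannel recordings. The sketch below shows one generic way to compute it on simulated data: fit an MVAR model, form the spectral transfer matrix, and normalize by the summed inflow at each frequency. The channel count, model order, and simulated coupling are assumptions, and this is not the authors' exact analysis pipeline.

      # Generic directed coherence (DCOH) from an MVAR model on simulated channels
      # (illustrative; not the authors' exact analysis pipeline).
      import numpy as np
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(1)
      fs, n, order = 200, 4000, 5                   # sampling rate, samples, MVAR order (assumed)
      x = rng.normal(size=(n, 3))                   # three simulated channels
      for i in range(2, n):
          x[i, 1] += 0.7 * x[i - 2, 0]              # delayed influence: channel 0 -> channel 1

      res = VAR(x).fit(maxlags=order)               # least-squares MVAR fit of fixed order
      A = res.coefs                                 # shape (order, k, k); A[l] is the lag-(l+1) matrix
      sigma = np.diag(np.asarray(res.sigma_u))      # residual (innovation) variances
      k = x.shape[1]

      freqs = np.linspace(1, fs / 2, 100)
      dcoh = np.zeros((len(freqs), k, k))
      for fi, f in enumerate(freqs):
          Af = np.eye(k, dtype=complex)
          for l in range(order):
              Af -= A[l] * np.exp(-2j * np.pi * f * (l + 1) / fs)
          H = np.linalg.inv(Af)                     # spectral transfer matrix at frequency f
          num = np.sqrt(sigma)[None, :] * np.abs(H)                      # sigma_j * |H_ij(f)|
          den = np.sqrt((np.abs(H) ** 2 * sigma[None, :]).sum(axis=1, keepdims=True))
          dcoh[fi] = num / den                      # DCOH from column j to row i, in [0, 1]

      # The 0 -> 1 direction should dominate the reverse direction on average
      print(dcoh[:, 1, 0].mean(), dcoh[:, 0, 1].mean())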

  15. HIV/AIDS and auditory function in adults: the need for intensified ...

    African Journals Online (AJOL)

    It begins with an introduction to the effects of HIV disease and treatment on the auditory system, and so highlights the need to put auditory function in adults with HIV or AIDS on the healthcare and research agenda in developing countries. The discussion refers to this population in regard to: published prevalence and ...

  16. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    Science.gov (United States)

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  17. Contralateral white noise attenuates 40-Hz auditory steady-state fields but not N100m in auditory evoked fields.

    Science.gov (United States)

    Kawase, Tetsuaki; Maki, Atsuko; Kanno, Akitake; Nakasato, Nobukazu; Sato, Mika; Kobayashi, Toshimitsu

    2012-01-16

    The response characteristics of different auditory cortical responses under conventional central masking conditions were examined by comparing the effects of contralateral white noise on the cortical component of 40-Hz auditory steady-state fields (ASSFs) and the N100m component of auditory evoked fields (AEFs) for tone bursts, using a helmet-shaped magnetoencephalography system in 8 healthy volunteers (7 males, mean age 32.6 years). The ASSFs were elicited by monaural 1000 Hz amplitude-modulated tones at 80 dB SPL, with the amplitude modulated at 39 Hz. The AEFs were elicited by monaural 1000 Hz tone bursts of 60 ms duration (rise and fall times of 10 ms, plateau time of 40 ms) at 80 dB SPL. The results indicated that continuous white noise at 70 dB SPL presented to the contralateral ear did not suppress the N100m response in either hemisphere, but significantly reduced the amplitude of the 40-Hz ASSF in both hemispheres, with asymmetry in that suppression of the 40-Hz ASSF was greater in the right hemisphere. The different effects of contralateral white noise on these two responses may reflect different functional auditory processes in the cortices. Copyright © 2011 Elsevier Inc. All rights reserved.
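
    As an illustration of how a steady-state response amplitude at the modulation frequency can be estimated, the sketch below averages simulated epochs and reads the single-sided spectrum at 39 Hz. The simulated signal, sampling rate, and epoch length are assumptions; the study's actual MEG source analysis is not reproduced here.

      # Illustrative steady-state response amplitude at the modulation frequency,
      # estimated from an averaged epoch (simulated data; generic procedure).
      import numpy as np

      rng = np.random.default_rng(2)
      fs, f_mod, n_epochs = 1000, 39, 200           # sampling rate, modulation rate, epochs (assumed)
      t = np.arange(0, 1.0, 1 / fs)                 # 1-s epochs

      # Simulated single trials: a small 39-Hz component buried in noise
      trials = 0.2 * np.sin(2 * np.pi * f_mod * t) + rng.normal(size=(n_epochs, t.size))
      evoked = trials.mean(axis=0)                  # averaging attenuates non-phase-locked noise

      spectrum = np.abs(np.fft.rfft(evoked)) / t.size * 2    # single-sided amplitude spectrum
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      idx = np.argmin(np.abs(freqs - f_mod))
      print(f"amplitude at {freqs[idx]:.0f} Hz: {spectrum[idx]:.3f}")   # close to 0.2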

  18. Test of a motor theory of long-term auditory memory.

    Science.gov (United States)

    Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer

    2012-05-01

    Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75-80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve.

  19. Motion processing after sight restoration: No competition between visual recovery and auditory compensation.

    Science.gov (United States)

    Bottari, Davide; Kekunnaya, Ramesh; Hense, Marlene; Troje, Nikolaus F; Sourav, Suddha; Röder, Brigitte

    2018-02-15

    The present study tested whether or not functional adaptations following congenital blindness are maintained in humans after sight restoration and whether they interfere with visual recovery. In permanently, congenitally blind individuals, both intramodal plasticity (e.g., changes in auditory cortex) and crossmodal plasticity (e.g., activation of visual cortex by auditory stimuli) have been observed. Both phenomena were hypothesized to contribute to improved auditory functions. For example, it has been shown that early, permanently blind individuals outperform sighted controls in auditory motion processing and that auditory motion stimuli elicit activity in typical visual motion areas. Yet it is unknown what happens to these behavioral adaptations and cortical reorganizations when sight is restored, that is, whether compensatory auditory changes are lost and to what degree visual motion processing is reinstalled. Here we employed a combined behavioral-electrophysiological approach in a group of sight-recovery individuals with a history of a transient phase of congenital blindness lasting from several months to several years. They, as well as two control groups, one with visual impairments and one normally sighted, were tested in a visual and an auditory motion discrimination experiment. Task difficulty was manipulated by varying the visual motion coherence and the signal-to-noise ratio, respectively. The congenital cataract-reversal individuals showed lower performance in the visual global motion task than both control groups. At the same time, they outperformed both control groups in auditory motion processing, suggesting that at least some compensatory behavioral adaptation as a consequence of complete blindness from birth was maintained. Alpha oscillatory activity during the visual task was significantly lower in congenital cataract-reversal individuals, and they did not show ERPs modulated by visual motion coherence as observed in both control groups. In...

  20. The Relationship between Types of Attention and Auditory Processing Skills: Reconsidering Auditory Processing Disorder Diagnosis

    Science.gov (United States)

    Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva

    2018-01-01

    Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD and to assess whether a proposed diagnostic protocol for APD, including measures of attention, could provide useful information for APD management. A pilot study including 27 children, aged 7–11 years, referred for APD assessment was conducted. The validated Test of Everyday Attention for Children, with visual and auditory attention tasks, the Listening in Spatialized Noise-Sentences test, the Children's Communication Checklist questionnaire, and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analysis examining the relationship between these tests and Cochran's Q test analysis comparing proportions of diagnosis under each proposed battery were conducted. Divided auditory and divided auditory-visual attention strongly correlated with the dichotic digits test (r = 0.68), and a subset of the children was identified by the attention battery as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not have a significantly different diagnosis proportion than the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would have otherwise been considered not to have ADs. The findings show that a subgroup of children with APD demonstrates underlying sustained and divided attention deficits. Attention deficits in children with APD appear to be centred around the auditory modality, but further examination of types of attention in both...

  1. Auditory attention activates peripheral visual cortex.

    Directory of Open Access Journals (Sweden)

    Anthony D Cate

    Full Text Available BACKGROUND: Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency. CONCLUSIONS/SIGNIFICANCE: Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

  2. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  3. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences

    Directory of Open Access Journals (Sweden)

    Stanislava Knyazeva

    2018-01-01

    Full Text Available This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman’s population separation model of auditory streaming.

  4. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences.

    Science.gov (United States)

    Knyazeva, Stanislava; Selezneva, Elena; Gorkin, Alexander; Aggelopoulos, Nikolaos C; Brosch, Michael

    2018-01-01

    This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman's population separation model of auditory streaming.
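
    To illustrate the stimulus manipulation described above, the sketch below synthesizes complexes of unresolved harmonics added in cosine, alternating, or random phase; the components have identical amplitudes, so the complexes share an amplitude spectrum but differ in temporal waveform (summarized here by the crest factor). The fundamental frequency, harmonic range, and the particular alternating-phase convention are illustrative assumptions, not the exact stimuli of the study.

      # Generic synthesis of tone complexes from unresolved harmonics added in
      # cosine, alternating, or random phase (illustrative parameters; not the
      # exact stimuli of the study).
      import numpy as np

      fs, dur, f0 = 48000, 0.1, 100                 # sample rate, duration, fundamental (assumed)
      harmonics = np.arange(20, 41)                 # high, unresolved harmonic numbers (assumed)
      t = np.arange(0, dur, 1 / fs)

      def complex_tone(phase_mode, rng=None):
          if phase_mode == "cosine":
              phases = np.zeros(harmonics.size)
          elif phase_mode == "alternating":         # one common convention: alternate 0 and pi/2
              phases = np.where(harmonics % 2 == 0, 0.0, np.pi / 2)
          elif phase_mode == "random":
              phases = (rng or np.random.default_rng()).uniform(0, 2 * np.pi, harmonics.size)
          else:
              raise ValueError(phase_mode)
          components = np.cos(2 * np.pi * np.outer(harmonics * f0, t) + phases[:, None])
          tone = components.sum(axis=0)             # identical component amplitudes in every mode
          return tone / np.max(np.abs(tone))        # normalize peak amplitude

      # Same component amplitude spectrum (up to scaling), different waveform "peakiness":
      for mode in ("cosine", "alternating", "random"):
          tone = complex_tone(mode, rng=np.random.default_rng(3))
          crest = np.max(np.abs(tone)) / np.sqrt(np.mean(tone ** 2))
          print(f"{mode} phase: crest factor = {crest:.1f}")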

  5. Polarity-Specific Transcranial Direct Current Stimulation Disrupts Auditory Pitch Learning

    Directory of Open Access Journals (Sweden)

    Reiko eMatsushita

    2015-05-01

    Full Text Available Transcranial direct current stimulation (tDCS) is attracting increasing interest because of its potential for therapeutic use. While its effects have been investigated mainly with motor and visual tasks, less is known in the auditory domain. Past tDCS studies with auditory tasks demonstrated various behavioural outcomes, possibly due to differences in stimulation parameters or task measurements used in each study. Further research using well-validated tasks is therefore required for clarification of the behavioural effects of tDCS on the auditory system. Here, we took advantage of findings from a prior functional magnetic resonance imaging study, which demonstrated that the right auditory cortex is modulated during fine-grained pitch learning of microtonal melodic patterns. Targeting the right auditory cortex with tDCS using this same task thus allowed us to test the hypothesis that this region is causally involved in pitch learning. Participants in the current study were trained for three days; on each day, pitch discrimination thresholds for microtonal melodies were measured with a psychophysical staircase procedure. We administered anodal, cathodal, or sham tDCS to three groups of participants over the right auditory cortex on the second day of training during performance of the task. Both the sham and the cathodal groups showed the expected significant learning effect (decreased pitch thresholds over the three days of training); in contrast, we observed a blocking effect of anodal tDCS on auditory pitch learning, such that this group showed no significant change in thresholds over the three days. The results support a causal role for the right auditory cortex in pitch discrimination learning.
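
    A psychophysical staircase of the general kind mentioned above can be sketched as a 2-down/1-up adaptive track, which converges near the 70.7%-correct point of the psychometric function. The simulated listener, step size, and stopping rule below are illustrative assumptions, not the study's exact procedure.

      # Generic 2-down/1-up adaptive staircase with a simulated listener
      # (illustrative; not the study's exact procedure).
      import numpy as np

      rng = np.random.default_rng(4)
      true_threshold = 20.0                         # simulated listener's threshold in cents (assumed)

      def correct_response(delta_cents):
          """Simulated psychometric function with a 50% chance floor."""
          p = 0.5 + 0.5 / (1.0 + np.exp(-(delta_cents - true_threshold) / 4.0))
          return rng.random() < p

      delta, step = 100.0, 10.0                     # starting difference and step size (cents)
      n_correct, last_direction, reversals = 0, None, []

      while len(reversals) < 8:
          if correct_response(delta):
              n_correct += 1
              if n_correct == 2:                    # two correct in a row -> make the task harder
                  n_correct = 0
                  if last_direction == "up":
                      reversals.append(delta)
                  delta = max(delta - step, 1.0)
                  last_direction = "down"
          else:                                     # one error -> make the task easier
              n_correct = 0
              if last_direction == "down":
                  reversals.append(delta)
              delta += step
              last_direction = "up"

      # Threshold estimate: mean of the last reversal points (~70.7%-correct point)
      print(f"estimated threshold: {np.mean(reversals[-6:]):.1f} cents")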

  6. Pre-Attentive Auditory Processing of Lexicality

    Science.gov (United States)

    Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan

    2004-01-01

    The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…

  7. Tinnitus. I: Auditory mechanisms: a model for tinnitus and hearing impairment.

    Science.gov (United States)

    Hazell, J W; Jastreboff, P J

    1990-02-01

    A model is proposed for tinnitus and sensorineural hearing loss involving cochlear pathology. As tinnitus is defined as a cortical perception of sound in the absence of an appropriate external stimulus, it must result from a generator in the auditory system whose output undergoes extensive auditory processing before it is perceived. The concept of spatial nonlinearity in the cochlea is presented as a cause of tinnitus generation controlled by the efferents. Various clinical presentations of tinnitus and the way in which they respond to changes in the environment are discussed with respect to this control mechanism. The concept of auditory retraining as part of the habituation process, and interaction with the prefrontal cortex and limbic system, is presented as a central model that emphasizes the importance of the emotional significance and meaning of tinnitus.

  8. Music training for the development of auditory skills.

    Science.gov (United States)

    Kraus, Nina; Chandrasekaran, Bharath

    2010-08-01

    The effects of music training in relation to brain plasticity have caused excitement, evident from the popularity of books on this topic among scientists and the general public. Neuroscience research has shown that music training leads to changes throughout the auditory system that prime musicians for listening challenges beyond music processing. This effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness. Therefore, the role of music in shaping individual development deserves consideration.

  9. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    Science.gov (United States)

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery including measures of lexical, grammatical, auditory and verbal memory tests. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Qualitatively good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Auditory orientation in crickets: Pattern recognition controls reactive steering

    Science.gov (United States)

    Poulet, James F. A.; Hedwig, Berthold

    2005-10-01

    Many groups of insects are specialists in exploiting sensory cues to locate food resources or conspecifics. To achieve orientation, bees and ants analyze the polarization pattern of the sky, male moths orient along the females' odor plume, and cicadas, grasshoppers, and crickets use acoustic signals to locate singing conspecifics. In comparison with olfactory and visual orientation, where learning is involved, auditory processing underlying orientation in insects appears to be more hardwired and genetically determined. In each of these examples, however, orientation requires a recognition process identifying the crucial sensory pattern to interact with a localization process directing the animal's locomotor activity. Here, we characterize this interaction. Using a sensitive trackball system, we show that, during cricket auditory behavior, the recognition process that is tuned toward the species-specific song pattern controls the amplitude of auditory evoked steering responses. Females perform small reactive steering movements toward any sound patterns. Hearing the male's calling song increases the gain of auditory steering within 2-5 s, and the animals even steer toward nonattractive sound patterns inserted into the species-specific pattern. This gain control mechanism in the auditory-to-motor pathway allows crickets to pursue species-specific sound patterns temporarily corrupted by environmental factors and may reflect the organization of recognition and localization networks in insects. Keywords: localization, phonotaxis

  11. Evaluation of temporal bone pneumatization on high resolution CT (HRCT) measurements of the temporal bone in normal and otitis media group and their correlation to measurements of internal auditory meatus, vestibular or cochlear aqueduct

    International Nuclear Information System (INIS)

    Nakamura, Miyako

    1988-01-01

    High resolution CT axial scans were made at three levels of the temporal bone in 91 cases. These cases consisted of 109 sides with normal pneumatization (NR group) and 73 with poor pneumatization resulting from chronic otitis media (OM group). The NR group included sides with sensorineural hearing loss and/or sudden deafness. The three levels of continuous slicing were chosen at the internal auditory meatus, the vestibular aqueduct, and the cochlear aqueduct, respectively. In each slice, two sagittal and two horizontal measurements were made on the outer contour of the temporal bone. At the appropriate level, the diameter as well as the length of the internal auditory meatus, the vestibular aqueduct, or the cochlear aqueduct was measured. Measurements of the temporal bone showed statistically significant differences between the NR and OM groups. Correlations of both the diameter and the length of the internal auditory meatus with the temporal bone measurements were statistically significant. Neither the measurements of the vestibular aqueduct nor those of the cochlear aqueduct showed any significant correlation with those of the temporal bone. (author)

  12. Assessment of children with suspected auditory processing disorder: a factor analysis study.

    Science.gov (United States)

    Ahmmed, Ansar U; Ahmmed, Afsara A; Bath, Julie R; Ferguson, Melanie A; Plack, Christopher J; Moore, David R

    2014-01-01

    To identify the factors that may underlie the deficits in children with listening difficulties, despite normal pure-tone audiograms. These children may have auditory processing disorder (APD), but there is no universally agreed consensus as to what constitutes APD. The authors therefore refer to these children as children with suspected APD (susAPD) and aim to clarify the role of attention, cognition, memory, sensorimotor processing speed, speech, and nonspeech auditory processing in susAPD. It was expected that a factor analysis would show how nonauditory and supramodal factors relate to auditory behavioral measures in such children with susAPD. This would facilitate greater understanding of the nature of listening difficulties, thus further helping with characterizing APD and designing multimodal test batteries to diagnose APD. Factor analysis of outcomes from 110 children (68 male, 42 female; aged 6 to 11 years) with susAPD on a widely used clinical test battery (SCAN-C) and a research test battery (MRC Institute of Hearing Research Multi-center Auditory Processing "IMAP"), both of which have age-based normative data, was conducted. The IMAP included backward masking, simultaneous masking, frequency discrimination, nonverbal intelligence, working memory, reading, alerting attention and motor reaction times to auditory and visual stimuli. SCAN-C included monaural low-redundancy speech (auditory closure and speech in noise) and dichotic listening tests (competing words and competing sentences) that assess divided auditory attention and hence executive attention. Three factors were extracted: "general auditory processing," "working memory and executive attention," and "processing speed and alerting attention." Frequency discrimination, backward masking, simultaneous masking, and monaural low-redundancy speech tests represented the "general auditory processing" factor. Dichotic listening and the IMAP cognitive tests (apart from nonverbal intelligence) were represented in the "working
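
    As an illustration of the factor-analytic approach described above, the sketch below fits a three-factor model to a simulated battery of standardized test scores using scikit-learn; the test names, loadings, and sample are hypothetical and are not the study's data or its exact extraction and rotation choices.

      # Minimal exploratory factor analysis of a simulated test battery
      # (hypothetical test names and loadings; not the study's data or exact procedure).
      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(5)
      n_children = 110
      latent = rng.normal(size=(n_children, 3))     # e.g., auditory, memory, speed factors
      loadings = np.array([
          [0.8, 0.1, 0.0],   # frequency discrimination
          [0.7, 0.2, 0.1],   # backward masking
          [0.1, 0.8, 0.1],   # working memory
          [0.2, 0.7, 0.0],   # dichotic listening
          [0.0, 0.1, 0.8],   # motor reaction time
          [0.1, 0.0, 0.7],   # alerting attention
      ])
      scores = latent @ loadings.T + 0.4 * rng.normal(size=(n_children, loadings.shape[0]))

      X = StandardScaler().fit_transform(scores)
      fa = FactorAnalysis(n_components=3, random_state=0).fit(X)
      print(np.round(fa.components_, 2))            # estimated loadings (factors x tests)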

  13. THE EFFECT OF THE AUDITORY, INTELLECTUALLY, AND REPETITION LEARNING MODEL ON CONCEPT COMPREHENSION ABILITY AT SMP PUSTEK SERPONG

    Directory of Open Access Journals (Sweden)

    Selviani Fitri

    2016-09-01

    Full Text Available This study aims to determine the effect of the Auditory, Intellectually, and Repetition (AIR) learning model on comprehension of the cube concept among grade VIII students at SMP Pustek Serpong. The research is a quasi-experimental study. The research instruments were pretest and posttest comprehension questions. The data were analyzed using a normality test, a homogeneity test, and a t-test. The results show that there is a difference in cube concept comprehension between students taught with the AIR learning model and students taught with a conventional learning model. Keywords: Auditory, Intellectually, and Repetition (AIR) learning model, concept comprehension
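
    The analysis steps named in the abstract (normality test, homogeneity test, t-test) can be sketched as follows on simulated scores; the group sizes, means, and the particular tests (Shapiro-Wilk, Levene, independent-samples t-test) are illustrative assumptions, not the study's data or necessarily its exact test choices.

      # Normality check, homogeneity-of-variance check, and independent-samples t-test
      # on simulated scores (illustrative; not the study's data).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      air_group = rng.normal(loc=78, scale=8, size=30)           # taught with the AIR model
      conventional_group = rng.normal(loc=71, scale=8, size=30)  # conventional teaching

      # Normality of each group (Shapiro-Wilk)
      for name, group in [("AIR", air_group), ("conventional", conventional_group)]:
          w, p = stats.shapiro(group)
          print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")

      # Homogeneity of variances (Levene's test)
      lev_stat, lev_p = stats.levene(air_group, conventional_group)
      print(f"Levene: W = {lev_stat:.3f}, p = {lev_p:.3f}")

      # Independent-samples t-test (Welch's correction if variances differ)
      t_stat, t_p = stats.ttest_ind(air_group, conventional_group, equal_var=lev_p > 0.05)
      print(f"t = {t_stat:.3f}, p = {t_p:.3f}")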

  14. A European Perspective on Auditory Processing Disorder-Current Knowledge and Future Research Focus

    Directory of Open Access Journals (Sweden)

    Vasiliki (Vivian) Iliadou

    2017-11-01

    Full Text Available Current notions of "hearing impairment," as reflected in clinical audiological practice, do not acknowledge the needs of individuals who have normal pure-tone hearing sensitivity but who experience auditory processing difficulties in everyday life that are indexed by reduced performance in other more sophisticated audiometric tests such as speech audiometry in noise or complex non-speech sound perception. This disorder, defined as "Auditory Processing Disorder" (APD) or "Central Auditory Processing Disorder," is classified in the current tenth version of the International Classification of Diseases as H93.25 and in the forthcoming beta eleventh version. APDs may have detrimental effects on the affected individual, with low self-esteem, anxiety, and depression, and symptoms may remain into adulthood. These disorders may interfere with learning per se and with communication, social, emotional, and academic-work aspects of life. The objective of the present paper is to define a baseline European APD consensus formulated by experienced clinicians and researchers in this specific field of human auditory science. A secondary aim is to identify issues that future research needs to address in order to further clarify the nature of APD and thus assist in optimum diagnosis and evidence-based management. This European consensus presents the main symptoms, conditions, and specific medical history elements that should lead to auditory processing evaluation. Consensus on the definition of the disorder, the optimum diagnostic pathway, and appropriate management is highlighted alongside a perspective on future research focus.

  15. Biomimetic Sonar for Electrical Activation of the Auditory Pathway

    Directory of Open Access Journals (Sweden)

    D. Menniti

    2017-01-01

    Full Text Available Relying on the mechanism of the bat's echolocation system, a bioinspired electronic device has been developed to investigate the cortical activity of mammals in response to auditory sensory stimuli. By means of implanted electrodes, acoustic information about the external environment, generated by a biomimetic system and converted into electrical signals, was delivered to anatomically selected structures of the auditory pathway. Electrocorticographic recordings showed that the cerebral response is highly dependent on the information carried by the ultrasound signals and is frequency-locked to the signal repetition rate. Frequency analysis reveals that delta and beta rhythm content increases, suggesting that the sensory information is successfully transferred and integrated. In addition, principal component analysis highlights that all the stimuli generate patterns of neural activity which can be clearly classified. The results show that the brain response is modulated by echo signal features, suggesting that the spatial information sent by the biomimetic sonar is efficiently interpreted and encoded by the auditory system. Consequently, these results give a new perspective on artificial environmental perception, which could be used to develop new techniques for treating pathological conditions or influencing our perception of the surroundings.

  16. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Directory of Open Access Journals (Sweden)

    Pegah Afra

    2009-01-01

    Full Text Available Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced, and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as with temporal lobe pathology with intact visual pathways. The induced variety has been reported with experimental and post-surgical blindfolding, as well as with intake of hallucinogens or psychedelics. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for the presence of early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on the acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms. Keywords: synesthesia, auditory-visual, cross modal

  17. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    Directory of Open Access Journals (Sweden)

    Jason A Miranda

    Full Text Available Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggest plasticity in the inferior colliculus. Hormone manipulations revealed that these cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.

  18. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    Science.gov (United States)

    Miranda, Jason A; Shepard, Kathryn N; McClintock, Shannon K; Liu, Robert C

    2014-01-01

    Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggest plasticity in the inferior colliculus. Hormone manipulations revealed that these cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.

  19. The cochlear nerve canal and internal auditory canal in children with normal cochlea but cochlear nerve deficiency

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Fei; Li, Jianhong; Xian, Junfang; Wang, Zhenchang [Dept. of Radiology, Beijing Tongren Hospital, Capital Medical Univ., Beijing (China)], e-mail: cjr.wzhch@vip.163.com; Mo, Lingyan [Dept. of Otorhinolaryngology, Beijing Tongren Hospital, Capital Medical Univ., Beijing (China)

    2013-04-15

    Background: There is an increasing frequency of requests for cochlear implantation (CI) in deaf children, and more detailed image information is necessary for selecting appropriate candidates. Cochlear nerve deficiency (CND) is a contraindication to CI. Magnetic resonance imaging (MRI) has been used to evaluate the integrity of the cochlear nerve. Abnormalities of the cochlear nerve canal (CNC) and internal auditory canal (IAC) have been reported to be associated with CND. Purpose: To correlate CNC manifestation, size, and IAC diameter on high-resolution CT (HRCT) with CND diagnosed by MRI in children. Material and Methods: HRCT images from 35 sensorineurally deaf children who had normal cochleae but bilateral or unilateral CND diagnosed by MRI were studied retrospectively. The CNC and IAC manifestation and size were assessed and correlated with CND. Results: CND was diagnosed by MRI in 54/70 ears (77.1%). Thirty-two ears had an absent cochlear nerve (59.3%), while 22 ears had a small cochlear nerve (40.7%). The CNC diameter was <1.5 mm in 36 ears (66.7%). The CNC diameter ranged between 1.5 and 2.0 mm in seven ears (13.0%) and was >2.0 mm in 11 ears (20.4%). The IAC diameter was <3.0 mm in 25 ears (46.3%) and >3.0 mm in 29 ears (53.7%). Conclusion: A hypoplastic CNC might be more indicative of CND than a narrow IAC.

  20. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    Full Text Available A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we do not yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated by the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal and additional sensory information have limited effects on this.

  1. Plasticity in the Primary Auditory Cortex, Not What You Think it is: Implications for Basic and Clinical Auditory Neuroscience

    Science.gov (United States)

    Weinberger, Norman M.

    2013-01-01

    Standard beliefs that the function of the primary auditory cortex (A1) is the analysis of sound have proven to be incorrect. Its involvement in learning, memory and other complex processes in both animals and humans is now well-established, although often not appreciated. Auditory coding is strongly modified by associative learning, evident as associative representational plasticity (ARP) in which the representation of an acoustic dimension, like frequency, is re-organized to emphasize a sound that has become behaviorally important. For example, the frequency tuning of a cortical neuron can be shifted to match that of a significant sound and the representational area of sounds that acquire behavioral importance can be increased. ARP depends on the learning strategy used to solve an auditory problem and the increased cortical area confers greater strength of auditory memory. Thus, primary auditory cortex is involved in cognitive processes, transcending its assumed function of auditory stimulus analysis. The implications for basic neuroscience and clinical auditory neuroscience are presented and suggestions for remediation of auditory processing disorders are introduced. PMID:25356375

  2. Kölliker’s Organ and the Development of Spontaneous Activity in the Auditory System: Implications for Hearing Dysfunction

    Directory of Open Access Journals (Sweden)

    M. W. Nishani Dayaratne

    2014-01-01

    Full Text Available Prior to the “onset of hearing,” developing cochlear inner hair cells (IHCs) and primary auditory neurons undergo experience-independent activity, which is thought to be important in retaining and refining neural connections in the absence of sound. One of the major hypotheses regarding the origin of such activity involves a group of columnar epithelial supporting cells forming Kölliker’s organ, which is only present during this critical period of auditory development. There is strong evidence for a purinergic signalling mechanism underlying such activity. ATP released through connexin hemichannels may activate P2 purinergic receptors in both Kölliker’s organ and the adjacent IHCs, leading to generation of electrical activity throughout the auditory system. However, recent work has suggested an alternative origin, by demonstrating the ability of IHCs to generate this spontaneous activity without activation by ATP. Regardless, developmental abnormalities of Kölliker’s organ may lead to congenital hearing loss, considering that mutations in ion channels (hemichannels, gap junctions, and calcium channels) involved in Kölliker’s organ activity share strong links with such types of deafness.

  3. Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.

    Science.gov (United States)

    Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta

    2009-01-01

    In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels and discrimination of consonants for the right ear and had more left ear advantage for vowels, indicating undeveloped language laterality.

  4. Medial Auditory Thalamus Is Necessary for Acquisition and Retention of Eyeblink Conditioning to Cochlear Nucleus Stimulation

    Science.gov (United States)

    Halverson, Hunter E.; Poremba, Amy; Freeman, John H.

    2015-01-01

    Associative learning tasks commonly involve an auditory stimulus, which must be projected through the auditory system to the sites of memory induction for learning to occur. The cochlear nucleus (CN) projection to the pontine nuclei has been posited as the necessary auditory pathway for cerebellar learning, including eyeblink conditioning.…

  5. Evaluation of peripheral auditory pathways and brainstem in obstructive sleep apnea.

    Science.gov (United States)

    Matsumura, Erika; Matas, Carla Gentile; Magliaro, Fernanda Cristina Leite; Pedreño, Raquel Meirelles; Lorenzi-Filho, Geraldo; Sanches, Seisse Gabriela Gandolfi; Carvallo, Renata Mota Mamede

    2016-11-25

    Obstructive sleep apnea causes changes in normal sleep architecture, fragmenting it chronically with intermittent hypoxia and leading to serious health consequences in the long term. It is believed that the occurrence of respiratory events during sleep, such as apnea and hypopnea, can impair the transmission of nerve impulses along the auditory pathway, which is highly dependent on the supply of oxygen. However, this association is not well established in the literature. To compare the evaluation of the peripheral auditory pathway and brainstem among individuals with and without obstructive sleep apnea. The sample consisted of 38 adult males, mean age of 35.8 (±7.2) years, divided into four groups matched for age and Body Mass Index. The groups were classified based on polysomnography as: control (n=10), mild obstructive sleep apnea (n=11), moderate obstructive sleep apnea (n=8) and severe obstructive sleep apnea (n=9). All study subjects denied a history of risk for hearing loss and underwent audiometry, tympanometry, acoustic reflex testing and Brainstem Auditory Evoked Response. Statistical analyses were performed using three-factor ANOVA, two-factor ANOVA, the chi-square test, and Fisher's exact test. The significance level for all tests was 5%. There was no difference between the groups for hearing thresholds, tympanometry and the evaluated Brainstem Auditory Evoked Response parameters. An association was observed between the presence of obstructive sleep apnea and changes in the absolute latency of wave V (p=0.03). There was an association between moderate obstructive sleep apnea and change in the latency of wave V (p=0.01). The presence of obstructive sleep apnea is associated with changes in nerve conduction of acoustic stimuli in the auditory pathway in the brainstem. The increase in obstructive sleep apnea severity does not promote worsening of responses assessed by audiometry, tympanometry and Brainstem Auditory Evoked Response. Copyright © 2016 Associação Brasileira de

  6. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations...... within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while...... human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during...

  7. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  8. Modeling auditory perception of individual hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    showed that, in most cases, the reduced or absent cochlear compression, associated with outer hair-cell loss, quantitatively accounts for broadened auditory filters, while a combination of reduced compression and reduced inner hair-cell function accounts for decreased sensitivity and slower recovery from...... selectivity. Three groups of listeners were considered: (a) normal hearing listeners; (b) listeners with a mild-to-moderate sensorineural hearing loss; and (c) listeners with a severe sensorineural hearing loss. A fixed set of model parameters were derived for each hearing-impaired listener. The simulations...

  9. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    Science.gov (United States)

    Wengenroth, Martina; Blatow, Maria; Bendszus, Martin; Schneider, Peter

    2010-08-23

    Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  10. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    Directory of Open Access Journals (Sweden)

    Martina Wengenroth

    Full Text Available BACKGROUND: Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. METHODOLOGY/PRINCIPAL FINDINGS: Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. CONCLUSIONS/SIGNIFICANCE: There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  11. Normal form and synchronization of strict-feedback chaotic systems

    International Nuclear Information System (INIS)

    Wang, Feng; Chen, Shihua; Yu Minghai; Wang Changping

    2004-01-01

    This study concerns the normal form and synchronization of strict-feedback chaotic systems. We prove that any strict-feedback chaotic system can be rendered into a normal form via an invertible transform, and we then present a design procedure to synchronize the normal form of a non-autonomous strict-feedback chaotic system. This approach needs only a scalar driving signal to realize synchronization, no matter how many dimensions the chaotic system contains. Furthermore, the Roessler chaotic system is taken as a concrete example to illustrate the design procedure without transforming the strict-feedback chaotic system into its normal form. Numerical simulations are also provided to show the effectiveness and feasibility of the developed methods
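
    For orientation, the strict-feedback structure referred to above has the generic triangular form shown below; this is the standard textbook form, not the specific system or transform used in the paper:

        \begin{aligned}
        \dot{x}_1 &= f_1(x_1) + g_1(x_1)\,x_2,\\
        \dot{x}_2 &= f_2(x_1, x_2) + g_2(x_1, x_2)\,x_3,\\
        &\;\;\vdots\\
        \dot{x}_n &= f_n(x_1, \dots, x_n) + g_n(x_1, \dots, x_n)\,u, \qquad g_i(\cdot) \neq 0.
        \end{aligned}

    Because each state feeds only the next equation, an invertible change of coordinates z = T(x) can flatten this triangular structure into a normal form, which is what makes synchronization with a single scalar driving signal possible regardless of the system's dimension.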

  12. Predictors of auditory performance in hearing-aid users: The role of cognitive function and auditory lifestyle (A)

    DEFF Research Database (Denmark)

    Vestergaard, Martin David

    2006-01-01

    no objective benefit can be measured. It has been suggested that lack of agreement between various hearing-aid outcome components can be explained by individual differences in cognitive function and auditory lifestyle. We measured speech identification, self-report outcome, spectral and temporal resolution...... of hearing, cognitive skills, and auditory lifestyle in 25 new hearing-aid users. The purpose was to assess the predictive power of the nonauditory measures while looking at the relationships between measures from various auditory-performance domains. The results showed that only moderate correlation exists...... between objective and subjective hearing-aid outcome. Different self-report outcome measures showed a different amount of correlation with objective auditory performance. Cognitive skills were found to play a role in explaining speech performance and spectral and temporal abilities, and auditory lifestyle...

  13. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    Science.gov (United States)

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
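
    The equalization step above lends itself to a small worked example. The Python sketch below uses illustrative Stevens-law exponents, not the calibration values from the study, to show how a COP-to-target error could be mapped to visual and auditory feedback levels matched in perceived magnitude (Stevens' power law: perceived magnitude = k * stimulus**a):

        # Sketch: equalizing visual and auditory biofeedback magnitudes with
        # Stevens' power law (psi = k * phi**a). The exponents are illustrative
        # textbook-style values, not the calibration used in the study.

        VISUAL_EXPONENT = 0.7     # assumed exponent for the visual feedback dimension
        AUDITORY_EXPONENT = 0.67  # assumed exponent for loudness-like auditory feedback

        def perceived_magnitude(stimulus: float, exponent: float, k: float = 1.0) -> float:
            """Stevens' power law: percept grows as stimulus**exponent."""
            return k * stimulus ** exponent

        def stimulus_for_percept(target_percept: float, exponent: float, k: float = 1.0) -> float:
            """Invert the power law: physical magnitude needed for a target percept."""
            return (target_percept / k) ** (1.0 / exponent)

        def feedback_pair(cop_error_cm: float) -> tuple[float, float]:
            """Map a COP-to-target error onto visual and auditory feedback levels
            that should feel equally strong under the assumed exponents."""
            target_percept = cop_error_cm  # let the percept scale with the error
            visual_level = stimulus_for_percept(target_percept, VISUAL_EXPONENT)
            auditory_level = stimulus_for_percept(target_percept, AUDITORY_EXPONENT)
            return visual_level, auditory_level

        if __name__ == "__main__":
            for err in (0.5, 1.0, 2.0, 4.0):
                v, a = feedback_pair(err)
                # Sanity check: both levels should map back to the same percept.
                assert abs(perceived_magnitude(v, VISUAL_EXPONENT)
                           - perceived_magnitude(a, AUDITORY_EXPONENT)) < 1e-9
                print(f"error {err:4.1f} cm -> visual {v:6.2f}, auditory {a:6.2f}")

    Whatever exponents are assumed, the point of the procedure is the same: the two feedback signals are scaled so that a given error feels equally large in both modalities before the training conditions are compared.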

  14. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention.

    Science.gov (United States)

    Forte, Antonio Elia; Etard, Octave; Reichenbach, Tobias

    2017-10-10

    Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.

  15. Abnormalities in auditory evoked potentials of 75 patients with Arnold-Chiari malformations types I and II

    Directory of Open Access Journals (Sweden)

    Henriques Filho Paulo Sergio A.

    2006-01-01

    Full Text Available OBJECTIVE: To evaluate the frequency and degree of severity of abnormalities in the auditory pathways in patients with Chiari malformations type I and II. METHOD: This is a descriptive case-series study in which the possible presence of auditory pathway abnormalities in 75 patients (48 children and 27 adults) with Chiari malformation types I and II was analyzed by means of auditory evoked potentials. The analysis was based on the determination of interpeak intervals, absolute latencies, and the amplitude ratio between waves V and I. RESULTS: Among the 75 patients studied, 27 (36%) had Arnold-Chiari malformation type I and 48 (64%) had Arnold-Chiari malformation type II. Fifty-three (71%) of these patients showed some degree of auditory evoked potential abnormality. Tests were normal in the remaining 22 (29%) patients. CONCLUSION: Auditory evoked potential testing can be considered a valuable instrument for the diagnosis and evaluation of brainstem functional abnormalities in patients with Arnold-Chiari malformations type I and II. The determination of the presence and degree of severity of these abnormalities can contribute to the prevention of further handicaps in these patients, either through physical therapy or by means of early corrective surgical intervention.
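
    The measures just named reduce to simple arithmetic once the peaks have been labeled. The Python sketch below uses hypothetical peak latencies and amplitudes, not data from the study, to show how the interpeak intervals and the V/I amplitude ratio would be derived:

        # Sketch: deriving auditory evoked potential measures from already-labeled
        # peaks. Peak values are hypothetical and for illustration only.

        from dataclasses import dataclass

        @dataclass
        class AbrPeak:
            latency_ms: float    # absolute latency of the peak
            amplitude_uv: float  # peak amplitude in microvolts

        def interpeak_intervals(peaks: dict[str, AbrPeak]) -> dict[str, float]:
            """I-III, III-V and I-V interpeak intervals in milliseconds."""
            return {
                "I-III": peaks["III"].latency_ms - peaks["I"].latency_ms,
                "III-V": peaks["V"].latency_ms - peaks["III"].latency_ms,
                "I-V": peaks["V"].latency_ms - peaks["I"].latency_ms,
            }

        def v_to_i_amplitude_ratio(peaks: dict[str, AbrPeak]) -> float:
            """Amplitude ratio between waves V and I."""
            return peaks["V"].amplitude_uv / peaks["I"].amplitude_uv

        if __name__ == "__main__":
            # Hypothetical single-ear recording.
            peaks = {
                "I": AbrPeak(latency_ms=1.6, amplitude_uv=0.30),
                "III": AbrPeak(latency_ms=3.8, amplitude_uv=0.25),
                "V": AbrPeak(latency_ms=5.7, amplitude_uv=0.45),
            }
            print({name: round(ms, 2) for name, ms in interpeak_intervals(peaks).items()})
            print(round(v_to_i_amplitude_ratio(peaks), 2))

    In clinical use each interval, latency and ratio would be compared against age-appropriate normative limits; the cut-offs themselves are laboratory-specific and are not given in the record above.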

  16. Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming

    Science.gov (United States)

    Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.

    2013-01-01

    Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the

  17. Central auditory processing outcome after stroke in children

    Directory of Open Access Journals (Sweden)

    Karla M. I. Freiria Elias

    2014-09-01

    Full Text Available Objective To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Method 23 children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and staggered spondaic word test (selective attention), and the pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to performance in auditory analysis ability. Results Similar performance between groups was verified in auditory closure ability, with pronounced deficits in selective attention and temporal processing abilities. Most children with stroke showed impaired auditory ability to a moderate degree. Conclusion Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.

  18. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  19. Auditory-vocal mirroring in songbirds.

    Science.gov (United States)

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  1. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

    To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors. Eighty-five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated. Statistical analysis considered a significance level of 5%. None of the professors indicated absence of noise. Responses were grouped into Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were the yard and other classes, which were classified as high intensity, as well as poor acoustics and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory and emotional complaints. There was also no association between referred noise and hearing complaints, although group A showed a higher occurrence of responses regarding noise nuisance, hearing difficulty, dizziness/vertigo, tinnitus, and earache. There was an association between referred noise and voice alterations, and group NA presented a higher percentage of cases with voice alterations than group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. The hearing complaints were more evident among professors in group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravation factor.

  2. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder.

    Science.gov (United States)

    Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed

    2016-03-01

    This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9-11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.

  3. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  4. A Study of the Central Auditory Function in Stutters by Masking Level Difference and Synthetic Sentence Identification Tests

    Directory of Open Access Journals (Sweden)

    Afsaneh Rajab

    2007-06-01

    Full Text Available Background and Aim: There is evidence indicating a relationship between auditory processing disorders and stuttering, and any disorder in central auditory function can be at least one of the underlying causes of stuttering. Even using the most state-of-the-art radiographic technologies, i.e. MRI, no definitive answer has been given to this question. In this research, the central auditory function of stutterers and a normal group was evaluated using the Masking Level Difference (MLD) and Synthetic Sentence Identification (SSI) tests. Materials and Methods: In this analytic cross-sectional study, fifteen male patients with stuttering and 15 male normal cases with an age range of 16 to 40 years (average age 26.78 years) were evaluated. The SSI-ICM, SSI-CCM and MLD tests were performed, and the results were compared between the two groups. Results: Although the stutterers' mean MLD was less than that of the normal group, the difference between stutterers and the normal group was not significant. In the SSI test in the right ear at negative MCRs, there was a significant difference in the ICM condition, but in the CCM condition there was no significant difference between the average scores of the two groups at various MCRs. Conclusion: The findings of this research are compatible with those of similar studies on the SSI test, and the pattern of results probably indicates a partial dysfunction of the brainstem in some of the stutterers.
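
    As a reminder of the first measure (this is the standard definition, not anything specific to this study), the masking level difference is the binaural release from masking obtained when the signal or the noise is made antiphasic across the ears:

        \mathrm{MLD} = T_{S_0 N_0} - T_{S_\pi N_0} \quad [\mathrm{dB}],

    where T denotes the masked detection threshold in the homophasic (S0N0) and antiphasic (SπN0) conditions; a reduced MLD is conventionally read as poorer binaural, brainstem-level processing, which is the interpretation the authors lean toward for some of the stutterers.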

  5. Acquisition, Analyses and Interpretation of fMRI Data: A Study on the Effective Connectivity in Human Primary Auditory Cortices

    International Nuclear Information System (INIS)

    Ahmad Nazlim Yusoff; Mazlyfarina Mohamad; Khairiah Abdul Hamid

    2011-01-01

    A study on the effective connectivity characteristics in the auditory cortices was conducted on five healthy Malay male subjects aged 20 to 40 years using functional magnetic resonance imaging (fMRI), statistical parametric mapping (SPM5) and dynamic causal modelling (DCM). A silent imaging paradigm was used to reduce scanner sound artefacts on the functional images. The subjects were instructed to pay attention to a white noise stimulus given binaurally at an intensity level 70 dB above the normal hearing level. Functional specialisation was studied using the Matlab-based SPM5 software by means of fixed effects (FFX), random effects (RFX) and conjunction analyses. Individual analyses on all subjects indicate asymmetrical bilateral activation between the left and right auditory cortices in Brodmann areas (BA) 22, 41 and 42, involving the primary and secondary auditory cortices. Three auditory areas in the right and left auditory cortices were selected for the determination of effective connectivity by constructing 9 network models. Effective connectivity was determined for four out of five subjects; the remaining subject was excluded because his BA22 coordinates were located too far from the BA22 coordinates obtained from the group analysis. The DCM results showed the existence of effective connectivity between the three selected auditory areas in both auditory cortices. In the right auditory cortex, BA42 is identified as the input centre, with unidirectional parallel effective connectivities BA42→BA41 and BA42→BA22. However, for the left auditory cortex, the input is BA41, with unidirectional parallel effective connectivities BA41→BA42 and BA41→BA22. The connectivity between the activated auditory areas suggests the existence of a signal pathway in the auditory cortices even when the subject is listening to noise. (author)
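
    The record does not spell out how the nine candidate networks were specified. Purely as a conceptual illustration (this is not the SPM5/DCM interface, and the model names and edge sets are hypothetical), the Python sketch below shows how directed-connectivity hypotheses among the three regions could be laid out as adjacency matrices before model comparison:

        # Conceptual sketch only: candidate directed-connectivity models among three
        # auditory regions, written as adjacency matrices. These are not the nine
        # models used in the study.

        REGIONS = ["BA41", "BA42", "BA22"]

        def adjacency(edges):
            """Return a matrix A with A[i][j] = 1 for a connection REGIONS[j] -> REGIONS[i]."""
            a = [[0] * len(REGIONS) for _ in REGIONS]
            for src, dst in edges:
                a[REGIONS.index(dst)][REGIONS.index(src)] = 1
            return a

        CANDIDATE_MODELS = {
            # Parallel fan-out from BA42, cf. the right-hemisphere result above.
            "fan_out_from_BA42": [("BA42", "BA41"), ("BA42", "BA22")],
            # Parallel fan-out from BA41, cf. the left-hemisphere result above.
            "fan_out_from_BA41": [("BA41", "BA42"), ("BA41", "BA22")],
            # A serial alternative, included for contrast.
            "serial_41_to_42_to_22": [("BA41", "BA42"), ("BA42", "BA22")],
        }

        if __name__ == "__main__":
            for name, edges in CANDIDATE_MODELS.items():
                print(name)
                for target, row in zip(REGIONS, adjacency(edges)):
                    print(f"  inputs to {target}: {row}")

    In a DCM analysis each such candidate graph would be fitted to the fMRI time series and the models compared by their evidence; the winning patterns reported above are the fan-outs from BA42 (right hemisphere) and BA41 (left hemisphere).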

  6. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners with Normal Hearing

    Science.gov (United States)

    Millman, Rebecca E.; Mattys, Sven L.

    2017-01-01

    Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the…

  7. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    Science.gov (United States)

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high pass and low pass noise) in nine perceived positions. Working memory capacity was evaluated using the non-word repetition, and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and the localization tests in the two groups. Children in the APD group had consistently lower scores than typically developing subjects in lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
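
    For context on the first cue, the classic Woodworth spherical-head approximation gives the interaural time difference expected for a far-field source at a given azimuth. The head radius and speed of sound below are textbook values, and this is not the stimulus-generation procedure used in the study (which manipulated interaural time and intensity differences directly over headphones):

        # Sketch: interaural time difference (ITD) from the Woodworth spherical-head
        # approximation. Head radius and speed of sound are assumed textbook values.

        import math

        HEAD_RADIUS_M = 0.0875   # assumed average head radius in metres
        SPEED_OF_SOUND = 343.0   # metres per second at roughly 20 degrees Celsius

        def woodworth_itd(azimuth_deg: float) -> float:
            """ITD in seconds for a far-field source; 0 degrees is straight ahead."""
            theta = math.radians(azimuth_deg)
            return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

        if __name__ == "__main__":
            for az in (0, 30, 60, 90):
                print(f"{az:2d} deg -> ITD ~ {woodworth_itd(az) * 1e6:5.0f} microseconds")

    With these values the ITD grows from zero at the midline to roughly 650 microseconds at 90 degrees, which is the order of magnitude of the largest interaural delays a head of that size can produce; intensity differences, the other cue used in the study, are strongly frequency dependent and are usually imposed directly in decibels.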

  8. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  9. Association of blood antioxidants status with visual and auditory sustained attention.

    Science.gov (United States)

    Shiraseb, Farideh; Siassi, Fereydoun; Sotoudeh, Gity; Qorbani, Mostafa; Rostami, Reza; Sadeghi-Firoozabadi, Vahid; Narmaki, Elham

    2015-01-01

    A low antioxidant status has been shown to result in oxidative stress and cognitive impairment. Because antioxidants can protect the nervous system, it is expected that a better blood antioxidant status might be related to sustained attention. However, the relationship between the blood antioxidant status and visual and auditory sustained attention has not been investigated. The aim of this study was to evaluate the association of fruit and vegetable intake and the blood antioxidant status with visual and auditory sustained attention in women. This cross-sectional study was performed on 400 healthy women (20-50 years) who attended the sports clubs of Tehran Municipality. Sustained attention was evaluated based on the Integrated Visual and Auditory Continuous Performance Test using the Integrated Visual and Auditory (IVA) software. The 24-hour food recall questionnaire was used for estimating fruit and vegetable intake. Serum total antioxidant capacity (TAC), and erythrocyte superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities were measured in 90 participants. After adjusting for energy intake, age, body mass index (BMI), years of education and physical activity, higher reported fruit and vegetable intake was associated with better visual and auditory sustained attention. The blood antioxidant measures were likewise associated with visual and auditory sustained attention after adjusting for age, years of education, physical activity, energy, BMI, and caffeine intake. In conclusion, better visual and auditory sustained attention is associated with a better blood antioxidant status. Therefore, improvement of the antioxidant status through an appropriate dietary intake can possibly enhance sustained attention.

  10. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Gender Difference in TEOAEs and Contralateral Suppression of TEOAEs in Normal Hearing Adults

    Directory of Open Access Journals (Sweden)

    Farzaneh Zamiri Abdollahi

    2011-10-01

    Full Text Available Objectives: Otoacoustic emissions (OAEs) are sounds that originate in the cochlea, are measured in the external auditory canal, and provide a simple, efficient and non-invasive objective indicator of healthy cochlear function. The olivocochlear bundle (OCB), or auditory efferent system, is a neural feedback pathway that originates in the brainstem and terminates in the inner ear; it can be evaluated non-invasively by applying a contralateral acoustic stimulus and simultaneously measuring the reduction of OAE amplitude. In this study, gender differences in TEOAE amplitude and suppression of TEOAEs were investigated. Methods: This study was performed at the Akhavan rehabilitation centre belonging to the University of Social Welfare and Rehabilitation Sciences, Tehran, Iran in 2011. 60 young adults (30 female and 30 male) between 21 and 27 years old (mean=24 years, SD=1.661) meeting normal hearing criteria were selected. The right ear of all cases was tested to neutralize any ear-side effect. Results: According to the independent t-test, TEOAE amplitude was significantly greater in females, with a mean value of 24.98 dB (P<0.001), and TEOAE suppression was significantly greater in males, with a mean value of 2.07 dB (P<0.001). Discussion: This study shows that there is a significant gender difference in adults' TEOAEs (cochlear mechanisms) and TEOAE suppression (auditory efferent system). The exact reason for these results is not clear. According to this study, different norms for males and females might be necessary.

  12. Auditory motion-specific mechanisms in the primate brain.

    Directory of Open Access Journals (Sweden)

    Colline Poirier

    2017-05-01

    Full Text Available This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.

  13. The effect of compression on tuning estimates in a simple nonlinear auditory filter model

    DEFF Research Database (Denmark)

    Marschall, Marton; MacDonald, Ewen; Dau, Torsten

    2013-01-01

    Behavioral experiments using auditory masking have been used to characterize frequency selectivity, one of the basic properties of the auditory system. However, due to the nonlinear response of the basilar membrane, the interpretation of these experiments may not be straightforward. Specifically,...

  14. Effects of emotionally charged auditory stimulation on gait performance in the elderly: a preliminary study.

    Science.gov (United States)

    Rizzo, John-Ross; Raghavan, Preeti; McCrery, J R; Oh-Park, Mooyeon; Verghese, Joe

    2015-04-01

    To evaluate the effect of a novel divided attention task (walking under auditory constraints) on gait performance in older adults and to determine whether this effect was moderated by cognitive status. Validation cohort. General community. Ambulatory older adults without dementia (N=104). Not applicable. In this pilot study, we evaluated walking under auditory constraints in 104 older adults who completed 3 pairs of walking trials on a gait mat under 1 of 3 randomly assigned conditions: 1 pair without auditory stimulation and 2 pairs with emotionally charged auditory stimulation with happy or sad sounds. The mean age of subjects was 80.6±4.9 years, and 63% (n=66) were women. The mean velocity during normal walking was 97.9±20.6 cm/s, and the mean cadence was 105.1±9.9 steps/min. The effect of walking under auditory constraints on gait characteristics was analyzed using a 2-factorial analysis of variance with a 1-between factor (cognitively intact and minimal cognitive impairment groups) and a 1-within factor (type of auditory stimuli). In both happy and sad auditory stimulation trials, cognitively intact older adults (n=96) showed an average increase of 2.68 cm/s in gait velocity (F1.86,191.71=3.99; P=.02) and an average increase of 2.41 steps/min in cadence (F1.75,180.42=10.12; P…); … activities of daily living accounted for these differences. Our results provide preliminary evidence of the differentiating effect of emotionally charged auditory stimuli on gait performance in older individuals with minimal cognitive impairment compared with those without minimal cognitive impairment. A divided attention task using emotionally charged auditory stimuli might be able to elicit compensatory improvement in gait performance in cognitively intact older individuals, but lead to decompensation in those with minimal cognitive impairment. Further investigation is needed to compare gait performance under this task to gait on other dual-task paradigms and to separately examine the

  15. Auditory processing in dysphonic children.

    Directory of Open Access Journals (Sweden)

    Mirian Aratangy Arnaut

    2011-06-01

    Full Text Available Contemporary cross-sectional cohort study. There is evidence of the influence of auditory perception on the development of oral and written language, as well as on the self-perception of vocal conditions. The maturation of the auditory system can interfere in this process. OBJECTIVE: To characterize the auditory skills of temporal ordering and localization in dysphonic children. MATERIALS AND METHODS: We assessed 42 children (4 to 8 years). Study group: 31 dysphonic children; comparison group: 11 children without vocal complaints. All had normal auditory thresholds and normal cochleo-eyelid reflexes. They underwent a simplified assessment of auditory processing (Pereira, 1993). To compare the groups, we used the Mann-Whitney and Kruskal-Wallis statistical tests, with a significance level of 0.05 (5%). RESULTS: On the simplified assessment, 100% of the comparison group and 61.29% of the study group had normal results. The groups were similar in the localization and verbal sequential memory tests. The nonverbal sequential memory test showed worse results in the dysphonic children; within this group, performance was worse among the four- to six-year-olds. CONCLUSION: The dysphonic children showed changes in the localization or temporal ordering skills, and the skill of non-verbal temporal ordering differentiated the dysphonic group. In this group, sound localization improved with age.

  16. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  17. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    Science.gov (United States)

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  18. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the presence of more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A (p…). The visual cortices of binocularly blind macaques can thus be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Left auditory cortex gamma synchronization and auditory hallucination symptoms in schizophrenia

    Directory of Open Access Journals (Sweden)

    Shenton Martha E

    2009-07-01

    Full Text Available Abstract Background Oscillatory electroencephalogram (EEG) abnormalities may reflect neural circuit dysfunction in neuropsychiatric disorders. Previously we have found positive correlations between the phase synchronization of beta and gamma oscillations and hallucination symptoms in schizophrenia patients. These findings suggest that the propensity for hallucinations is associated with an increased tendency for neural circuits in sensory cortex to enter states of oscillatory synchrony. Here we tested this hypothesis by examining whether the 40 Hz auditory steady-state response (ASSR) generated in the left primary auditory cortex is positively correlated with auditory hallucination symptoms in schizophrenia. We also examined whether the 40 Hz ASSR deficit in schizophrenia was associated with cross-frequency interactions. Sixteen healthy control subjects (HC) and 18 chronic schizophrenia patients (SZ) listened to 40 Hz binaural click trains. The EEG was recorded from 60 electrodes and average-referenced offline. A 5-dipole model was fit to the HC grand average ASSR, with 2 pairs of superior temporal dipoles and a deep midline dipole. Time-frequency decomposition was performed on the scalp EEG and source data. Results Phase locking factor (PLF) and evoked power were reduced in SZ at fronto-central electrodes, replicating prior findings. PLF was reduced in SZ for non-homologous right and left hemisphere sources. Left hemisphere source PLF in SZ was positively correlated with auditory hallucination symptoms and was modulated by delta phase. Furthermore, the correlations between source evoked power and PLF found in HC were reduced in SZ for the left hemisphere sources. Conclusion These findings suggest that differential neural circuit abnormalities may be present in the left and right auditory cortices in schizophrenia. In addition, they provide further support for the hypothesis that hallucinations are related to cortical hyperexcitability, manifested as an increased tendency of sensory cortical circuits to enter states of oscillatory synchrony.
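
    The phase locking factor reported above is the magnitude of the trial-averaged unit phase vector at the stimulation frequency. The sketch below is a minimal, generic single-channel implementation of that quantity, not the study's actual analysis pipeline; the function name, wavelet parameters, and synthetic data are illustrative assumptions.

```python
import numpy as np

def phase_locking_factor(trials, fs, freq=40.0, n_cycles=7):
    """Inter-trial phase coherence (PLF) at one frequency.

    trials : array of shape (n_trials, n_samples), single-channel epochs.
    Returns the PLF over time, with values between 0 and 1.
    """
    n_trials, n_samples = trials.shape
    t = (np.arange(n_samples) - n_samples // 2) / fs
    sigma = n_cycles / (2 * np.pi * freq)                  # wavelet temporal width
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))

    unit_phases = np.empty((n_trials, n_samples), dtype=complex)
    for k, x in enumerate(trials):
        analytic = np.convolve(x, wavelet, mode="same")    # complex 40 Hz band signal
        unit_phases[k] = analytic / np.abs(analytic)       # keep phase, discard amplitude

    # PLF = length of the mean unit phase vector across trials.
    return np.abs(unit_phases.mean(axis=0))

# Example with synthetic epochs: 30 noisy trials containing a 40 Hz response.
rng = np.random.default_rng(0)
fs, n = 500, 1000
t = np.arange(n) / fs
trials = np.sin(2 * np.pi * 40 * t) + rng.normal(0, 2.0, size=(30, n))
print(phase_locking_factor(trials, fs).max())
```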

  20. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory

    Science.gov (United States)

    Kraus, Nina; Strait, Dana; Parbery-Clark, Alexandra

    2012-01-01

    Musicians benefit from real-life advantages such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians’ auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. PMID:22524346

  1. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  2. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  3. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    Science.gov (United States)

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  4. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  5. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Directory of Open Access Journals (Sweden)

    Yuko Hattori

    Full Text Available Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  6. Early continuous white noise exposure alters auditory spatial sensitivity and expression of GAD65 and GABAA receptor subunits in rat auditory cortex.

    Science.gov (United States)

    Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde

    2010-04-01

    Sensory experiences have important roles in the functional development of the mammalian auditory cortex. Here, we show how early continuous noise rearing influences spatial sensitivity in the rat primary auditory cortex (A1) and its underlying mechanisms. By rearing infant rat pups under conditions of continuous, moderate level white noise, we found that noise rearing markedly attenuated the spatial sensitivity of A1 neurons. Compared with rats reared under normal conditions, spike counts of A1 neurons were more poorly modulated by changes in stimulus location, and their preferred locations were distributed over a larger area. We further show that early continuous noise rearing induced significant decreases in glutamic acid decarboxylase 65 and gamma-aminobutyric acid (GABA)(A) receptor alpha1 subunit expression, and an increase in GABA(A) receptor alpha3 expression, which indicates a return to the juvenile form of the GABA(A) receptor, with no effect on the expression of N-methyl-D-aspartate receptors. These observations indicate that noise rearing has powerful adverse effects on the maturation of cortical GABAergic inhibition, which might be responsible for the reduced spatial sensitivity.

  7. Quantification of dendritic and axonal growth after injury to the auditory system of the adult cricket Gryllus bimaculatus

    Directory of Open Access Journals (Sweden)

    Alexandra ePfister

    2013-08-01

    Full Text Available Dendrite and axon growth and branching during development are regulated by a complex set of intracellular and external signals. However, the cues that maintain or influence adult neuronal morphology are less well understood. Injury and deafferentation tend to have negative effects on adult nervous systems. An interesting example of injury-induced compensatory growth is seen in the cricket, Gryllus bimaculatus. After unilateral loss of an ear in the adult cricket, auditory neurons within the central nervous system sprout to compensate for the injury. Specifically, after being deafferented, ascending neurons (AN-1 and AN-2) send dendrites across the midline of the prothoracic ganglion where they receive input from auditory afferents that project through the contralateral auditory nerve (N5). Deafferentation also triggers contralateral N5 axonal growth. In this study, we quantified AN dendritic and N5 axonal growth at 30 hours, as well as at 3, 5, 7, 14 and 20 days after deafferentation in adult crickets. Significant differences in the rates of dendritic growth between males and females were noted. In females, dendritic growth rates were non-linear; a rapid burst of dendritic extension in the first few days was followed by a plateau reached at 3 days after deafferentation. In males, however, dendritic growth rates were linear, with dendrites growing steadily over time and reaching lengths, on average, twice as long as in females. On the other hand, rates of N5 axonal growth showed no significant sexual dimorphism and were linear. Within each animal, the growth rates of dendrites and axons were not correlated, indicating that independent factors likely influence dendritic and axonal growth in response to injury in this system. Our findings provide a basis for future study of the cellular features that allow differing dendrite and axon growth patterns as well as sexually dimorphic dendritic growth in response to deafferentation.

  8. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder

    Directory of Open Access Journals (Sweden)

    Yones Lotfi

    2016-03-01

    Full Text Available Background: This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). Methods: The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9–11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests in the 2 groups. Results: The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. Conclusion: The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.

  9. The Effect of Noise on the Relationship Between Auditory Working Memory and Comprehension in School-Age Children.

    Science.gov (United States)

    Sullivan, Jessica R; Osman, Homira; Schafer, Erin C

    2015-06-01

    The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Children with normal hearing between the ages of 8 and 10 years were administered working memory and comprehension tasks in quiet and in noise. The comprehension measure comprised 5 domains: main idea, details, reasoning, vocabulary, and understanding messages. Performance on auditory working memory and comprehension tasks was significantly poorer in noise than in quiet, with the reasoning, details, understanding, and vocabulary subtests particularly affected in noise. The relationship between auditory working memory and comprehension was stronger in noise than in quiet, suggesting an increased contribution of working memory. These data suggest that school-age children's auditory working memory and comprehension are negatively affected by noise. Performance on comprehension tasks in noise is strongly related to the demands placed on working memory, supporting the theory that degraded listening conditions draw resources away from the primary task.
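
    The -5 dB SNR condition means the masking noise carries roughly three times the power of the speech signal. As a rough illustration (not the study's stimulus-preparation procedure), the sketch below scales a noise masker against a speech waveform to reach a target SNR; the function name and signals are assumptions for the example.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db=-5.0):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the mixture (both arrays assumed to be the same length)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_noise_power = p_speech / (10 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_noise_power / p_noise)

# Example: a 1 kHz tone standing in for speech, white noise as the masker.
fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 1000 * t)
noise = np.random.default_rng(1).normal(0, 0.05, fs)
mixture = mix_at_snr(speech, noise, snr_db=-5.0)
```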

  10. Auditory analysis for speech recognition based on physiological models

    Science.gov (United States)

    Jeon, Woojay; Juang, Biing-Hwang

    2004-05-01

    To address the limitations of traditional cepstrum or LPC based front-end processing methods for automatic speech recognition, more elaborate methods based on physiological models of the human auditory system may be used to achieve more robust speech recognition in adverse environments. For this purpose, a modified version of a model of the primary auditory cortex featuring a three dimensional mapping of auditory spectra [Wang and Shamma, IEEE Trans. Speech Audio Process. 3, 382-395 (1995)] is adopted and investigated for its use as an improved front-end processing method. The study is conducted in two ways: first, by relating the model's redundant representation to traditional spectral representations and showing that the former not only encompasses information provided by the latter, but also reveals more relevant information that makes it superior in describing the identifying features of speech signals; and second, by observing the statistical features of the representation for various classes of sound to show how different identifying features manifest themselves as specific patterns on the cortical map, thereby becoming a place-coded data set on which detection theory could be applied to simulate auditory perception and cognition.
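
    The cortical model described above maps an auditory spectrum into a multi-resolution representation of spectral and temporal modulations. The sketch below is not that model; it is a much coarser stand-in (an assumption for illustration) that computes a log spectrogram and takes its 2D Fourier transform to obtain a rate-scale-like modulation representation, conveying the general idea of analyzing a spectrogram along joint spectro-temporal modulation axes.

```python
import numpy as np
from scipy.signal import stft

def modulation_spectrum(x, fs, nperseg=512, noverlap=384):
    """Coarse spectro-temporal modulation representation of a signal.

    Computes a log-magnitude spectrogram, then takes its 2D Fourier
    transform; the axes of the result index temporal modulation ("rate")
    and spectral modulation ("scale").
    """
    f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    log_spec = np.log(np.abs(Z) + 1e-10)
    log_spec -= log_spec.mean()                      # remove the DC offset before the 2D FFT
    mod = np.fft.fftshift(np.abs(np.fft.fft2(log_spec)))
    return f, t, log_spec, mod

# Example: modulation spectrum of a slowly frequency-modulated tone.
fs = 16000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * (500 + 50 * np.sin(2 * np.pi * 4 * t)) * t)
_, _, spec, mod = modulation_spectrum(x, fs)
print(spec.shape, mod.shape)
```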

  11. Using auditory steady state responses to outline the functional connectivity in the tinnitus brain.

    Directory of Open Access Journals (Sweden)

    Winfried Schlee

    Full Text Available BACKGROUND: Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system. However, it was recently suggested that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. METHODS AND FINDINGS: Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe and phase couplings between the anterior cingulum and the right parietal lobe showed significant condition x group interactions and were correlated with individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. CONCLUSIONS: To the best of our knowledge, this is the first study to demonstrate the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that the global extent of this tinnitus network is crucial for the continuous perception of the tinnitus tone, and a therapeutic intervention able to change this network should result in relief of tinnitus.
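
    Long-range functional connectivity of the kind described above is commonly quantified with a phase-locking measure between band-limited signals. The sketch below shows one generic variant (band-pass filtering plus the Hilbert transform); it is an illustrative assumption, not the MEG source-space synchronization pipeline used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(8.0, 12.0)):
    """Pairwise phase synchronization between two signals in a frequency band.

    Band-pass both signals, extract instantaneous phase with the Hilbert
    transform, and return the magnitude of the mean phase-difference vector
    (1 = perfect locking, 0 = no consistent phase relation).
    """
    b, a = butter(4, band, btype="band", fs=fs)
    phi_x = np.angle(hilbert(filtfilt(b, a, x)))
    phi_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Example: two noisy copies of the same 10 Hz rhythm are strongly phase locked.
rng = np.random.default_rng(2)
fs, n = 250, 5000
t = np.arange(n) / fs
common = np.sin(2 * np.pi * 10 * t)
x = common + rng.normal(0, 1.0, n)
y = common + rng.normal(0, 1.0, n)
print(phase_locking_value(x, y, fs))
```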

  12. Auditory conflict and congruence in frontotemporal dementia.

    Science.gov (United States)

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which the semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas, and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  13. Auditory sensory ("echoic") memory dysfunction in schizophrenia.

    Science.gov (United States)

    Strous, R D; Cowan, N; Ritter, W; Javitt, D C

    1995-10-01

    Studies of working memory dysfunction in schizophrenia have focused largely on prefrontal components. This study investigated the integrity of auditory sensory ("echoic") memory, a component that shows little dependence on prefrontal functioning. Echoic memory was investigated in 20 schizophrenic subjects and 20 age- and IQ-matched normal comparison subjects with the use of nondelayed and delayed tone matching. Schizophrenic subjects were markedly impaired in their ability to match two tones after an extremely brief delay between them (300 msec) but were unimpaired when there was no delay between tones. Working memory dysfunction in schizophrenia affects brain regions outside the prefrontal cortex as well as within.

  14. Effect of age at cochlear implantation on auditory and speech development of children with auditory neuropathy spectrum disorder.

    Science.gov (United States)

    Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng

    2014-12-01

    To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. All subjects had no cochlear nerve deficiency on magnetic resonance imaging and had used the cochlear implants for a period of 12-84 months. We divided our children into two groups: children who underwent implantation before 24 months of age and children who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (HL) (500Hz, 1000Hz, 2000Hz and 4000Hz) of aided hearing thresholds ranged from 17.5 to 57.5dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good optional treatment for many ANSD children. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months. Copyright © 2014
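
    The 4-frequency average quoted above is simply the mean aided threshold at 500, 1000, 2000 and 4000 Hz. A minimal sketch of that arithmetic (with a hypothetical audiogram, not data from the study):

```python
def four_frequency_average(thresholds_db_hl):
    """Mean aided hearing threshold (dB HL) at 500, 1000, 2000 and 4000 Hz."""
    freqs = (500, 1000, 2000, 4000)
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

# Hypothetical aided audiogram for one implanted child.
print(four_frequency_average({500: 30, 1000: 35, 2000: 40, 4000: 45}))  # 37.5
```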

  15. Auditory-Motor Interactions in Pediatric Motor Speech Disorders: Neurocomputational Modeling of Disordered Development

    Science.gov (United States)

    Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.

    2014-01-01

    Background/Purpose Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. Method In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Results Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. Conclusions These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630

  16. Effects of Auditory Stimuli on Visual Velocity Perception

    Directory of Open Access Journals (Sweden)

    Michiaki Shibata

    2011-10-01

    Full Text Available We investigated the effects of auditory stimuli on the perceived velocity of a moving visual stimulus. Previous studies have reported that the duration of visual events is perceived as being longer for events filled with auditory stimuli than for events not filled with auditory stimuli, i.e., the so-called “filled-duration illusion.” In this study, we have shown that auditory stimuli also affect the perceived velocity of a moving visual stimulus. In Experiment 1, a moving comparison stimulus (4.2∼5.8 deg/s) was presented together with filled (or unfilled) white-noise bursts or with no sound. The standard stimulus was a moving visual stimulus (5 deg/s) presented before or after the comparison stimulus. The participants had to judge which stimulus was moving faster. The results showed that the perceived velocity in the auditory-filled condition was lower than that in the auditory-unfilled and no-sound conditions. In Experiment 2, we investigated the effects of auditory stimuli on velocity adaptation. The results showed that the effects of velocity adaptation in the auditory-filled condition were weaker than those in the no-sound condition. These results indicate that auditory stimuli tend to decrease the perceived velocity of a moving visual stimulus.

  17. Topological resilience in non-normal networked systems

    Science.gov (United States)

    Asllani, Malbor; Carletti, Timoteo

    2018-04-01

    The network of interactions in complex systems strongly influences their resilience, i.e., the system's capability to resist external perturbations or structural damage and to recover promptly thereafter. The phenomenon manifests itself in different domains, e.g., parasitic species invasion in ecosystems or cascade failures in human-made networks. Understanding the topological features of networks that affect the resilience phenomenon remains a challenging goal for the design of robust complex systems. We hereby introduce the concept of non-normal networks, namely networks whose adjacency matrices are non-normal, propose a generating model, and show that this feature can drastically change the global dynamics through an amplification of the system's response to exogenous disturbances, eventually impacting the system's resilience. This early-stage transient amplification can induce the formation of inhomogeneous patterns, even in systems involving a single diffusing agent, thus providing a new kind of dynamical instability complementary to the Turing instability. We provide, first, an illustrative application of this result to ecology, proposing a mechanism that mutes the Allee effect, and, second, a model of virus spreading in a population of commuters moving over a non-normal transport network, the London Tube.
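
    A real matrix A is normal when it commutes with its transpose, so a quick numerical check of the property the abstract relies on is the Frobenius norm of the commutator A Aᵀ - Aᵀ A, which vanishes for undirected (symmetric) adjacency matrices and is positive for strongly directed ones. This is a generic illustration, not the authors' generating model or their specific non-normality measure; the function name and toy networks are assumptions.

```python
import numpy as np

def normality_departure(A):
    """Frobenius norm of the commutator A A^T - A^T A.

    Zero for normal matrices (e.g., symmetric adjacency matrices of
    undirected networks); strictly positive for non-normal ones.
    """
    return np.linalg.norm(A @ A.T - A.T @ A, ord="fro")

# Undirected (symmetric) 3-node chain: normal.
undirected = np.array([[0, 1, 0],
                       [1, 0, 1],
                       [0, 1, 0]], dtype=float)

# Directed feed-forward chain 1 -> 2 -> 3: non-normal.
directed = np.array([[0, 1, 0],
                     [0, 0, 1],
                     [0, 0, 0]], dtype=float)

print(normality_departure(undirected))  # 0.0
print(normality_departure(directed))    # > 0
```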

  18. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    Science.gov (United States)

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
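
    One practical reading of the individual resonance idea above is to record steady-state responses at several stimulation rates and take the rate that evokes the strongest spectral response as the individual peak. The sketch below does this on synthetic data; the function names, parameters, and epochs are assumptions for illustration, not a procedure taken from the review.

```python
import numpy as np

def assr_power_at_rate(eeg, fs, rate_hz):
    """Spectral power of an averaged EEG epoch at the stimulation rate."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - rate_hz))]

def individual_assr_peak(recordings, fs):
    """Pick the stimulation rate with the strongest steady-state response.

    `recordings` maps stimulation rate (Hz) -> averaged EEG epoch recorded
    while stimulating at that rate.
    """
    return max(recordings, key=lambda r: assr_power_at_rate(recordings[r], fs, r))

# Example with synthetic epochs: the 42 Hz response is the strongest.
fs, n = 500, 2000
t = np.arange(n) / fs
rng = np.random.default_rng(4)
recordings = {r: (1.5 if r == 42 else 1.0) * np.sin(2 * np.pi * r * t)
              + rng.normal(0, 0.5, n) for r in (38, 40, 42, 44, 46)}
print(individual_assr_peak(recordings, fs))  # expected: 42
```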

  19. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    Full Text Available BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) during three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  20. Classification of passive auditory event-related potentials using discriminant analysis and self-organizing feature maps.

    Science.gov (United States)

    Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M

    2000-01-01

    Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on these values. The SOFM yielded better classification results than the DA methods. Subsequently, measures from another 37 subjects that were unknown to the trained SOFM were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
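
    As a rough sketch of the classification setup (not a reimplementation of the study's discriminant analysis or its self-organizing feature map), the snippet below cross-validates a linear discriminant classifier on synthetic 18-dimensional ERP amplitude vectors for three groups of the sizes reported above; the data, parameter choices, and resulting accuracy are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 46 subjects x 18 ERP amplitude parameters, 3 groups
# (severe deficits, marked deficits, controls), mirroring the study design.
rng = np.random.default_rng(3)
n_per_group, n_features = (16, 16, 14), 18
X = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n, n_features))
               for n, shift in zip(n_per_group, (0.0, 0.5, 1.0))])
y = np.repeat([0, 1, 2], n_per_group)

# Linear discriminant analysis as the baseline classifier; the study found
# that a self-organizing feature map outperformed it on the real data.
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```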