White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina
Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions; children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and in background noise in a cohort of typically developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, and slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally dynamic speech features is more tenuous in noise than the coding of static features, even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response
Full Text Available BACKGROUND: The issue of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli differing along the multiple dimensions that make up timbre. We investigated the auditory response as well as sensory gating, using magnetoencephalography (MEG). METHODOLOGY/PRINCIPAL FINDINGS: Thirty-five healthy subjects without hearing deficit participated in the experiments. Two tones, either the same or different in timbre, were presented as a pair in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. As a result, the magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result may support the view that timbre, at least as varied by phasing and clipping, is discriminated in early auditory processing. The effect of S1 on the response to the second stimulus in a pair occurred in the M100 of the left hemisphere, whereas only in the right hemisphere did both M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, they reveal that auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli in a pair are identical in timbre.
Watrous, Betty Springer; And Others
Forty infants, 3 to 12 months old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3-5 months, 6-8 months, and 9-12 months. (Author)
Källstrand, Johan; Nehlstedt, Sara Fristedt; Sköld, Mia Ling; Nielzén, Sören
Individuals diagnosed with schizophrenia show deficiencies in basic neurophysiological sorting mechanisms. This study further investigated this issue, focusing on two phenomena: laterality of coding and auditory forward masking. A specific audiometric method developed for use in psychiatry was used to register auditory brainstem responses (ABRs). A sample of 49 schizophrenic patients was compared with four control groups consisting of healthy reference subjects (n=49), attention deficit hyperactivity disorder (ADHD) patients (n=29), Asperger syndrome (AS) patients (n=13) and drug-induced psychotic patients (n=14). Schizophrenic patients showed significantly abnormal laterality of brainstem activity in wave II of the ABR in comparison with all other study groups. Forward masking effects in the superior olivary complex were coded significantly differently by schizophrenic patients compared to the control groups, except for the AS group. The results suggest deficits in the coding of auditory stimuli in the lower parts of the auditory pathway in schizophrenia and indicate that increased peripheral lateral asymmetry and forward masking aberrances could be neurophysiological markers for the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.
Lau, S.K.; Wei, W.I.; Sham, J.S.T.; Choy, D.T.K.; Hui, Y. (Queen Mary Hospital, Hong Kong (Hong Kong))
A prospective study of the effect of radiotherapy for nasopharyngeal carcinoma on hearing was carried out on 49 patients who had pure tone and impedance audiometry and auditory brainstem evoked response (ABR) recordings before, immediately after, and three, six and 12 months after radiotherapy. Fourteen patients complained of intermittent tinnitus after radiotherapy. We found that 11 initially normal ears of nine patients developed a middle ear effusion three to six months after radiotherapy. There was mixed sensorineural and conductive hearing impairment after radiotherapy. Persistent impairment of the ABR was detected immediately after completion of radiotherapy. The wave I-III and I-V interpeak latency intervals were significantly prolonged one year after radiotherapy. The study shows that radiotherapy for nasopharyngeal carcinoma impairs hearing by acting on the middle ear, the cochlea and the brainstem auditory pathway. (Author).
Ali Akbar Tahaei
Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives: To determine whether brainstem responses to speech stimuli differ between PDS subjects and normally fluent speakers. Methods: Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results: There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions: Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and their abnormal timing may underlie their disfluency.
Bottari, Davide; Heimler, Benedetta; Caclin, Anne; Dalmolin, Anna; Giard, Marie-Hélène; Pavani, Francesco
Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in the early deaf was paired with a reduction of the response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing the visual information and in comparing upcoming visual events online, thus indicating that cross-modally recruited auditory cortices can reach this level of computation.
Wirtssohn, Sarah; Ronacher, Bernhard
Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single-click and click-pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first- and second-order interneurons. The response to the second click was expressed relative to the single-click response. This allowed the uncovering of the basic temporal resolution in these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominently, a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by another click a few milliseconds earlier. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period.
Weaver, Kurt E; Stevens, Alexander A
For blind individuals, audition provides critical information for interacting with the environment. Individuals blinded early in life (EB) typically show enhanced auditory abilities relative to sighted controls as measured by tasks requiring complex discrimination, attention and memory. In contrast, few deficits have been reported on tasks involving auditory sensory thresholds (e.g., Yates, J.T., Johnson, R.M., Starz, W.J., 1972. Loudness perception of the blind. Audiology 11(5), 368-376; Starlinger, I., Niemeyer, W., 1981. Do the blind hear better? Investigations on auditory processing in congenital or early acquired blindness. I. Peripheral functions. Audiology 20(6), 503-509). A study of gap detection stands at odds with this distinction [Muchnik, C., Efrati, M., Nemeth, E., Malin, M., Hildesheimer, M., 1991. Central auditory skills in blind and sighted subjects. Scand. Audiol. 20(1), 19-23]. In the current investigation we re-examined gap detection abilities in the EB using a single-interval, yes/no method. A group of younger sighted control individuals (SCy) was included in the analysis in addition to EB and sighted age matched control individuals (SCm) in order to examine the effect of age on gap detection performance. Estimates of gap detection thresholds for EB subjects were nearly identical to SCm subjects and slightly poorer relative to the SCy subjects. These results suggest some limits on the extent of auditory temporal advantages in the EB.
Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin
Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the fact that there were more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A. These findings suggest that the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices.
Full Text Available Abstract Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.
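The "repeating frozen noise" stimuli described above are easy to picture in code: one fixed noise snippet of length 1/rate is tiled to fill the stimulus, so the waveform repeats exactly at the chosen rate. The sketch below is an illustrative reconstruction, not the authors' stimulus code, and the function name `frozen_noise` is hypothetical.

```python
import numpy as np

def frozen_noise(fs, duration, repetition_rate, seed=0):
    """One Gaussian-noise snippet of length 1/repetition_rate seconds,
    tiled to fill the requested duration (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    snippet_len = int(fs / repetition_rate)   # samples per repetition
    snippet = rng.standard_normal(snippet_len)
    n_total = int(fs * duration)
    n_reps = -(-n_total // snippet_len)       # ceiling division
    return np.tile(snippet, n_reps)[:n_total]

# A 1-s stimulus at fs = 40 kHz with a 5 Hz repetition rate repeats the
# same 8000-sample snippet five times; the aperiodic white-noise control
# would simply be a fresh, untiled draw of the same length.
stimulus = frozen_noise(fs=40000, duration=1.0, repetition_rate=5)
```

Unlike a click train, this stimulus has a flat long-term spectrum, so any response difference from white noise isolates the effect of temporal regularity itself.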
Full Text Available Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the levels of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, it seems caffeine can change conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2 and 3 mg/kg body weight caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed by Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared to the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg body weight caffeine. Wave I latency decreased significantly after consumption of 3 mg/kg body weight caffeine (p<0.01). Conclusion: The increase in glutamate levels resulting from adenosine receptor blockade brings about changes in conduction in the central auditory pathway.
Kwapien, J; Liu, L C; Ioannides, A A
Simultaneous estimates of the activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right and binaural stimulation were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analysed using the concept of mutual information. The analysis constitutes an objective method to address the nature of inter-hemispheric correlations in response to auditory stimulation. The results provide clear evidence for the occurrence of such correlations mediated by direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the inter-hemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
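The lag analysis above (one hemisphere "leading" the other by 10-20 ms) can be illustrated with a plug-in histogram estimate of mutual information swept over candidate lags between two time series. This is a hedged sketch of the general technique, not the authors' implementation; `mutual_information` and `best_lag` are hypothetical names, and real MEG analyses would add bias correction and significance testing.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in (histogram) estimate of mutual information, in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def best_lag(x, y, max_lag):
    """Lag (in samples) of y relative to x that maximizes mutual information."""
    lags = list(range(-max_lag, max_lag + 1))
    mis = []
    for lag in lags:
        if lag >= 0:                      # y delayed relative to x
            mis.append(mutual_information(x[:len(x) - lag], y[lag:]))
        else:                             # y leads x
            mis.append(mutual_information(x[-lag:], y[:len(y) + lag]))
    return lags[int(np.argmax(mis))]
```

Unlike cross-correlation, mutual information also captures nonlinear dependencies, which is one reason it is preferred for characterizing coupling strength beyond what the trial-averaged signal shows.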
Large, Edward W; Almonte, Felix V
Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition.
Aldonate, J; Mercuri, C; Reta, J; Biurrun, J; Bonell, C; Gentiletti, G; Escobar, S; Acevedo, R [Laboratorio de Ingenieria en Rehabilitacion e Investigaciones Neuromusculares y Sensoriales (Argentina); Facultad de Ingenieria, Universidad Nacional de Entre Rios, Ruta 11 - Km 10, Oro Verde, Entre Rios (Argentina)
Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns is presented. Preliminary evaluation in adults was satisfactory.
Carlos A. M. Guerreiro
Full Text Available The technique that we use for eliciting brainstem auditory evoked responses (BAERs) is described. BAERs are a non-invasive and reliable clinical test when carefully performed. This test is indicated in the evaluation of disorders which may potentially involve the brainstem, such as coma, multiple sclerosis, posterior fossa tumors and others. Unsuspected lesions with normal radiologic studies (including CT scan) can be revealed by the BAER.
Kato, Fumi; Iwanaga, Ryoichiro; Chono, Mami; Fujihara, Saori; Tokunaga, Akiko; Murata, Jun; Tanaka, Koji; Nakane, Hideyuki; Tanaka, Goro
[Purpose] Auditory hypersensitivity has been widely reported in patients with autism spectrum disorders. However, the neurological background of auditory hypersensitivity is currently not clear. The present study examined the relationship between sympathetic nervous system responses and auditory hypersensitivity induced by different types of auditory stimuli. [Methods] We exposed 20 healthy young adults to six different types of auditory stimuli. The amounts of palmar sweating resulting from the auditory stimuli were compared between groups with (hypersensitive) and without (non-hypersensitive) auditory hypersensitivity. [Results] Although no group × type of stimulus × first stimulus interaction was observed for the extent of reaction, a significant type of stimulus × first stimulus interaction was noted. For an 80 dB-6,000 Hz stimulus, the trends for palmar sweating differed between the groups. For the first stimulus, the variance was larger in the hypersensitive group than in the non-hypersensitive group. [Conclusion] Subjects who regularly experienced excessive reactions to auditory stimuli tended to have excessive sympathetic responses to repeated loud noises compared with subjects who did not. People with auditory hypersensitivity may be classified into several subtypes depending on their reaction patterns to auditory stimuli.
Karns, Christina M; Knight, Robert T
We used event-related potentials (ERPs) and gamma band oscillatory responses (GBRs) to examine whether intermodal attention operates early in the auditory, visual, and tactile modalities. To control for the effects of spatial attention, we spatially coregistered all stimuli and varied the attended modality across counterbalanced blocks in an intermodal selection task. In each block, participants selectively responded to either auditory, visual, or vibrotactile stimuli from the stream of intermodal events. Auditory and visual ERPs were modulated at the latencies of early cortical processing, but attention manifested later for tactile ERPs. For ERPs, auditory processing was modulated at the latency of the Na (29 msec), which indexes early cortical or thalamocortical processing and the subsequent P1 (90 msec) ERP components. Visual processing was modulated at the latency of the early phase of the C1 (62-72 msec) thought to be generated in the primary visual cortex and the subsequent P1 and N1 (176 msec). Tactile processing was modulated at the latency of the N160 (165 msec) likely generated in the secondary association cortex. Intermodal attention enhanced early sensory GBRs for all three modalities: auditory (onset 57 msec), visual (onset 47 msec), and tactile (onset 27 msec). Together, these results suggest that intermodal attention enhances neural processing relatively early in the sensory stream independent from differential effects of spatial and intramodal selective attention.
M. Alex Meredith
Full Text Available Numerous investigations of cortical crossmodal plasticity, most often in congenitally or early-deaf subjects, have indicated that secondary auditory cortical areas reorganize to exhibit visual responsiveness while the core auditory regions are largely spared. However, a recent study of adult-deafened ferrets demonstrated that core auditory cortex was reorganized by the somatosensory modality. Because adult animals have matured beyond their critical period of sensory development and plasticity, it was not known whether adult-deafening and early-deafening would generate the same crossmodal results. The present study used young, ototoxically-lesioned ferrets (n=3) that, after maturation (avg. 173 days old), showed significant hearing deficits (avg. threshold = 72 dB SPL). Recordings from single units (n=132) in core auditory cortex showed that 72% were activated by somatosensory stimulation (compared to 1% in hearing controls). In addition, tracer injection into early hearing-impaired core auditory cortex labeled essentially the same auditory cortical and thalamic projection sources as seen for injections in the hearing controls, indicating that the functional reorganization was not the result of new or latent projections to the cortex. These data, along with similar observations from adult-deafened and adult hearing-impaired animals, support the recently proposed brainstem theory for crossmodal plasticity induced by hearing loss.
Full Text Available To compare the development of the auditory system in hearing and completely acoustically deprived animals, naive congenitally deaf white cats (CDCs) and hearing controls (HCs) were investigated at different developmental stages from birth until adulthood. The CDCs had no hearing experience before the acute experiment. In both groups of animals, responses to cochlear implant stimulation were acutely assessed. Electrically evoked auditory brainstem responses (E-ABRs) were recorded with monopolar stimulation at different current levels. CDCs demonstrated extensive development of E-ABRs, from the first signs of responses at postnatal (p.n.) day 3, through the appearance of all waves of the brainstem response at day 8 p.n., to mature responses around day 90 p.n. Wave I of the E-ABRs could not be distinguished from the artifact in the majority of CDCs, whereas in HCs it was clearly separated from the stimulus artifact. Waves II, III, and IV demonstrated higher thresholds in CDCs, whereas this difference was not found for wave V. Amplitudes of wave III were significantly higher in HCs, whereas wave V amplitudes were significantly higher in CDCs. No differences in latencies were observed between the animal groups. These data demonstrate significant postnatal subcortical development in the absence of hearing, and also divergent effects of deafness on the early waves II–IV and wave V of the E-ABR.
Full Text Available BACKGROUND: The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and to the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. METHODOLOGY/PRINCIPAL FINDINGS: Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas, before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in the amplitude of the CM in both types of experiments, the most common effect being a CM decrease, found in fifteen animals. Concomitant with the CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes were completely recovered after ninety minutes in deactivation experiments, only partial recovery was observed in the magnitudes of cochlear responses. CONCLUSIONS/SIGNIFICANCE: These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. The diversity of the
Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.
The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...
赵赋; 武丽; 王博; 杨智君; 王振民; 王兴朝; 李朋; 张晶; 刘丕楠
Objective: To investigate the clinical application value of auditory brainstem response and pure tone audiometry for the early diagnosis of acoustic neuroma. Methods: The clinical data, pure tone audiometry, auditory brainstem response, and enhanced MRI results of 111 patients with acoustic neuroma were analyzed retrospectively. Linear regression analysis was used to assess the correlation between the mean value of pure tone audiometry and the neuroma volume or course of disease. The chi-squared test was used to analyze whether different neuroma volumes differed in the incidence of abnormal auditory brainstem response. Results: Acoustic neuroma caused sensorineural deafness. There was a significant correlation between the mean value of pure tone audiometry and the course of disease (P=0.000). The sensitivity and specificity of auditory brainstem response for the diagnosis of acoustic neuroma were 98.2% and 93.6%, respectively. With tumors divided into two groups by maximum diameter (>3 cm vs. ≤3 cm), the differences in the incidence of abnormal III-V wave intervals on the affected and contralateral sides were statistically significant (P=0.038 and P=0.045, respectively). Conclusion: Auditory brainstem response combined with pure tone audiometry is an effective method for the early diagnosis of acoustic neuroma.
Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya
Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can
Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an Object Related Negativity (ORN) in the auditory event-related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults, and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.
Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine
Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results on twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
Background and Aim: Physiologic measures of cochlear and auditory nerve function may be of assistance in distinguishing hearing disorders due primarily to auditory nerve impairment from those due primarily to cochlear hair cell dysfunction. The goal of the present study was to measure cochlear responses (otoacoustic emissions and cochlear microphonics) and the auditory brainstem response in adults with auditory neuropathy/dys-synchrony and in subjects with normal hearing. Materials and Methods: Patients were 16 adults (32 ears), aged 14-30 years, with auditory neuropathy/dys-synchrony, and 16 individuals aged 16-30 years, of both sexes. The results of transient otoacoustic emission, cochlear microphonic, and auditory brainstem response measures were compared between the two groups, and the effects of age, sex, ear, and degree of hearing loss were studied. Results: The pure-tone average was 48.1 dB HL in the auditory neuropathy/dys-synchrony group, and low-tone-loss and flat audiograms were more frequent than other audiogram shapes. Transient otoacoustic emissions were present in all auditory neuropathy/dys-synchrony subjects except two, and their average was similar in both groups. The latency and amplitude of the largest reversed cochlear microphonic response were significantly higher in auditory neuropathy/dys-synchrony patients than in controls. The correlation between cochlear microphonic amplitude and degree of hearing loss was not significant, and age had a significant effect on some cochlear microphonic measures. The auditory brainstem response was absent in auditory neuropathy/dys-synchrony patients even at low stimulus rates. Conclusion: In adults whose speech understanding is worse than predicted from the degree of hearing loss, raising suspicion of auditory neuropathy/dys-synchrony, low-tone-loss and flat audiograms are more frequent, and the auditory brainstem response is usually absent.
Naue, Nicole; Rach, Stefan; Strüber, Daniel; Huster, Rene J; Zaehle, Tino; Körner, Ursula; Herrmann, Christoph S
Growing evidence from electrophysiological data in animal and human studies suggests that multisensory interaction is not exclusively a higher-order process, but also takes place in primary sensory cortices. Such early multisensory interaction is thought to be mediated by means of phase resetting. The presentation of a stimulus to one sensory modality resets the phase of ongoing oscillations in another modality such that processing in the latter modality is modulated. In humans, evidence for such a mechanism is still sparse. In the current study, the influence of an auditory stimulus on visual processing was investigated by measuring the electroencephalogram (EEG) and behavioral responses of humans to visual, auditory, and audiovisual stimulation with varying stimulus-onset asynchrony (SOA). We observed three distinct oscillatory EEG responses in our data. An initial gamma-band response around 50 Hz was followed by a beta-band response around 25 Hz, and a theta response around 6 Hz. The latter was enhanced in response to cross-modal stimuli as compared to either unimodal stimulus alone. Interestingly, the beta response to unimodal auditory stimuli was dominant in electrodes over visual areas. The SOA between auditory and visual stimuli--albeit not consciously perceived--had a modulatory impact on the multisensory evoked beta-band responses; i.e., the amplitude depended on SOA in a sinusoidal fashion, suggesting a phase reset. These findings further support the notion that parameters of brain oscillations such as amplitude and phase are essential predictors of subsequent brain responses and might be one of the mechanisms underlying multisensory integration.
Papesh, Melissa A; Hurley, Laura M
The neuromodulator serotonin is found throughout the auditory system from the cochlea to the cortex. Although effects of serotonin have been reported at the level of single neurons in many brainstem nuclei, how these effects correspond to more integrated measures of auditory processing has not been well-explored. In the present study, we aimed to characterize the effects of serotonin on far-field auditory brainstem responses (ABR) across a wide range of stimulus frequencies and intensities. Using a mouse model, we investigated the consequences of systemic serotonin depletion, as well as the selective stimulation and suppression of the 5-HT1 and 5-HT2 receptors, on ABR latency and amplitude. Stimuli included tone pips spanning four octaves presented over a forty dB range. Depletion of serotonin reduced the ABR latencies in Wave II and later waves, suggesting that serotonergic effects occur as early as the cochlear nucleus. Further, agonists and antagonists of specific serotonergic receptors had different profiles of effects on ABR latencies and amplitudes across waves and frequencies, suggestive of distinct effects of these agents on auditory processing. Finally, most serotonergic effects were more pronounced at lower ABR frequencies, suggesting larger or more directional modulation of low-frequency processing. This is the first study to describe the effects of serotonin on ABR responses across a wide range of stimulus frequencies and amplitudes, and it presents an important step in understanding how serotonergic modulation of auditory brainstem processing may contribute to modulation of auditory perception.
Schochat, E; Musiek, F E; Alonso, R; Ogata, J
The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 microV (mean), 0.39 (SD--standard deviation) for the (C)APD group and 1.18 microV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 microV (mean), 0.31 (SD) for the (C)APD group and 1.00 microV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 microV (mean), 0.82 (SD)] and C3-A2 [1.24 microV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.
Maor, Ido; Shalev, Amos; Mizrahi, Adi
In the auditory system, early neural stations such as brain stem are characterized by strict tonotopy, which is used to deconstruct sounds to their basic frequencies. But higher along the auditory hierarchy, as early as primary auditory cortex (A1), tonotopy starts breaking down at local circuits. Here, we studied the response properties of both excitatory and inhibitory neurons in the auditory cortex of anesthetized mice. We used in vivo two photon-targeted cell-attached recordings from identified parvalbumin-positive neurons (PVNs) and their excitatory pyramidal neighbors (PyrNs). We show that PyrNs are locally heterogeneous as characterized by diverse best frequencies, pairwise signal correlations, and response timing. In marked contrast, neighboring PVNs exhibited homogenous response properties in pairwise signal correlations and temporal responses. The distinct physiological microarchitecture of different cell types is maintained qualitatively in response to natural sounds. Excitatory heterogeneity and inhibitory homogeneity within the same circuit suggest different roles for each population in coding natural stimuli.
Siegelaar, S. E.; Olff, M.; Bour, L. J.; Veelo, D.; Zwinderman, A. H.; van Bruggen, G.; de Vries, G. J.; Raabe, S.; Cupido, C.; Koelman, J. H. T. M.; Tijssen, M. A. J.
Post-traumatic stress disorder (PTSD) patients are considered to have excessive EMG responses in the orbicularis oculi (OO) muscle and excessive autonomic responses to startling stimuli. The aim of the present study was to gain more insight into the pattern of the generalized auditory startle reflex.
Wong, Carmen; Chabot, Nicole; Kok, Melanie A; Lomber, Stephen G
Cross-modal plasticity following peripheral sensory loss enables deprived cortex to provide enhanced abilities in remaining sensory systems. These functional adaptations have been demonstrated in cat auditory cortex following early-onset deafness in electrophysiological and psychophysical studies. However, little information is available concerning any accompanying structural compensations. To examine the influence of sound experience on areal cartography, auditory cytoarchitecture was examined in hearing cats, early-deaf cats, and cats with late-onset deafness. Cats were deafened shortly after hearing onset or in adulthood. Cerebral cytoarchitecture was revealed immunohistochemically using SMI-32, a monoclonal antibody used to distinguish auditory areas in many species. Auditory areas were delineated in coronal sections and their volumes measured. Staining profiles observed in hearing cats were conserved in early- and late-deaf cats. In all deaf cats, dorsal auditory areas were the most mutable. Early-deaf cats showed further modifications, with significant expansions in second auditory cortex and ventral auditory field. Borders between dorsal auditory areas and adjacent visual and somatosensory areas were shifted ventrally, suggesting expanded visual and somatosensory cortical representation. Overall, this study shows the influence of acoustic experience in cortical development, and suggests that the age of auditory deprivation may significantly affect auditory areal cartography.
Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R; Shao, Jie; Lozoff, Betsy
Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with both Auditory Brainstem Response (ABR) and language assessments. At 6 weeks and/or 9 months of age, the infants underwent ABR testing using both a standard hearing screening protocol with 30 dB clicks and a second protocol using click pairs separated by 8, 16, and 64-ms intervals presented at 80 dB. We evaluated the effects of interval duration on ABR latency and amplitude elicited by the second click. At 9 months, language development was assessed via parent report on the Chinese Communicative Development Inventory - Putonghua version (CCDI-P). Wave V latency z-scores of the 64-ms condition at 6 weeks showed strong direct relationships with Wave V latency in the same condition at 9 months. More importantly, shorter Wave V latencies at 9 months showed strong relationships with the CCDI-P composite consisting of phrases understood, gestures, and words produced. Likewise, infants who had greater decreases in Wave V latencies from 6 weeks to 9 months had higher CCDI-P composite scores. Females had higher language development scores and shorter Wave V latencies at both ages than males. Interestingly, when the ABR Wave V latencies at both ages were taken into account, the direct effects of gender on language disappeared. In conclusion, these results support the importance of low-level auditory processing capabilities for early language acquisition in a population of typically developing young infants. Moreover, the auditory brainstem response in this paradigm shows promise as an electrophysiological marker to predict individual differences in language development in young children.
Venail, Frederic; Mura, Thibault; Akkari, Mohamed; Mathiolon, Caroline; Menjot de Champfleur, Sophie; Piron, Jean Pierre; Sicard, Marielle; Sterkers-Artieres, Françoise; Mondain, Michel; Uziel, Alain
The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position into the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (nucleus device). The electrical response, measured using auto-NRT (neural responses telemetry) algorithm, has been analyzed using multivariate regression with cubic splines in order to take into account the variations of insertion depth of electrodes amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = -0.11 ± 0.02, P < 0.01), the scalar placement of the electrodes (β = -8.50 ± 1.97, P < 0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). Distribution of NRT residues according to CNF could provide a proxy of auditory neurons functioning in implanted cochleas.
Krishnamurti, Sridhar; Forrester, Jennifer; Rutledge, Casey; Holmes, Georgia W
Studies related to plasticity and learning-related phenomena have primarily focused on higher-order processes of the auditory system, such as those in the auditory cortex, and limited information is available on learning- and plasticity-related processes in the auditory brainstem. A clinical electrophysiological test of the speech-evoked ABR known as BioMARK has been developed to evaluate brainstem responses to speech sounds in children with language learning disorders. Fast ForWord (FFW) was used as an auditory intervention program in the current study, and pre-intervention and post-intervention speech-evoked ABR (BioMARK) measures were compared in 2 school-aged children with auditory processing disorders (APD). Significant changes were noted from pre-intervention to post-intervention and reflect plasticity in the auditory brainstem's neural activity to speech stimuli.
Todorovic, Ana; de Lange, Floris P
Repetition of a stimulus, as well as valid expectation that a stimulus will occur, both attenuate the neural response to it. These effects, repetition suppression and expectation suppression, are typically confounded in paradigms in which the nonrepeated stimulus is also relatively rare (e.g., in oddball blocks of mismatch negativity paradigms, or in repetition suppression paradigms with multiple repetitions before an alternation). However, recent hierarchical models of sensory processing inspire the hypothesis that the two might be separable in time, with repetition suppression occurring earlier, as a consequence of local transition probabilities, and suppression by expectation occurring later, as a consequence of learnt statistical regularities. Here we test this hypothesis in an auditory experiment by orthogonally manipulating stimulus repetition and stimulus expectation and, using magnetoencephalography, measuring the neural response over time in human subjects. We found that stimulus repetition (but not stimulus expectation) attenuates the early auditory response (40-60 ms), while stimulus expectation (but not stimulus repetition) attenuates the subsequent, intermediate stage of auditory processing (100-200 ms). These findings are well in line with hierarchical predictive coding models, which posit sequential stages of prediction error resolution, contingent on the level at which the hypothesis is generated.
Brittan-Powell, Elizabeth F; Christensen-Dalsgaard, Jakob; Tang, Yezhong
Although lizards have highly sensitive ears, it is difficult to condition them to sound, making standard psychophysical assays of hearing sensitivity impractical. This paper describes non-invasive measurements of the auditory brainstem response (ABR) in both Tokay geckos (Gekko gecko; nocturnal) … in most bird species.
Higgins, Nathan C.; Storace, Douglas A.; Escabí, Monty A.
Accurate orientation to sound under challenging conditions requires auditory cortex, but it is unclear how spatial attributes of the auditory scene are represented at this level. Current organization schemes follow a functional division whereby dorsal and ventral auditory cortices specialize to encode spatial and object features of sound source, respectively. However, few studies have examined spatial cue sensitivities in ventral cortices to support or reject such schemes. Here Fourier optical imaging was used to quantify best frequency responses and corresponding gradient organization in primary (A1), anterior, posterior, ventral (VAF), and suprarhinal (SRAF) auditory fields of the rat. Spike rate sensitivities to binaural interaural level difference (ILD) and average binaural level cues were probed in A1 and two ventral cortices, VAF and SRAF. Continuous distributions of best ILDs and ILD tuning metrics were observed in all cortices, suggesting this horizontal position cue is well covered. VAF and caudal SRAF in the right cerebral hemisphere responded maximally to midline horizontal position cues, whereas A1 and rostral SRAF responded maximally to ILD cues favoring more eccentric positions in the contralateral sound hemifield. SRAF had the highest incidence of binaural facilitation for ILD cues corresponding to midline positions, supporting current theories that auditory cortices have specialized and hierarchical functional organization.
Parving, A; Salomon, G; Elberling, Claus
An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...
Encina Llamas, Gerard; M. Harte, James; Epp, Bastian
Noise exposure has been shown to cause auditory nerve fiber (ANF) deafferentation in predominantly low-spontaneous rate (SR) fibers. In the present study, auditory steady-state response (ASSR) level growth functions were measured to evaluate the applicability of ASSR to assess compression and the ability to code intensity fluctuations at high stimulus levels. Level growth functions were measured in normal-hearing adults at stimulus levels ranging from 20 to 90 dB SPL. To evaluate compression, ASSR were measured for multiple carrier frequencies simultaneously. To evaluate intensity coding at high intensities, ASSR were measured using … The results indicate that the slope of the ASSR level growth function can be used to estimate peripheral compression simultaneously at four frequencies below 60 dB SPL, while the slope above 60 dB SPL may provide information about the integrity of intensity coding of low-SR fibers.
Lehmann, Alexandre; Schönwiesner, Marc
Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
Jonathan J. Smith
Dishabituation is a return of a habituated response if context or contingency changes. In the mammalian olfactory system, metabotropic glutamate receptor mediated synaptic depression of cortical afferents underlies short-term habituation to odors. It was hypothesized that a known antagonistic interaction between these receptors and norepinephrine β-receptors provides a mechanism for dishabituation. The results demonstrate that a 108 dB siren induces a two-fold increase in norepinephrine content in the piriform cortex. The same auditory stimulus induces dishabituation of odor-evoked heart rate orienting bradycardia responses in awake rats. Finally, blockade of piriform cortical norepinephrine β-receptors with bilateral intracortical infusions of propranolol (100 μM) disrupts auditory-induced dishabituation of odor-evoked bradycardia responses. These results provide a cortical mechanism for a return of habituated sensory responses following a cross-modal alerting stimulus.
Depireux, Didier A.; Simon, Jonathan Z.; Shamma, Shihab A.
We review recent developments in the measurement of the dynamics of the response properties of auditory cortical neurons to broadband sounds, which is closely related to the perception of timbre. The emphasis is on a method that characterizes the spectro-temporal properties of single neurons to dynamic, broadband sounds, akin to the drifting gratings used in vision. The method treats the spectral and temporal aspects of the response on an equal footing.
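The dynamic broadband sounds described here are typically moving "ripples", the auditory analogue of drifting visual gratings: a sinusoidal spectral envelope that drifts along a logarithmic frequency axis. A minimal sketch of such an envelope, with illustrative parameter values that are assumptions rather than anything taken from the abstract:

```python
import numpy as np

def ripple_envelope(t, x, rate=8.0, density=0.4, depth=0.9):
    """Spectro-temporal envelope of a moving ripple.

    t: time in seconds; x: tonotopic axis in octaves above the lowest carrier;
    rate: ripple velocity in Hz; density: ripple density in cycles/octave;
    depth: modulation depth (0..1). All values here are illustrative.
    """
    return 1.0 + depth * np.sin(2 * np.pi * (rate * t + density * x))

# Build a ripple "spectrogram" over 1 s and 5 octaves of carriers.
t = np.linspace(0, 1.0, 1000)                  # time samples
x = np.linspace(0, 5.0, 128)                   # carrier positions in octaves
env = ripple_envelope(t[None, :], x[:, None])  # shape: (carriers, time)
```

Presenting ripples of different rates and densities and correlating the response with the envelope yields the neuron's spectro-temporal response field, much as drifting gratings yield spatiotemporal receptive fields in vision.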
Tierney, Adam T; Bergeson-Dana, Tonya R; Pisoni, David B
The present study investigated a possible link between musical training and immediate memory span by testing experienced musicians and three groups of musically inexperienced subjects (gymnasts, Psychology 101 students, and video game players) on sequence memory and word familiarity tasks. By including skilled gymnasts who began studying their craft by age six, video game players, and Psychology 101 students as comparison groups, we attempted to control for some of the ways skilled musicians may differ from participants drawn from the general population in terms of gross motor skills and intensive experience in a highly skilled domain from an early age. We found that musicians displayed longer immediate memory spans than the comparison groups on auditory presentation conditions of the sequence reproductive span task. No differences were observed between the four groups on the visual conditions of the sequence memory task. These results provide additional converging support to recent findings showing that early musical experience and activity-dependent learning may selectively affect verbal rehearsal processes and the allocation of attention in sequence memory tasks.
Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T
Although multisensory integration has been an important area of recent research, most studies have focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior just as effectively, which we studied here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI-signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD-responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay, which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity.
Chung, Yoojin; Delgutte, Bertrand; Colburn, H Steven
Bilateral cochlear implants (CIs) provide improvements in sound localization and speech perception in noise over unilateral CIs. However, the benefits arise mainly from the perception of interaural level differences, while bilateral CI listeners' sensitivity to interaural time difference (ITD) is poorer than normal. To help understand this limitation, a set of ITD-sensitive neural models was developed to study binaural responses to electric stimulation. Our working hypothesis was that central auditory processing is normal with bilateral CIs so that the abnormality in the response to electric stimulation at the level of the auditory nerve fibers (ANFs) is the source of the limited ITD sensitivity. A descriptive model of ANF response to both acoustic and electric stimulation was implemented and used to drive a simplified biophysical model of neurons in the medial superior olive (MSO). The model's ITD sensitivity was found to depend strongly on the specific configurations of membrane and synaptic parameters for different stimulation rates. Specifically, stronger excitatory synaptic inputs and faster membrane responses were required for the model neurons to be ITD-sensitive at high stimulation rates, whereas weaker excitatory synaptic input and slower membrane responses were necessary at low stimulation rates, for both electric and acoustic stimulation. This finding raises the possibility of frequency-dependent differences in neural mechanisms of binaural processing; limitations in ITD sensitivity with bilateral CIs may be due to a mismatch between stimulation rate and cell parameters in ITD-sensitive neurons.
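The MSO neurons modeled above act, in essence, as coincidence detectors on their binaural inputs: an internal delay that compensates the interaural time difference maximizes coincident arrivals. A toy sketch of that principle (deterministic periodic spike trains and an arbitrary coincidence window; this is not the authors' auditory-nerve or biophysical MSO model):

```python
import numpy as np

def coincidence_count(left, right, window=0.00005):
    """Count left-ear spikes with a right-ear spike within `window` seconds."""
    return sum(np.any(np.abs(right - t) <= window) for t in left)

rate_hz = 250.0                            # low pulse rate, where ITD coding is best
left = np.arange(0, 0.5, 1.0 / rate_hz)    # perfectly periodic left-ear spike times
itd = 0.0004                               # 400 microsecond interaural time difference
right = left + itd                         # right-ear input lags by the ITD

# Scan internal delays applied to the left input; coincidences should peak
# when the internal delay matches the stimulus ITD.
delays = np.arange(-0.001, 0.001, 0.0001)
counts = [coincidence_count(left + d, right) for d in delays]
best_delay = delays[int(np.argmax(counts))]
```

In the paper's framework, whether such coincidence detection survives electric stimulation depends on how membrane and synaptic time constants match the stimulation rate; this sketch only illustrates the delay-tuning principle itself.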
Gurdev Lal Goyal
Background: Auditory brainstem response is an objective electrophysiological method for assessing the auditory pathways from the auditory nerve to the brainstem. The aim of this study was to correlate and to assess the degree of involvement of peripheral and central regions of brainstem auditory pathways with increasing severity of hypertension, among patients with essential hypertension. Method: This study was conducted on 50 healthy age- and sex-matched controls (Group I) and 50 hypertensive patients (Group II). The latter group was further sub-divided into Group IIa (Grade 1 hypertension), Group IIb (Grade 2 hypertension), and Group IIc (Grade 3 hypertension), as per WHO guidelines. These responses/potentials were recorded using electroencephalogram electrodes on a root-mean-square electromyography EP MARC II (PC-based) machine, and data were statistically compared between the various groups by way of one-way ANOVA. The parameters used for analysis were the absolute latencies of Waves I through V, interpeak latencies (IPLs) and the amplitude ratio of Wave V/I. Result: The absolute latency of Wave I was observed to be significantly increased in Group IIa and IIb hypertensives, while Wave V absolute latency was highly significantly prolonged in Group IIb and IIc, as compared to that of the normal control group. All the hypertensives, that is, Group IIa, IIb, and IIc patients, were found to have highly significantly prolonged III-V IPL as compared to that of normal healthy controls. Further, intergroup comparison among hypertensive patients revealed a significant prolongation of Wave V absolute latency and III-V IPL in Group IIb and IIc patients as compared to Group IIa patients. These findings suggest a sensory deficit along with synaptic delays across the auditory pathways in all the hypertensives, the deficit most markedly affecting auditory processing time at the pons to midbrain (IPL III-V) region of the auditory pathways among Grade 2 and 3 hypertensives.
Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W
The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development.
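The canonical model referred to here is an oscillator poised near a Hopf bifurcation, whose nonlinear resonance produces mode-locked responses to periodic input. A minimal numerical sketch of such an oscillator (normal-form equation with illustrative parameters, not the authors' fitted model) shows the frequency-selective amplification that distinguishes it from a linear, delay-based filter:

```python
import numpy as np

def canonical_oscillator(omega, forcing_freq, alpha=-1.0, beta=-10.0,
                         F=0.5, dt=1e-4, T=2.0):
    """Forward-Euler integration of a Hopf normal-form oscillator,
    dz/dt = z*(alpha + i*omega + beta*|z|^2) + F*exp(i*forcing_freq*t),
    returning the mean steady-state response amplitude |z|.
    Parameter values are illustrative only."""
    z = 0.01 + 0j
    amps = []
    for k in range(int(T / dt)):
        t = k * dt
        dz = z * (alpha + 1j * omega + beta * abs(z) ** 2) \
             + F * np.exp(1j * forcing_freq * t)
        z += dt * dz
        if t > T / 2:                 # discard the initial transient
            amps.append(abs(z))
    return float(np.mean(amps))

w = 2 * np.pi * 4.0                         # natural frequency, 4 Hz
on_resonance = canonical_oscillator(w, w)   # driven at its own frequency
off_resonance = canonical_oscillator(w, 2 * np.pi * 10.0)
```

Driven at its natural frequency the oscillator responds with a large, compressively saturating amplitude, while a detuned drive yields a much weaker response; generalizations of this selectivity to m:n frequency ratios are what produce mode-locked responses to musical intervals.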
Afra, Pegah; Funke, Michael; Matsuo, Fumisuke
Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as temporal lobe pathology with intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as intake of hallucinogens or psychedelics. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for the presence of early cross modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross modal interactions. Here we review the existing literature on the acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms.
Malinowski, T.; Klepacki, J.; Wagstyl, R.
The evoked response audiometry method of testing hearing loss is presented and the results of comparative studies using subjective tonal audiometry and evoked response audiometry in tests of 56 healthy men with good hearing are discussed. The men were divided into three groups according to age and place of work: work place without increased noise; work place with noise and vibrations (at drilling machines); work place with noise and shocks (work at excavators in surface coal mines). The ERA-MKII audiometer produced by the Medelec-Amplaid firm was used. Audiometric threshold curves for the three groups of tested men are given. At frequencies of 500, 1000 and 4000 Hz the mean objective auditory threshold was shifted by 4-9.5 dB in comparison to the subjective auditory threshold. (In Polish)
Rønne, Filip Munch; Dau, Torsten; Harte, James
A quantitative model is presented that describes the formation of auditory brainstem responses (ABR) to tone pulses, clicks and rising chirps as a function of stimulation level. The model computes the convolution of the instantaneous discharge rates using the “humanized” nonlinear auditory … of tone-pulse evoked wave-V latency with frequency, but underestimates the level dependency of the tone-pulse as well as click-evoked latency values. Furthermore, the model correctly predicts the nonlinear wave-V amplitude behavior in response to the chirp stimulation, both as a function of chirp sweeping rate and level. Overall, the results support the hypothesis that the pattern of ABR generation is strongly affected by the nonlinear and dispersive processes in the cochlea.
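The convolution step in such ABR models can be sketched as follows: the simulated instantaneous discharge rate of the auditory periphery is convolved with a "unitary response", the elementary scalp potential contributed by a single discharge. Both components below are placeholders (a synthetic onset burst and a damped sinusoid), not the paper's humanized auditory-nerve model or its measured unitary response:

```python
import numpy as np

fs = 30000.0                         # sampling rate in Hz
t = np.arange(0, 0.01, 1 / fs)       # 10 ms analysis window

# Placeholder instantaneous discharge rate (spikes/s): an onset burst
# decaying onto a sustained rate, standing in for the AN model output.
rate = 200.0 + 1500.0 * np.exp(-t / 0.001)

# Placeholder unitary response: a damped sinusoid standing in for the
# elementary potential contributed by one discharge.
u = np.exp(-t / 0.0005) * np.sin(2 * np.pi * 1000.0 * t)

# Modeled potential: convolution of discharge rate with unitary response,
# truncated to the analysis window and scaled by the sample spacing.
abr = np.convolve(rate, u)[: t.size] / fs
```

In the full model this convolution is carried out per characteristic frequency and summed, so cochlear nonlinearity and dispersion shape the predicted wave-V latency and amplitude.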
Siegelaar, S E; Olff, M; Bour, L J; Veelo, D; Zwinderman, A H; van Bruggen, G; de Vries, G J; Raabe, S; Cupido, C; Koelman, J H T M; Tijssen, M A J
Post-traumatic stress disorder (PTSD) patients are considered to have excessive EMG responses in the orbicularis oculi (OO) muscle and excessive autonomic responses to startling stimuli. The aim of the present study was to gain more insight into the pattern of the generalized auditory startle reflex (ASR). Reflex EMG responses to auditory startling stimuli in seven muscles rather than the EMG response of the OO alone as well as the psychogalvanic reflex (PGR) were studied in PTSD patients and healthy controls. Ten subjects with chronic PTSD (>3 months) and a history of excessive startling and 11 healthy controls were included. Latency, amplitude and duration of the EMG responses and the amplitude of the PGR to 10 auditory stimuli of 110 dB SPL were investigated in seven left-sided muscles. The size of the startle reflex, defined by the number of muscles activated by the acoustic stimulus and by the amplitude of the EMG response of the OO muscle as well, did not differ significantly between patients and controls. Median latencies of activity in the sternocleidomastoid (SC) (patients 80 ms; controls 54 ms) and the deltoid (DE) muscles (patients 113 ms; controls 69 ms) were prolonged significantly in PTSD compared to controls (P < 0.05). In the OO muscle, a late response (median latency in patients 308 ms; in controls 522 ms), probably the orienting reflex, was more frequently present in patients (56%) than in controls (12%). In patients, the mean PGR was enlarged compared to controls (P < 0.05). The size of the ASR response is not enlarged in PTSD patients. EMG latencies in the PTSD patients are prolonged in SC and DE muscles. The presence of a late response in the OO muscle discriminates between groups of PTSD patients with a history of startling and healthy controls. In addition, the autonomic response, i.e. the enlarged amplitude of the PGR can discriminate between these groups.
Holger F Sperdin
Several lines of research have documented early-latency non-linear response interactions between audition and touch in humans and non-human primates. That these effects have been obtained under anesthesia, passive stimulation, as well as speeded reaction time tasks would suggest that some multisensory effects are not directly influencing behavioral outcome. We investigated whether the initial non-linear neural response interactions have a direct bearing on the speed of reaction times. Electrical neuroimaging analyses were applied to event-related potentials (ERPs) in response to auditory, somatosensory, or simultaneous auditory-somatosensory multisensory stimulation that were in turn averaged according to trials leading to fast and slow reaction times (using a median split of individual subject data for each experimental condition). Responses to multisensory stimulus pairs were contrasted with each unisensory response as well as summed responses from the constituent unisensory conditions. Behavioral analyses indicated that neural response interactions were only implicated in the case of trials producing fast reaction times, as evidenced by facilitation in excess of probability summation. In agreement, supra-additive non-linear neural response interactions between multisensory and the sum of the constituent unisensory stimuli were evident over the 40-84 ms post-stimulus period only when reaction times were fast, whereas subsequent effects (86-128 ms) were observed independently of reaction time speed. Distributed source estimations further revealed that these earlier effects followed from supra-additive modulation of activity within posterior superior temporal cortices. These results indicate the behavioral relevance of early multisensory phenomena.
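"Facilitation in excess of probability summation" is conventionally assessed with Miller's race-model inequality, which bounds the multisensory reaction-time CDF by the sum of the unisensory CDFs. A sketch under that assumption, using synthetic reaction times invented for illustration rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reaction times in seconds: the bimodal condition is faster
# than an independent race between the unisensory channels would allow.
rt_aud = rng.normal(0.30, 0.04, 500)     # auditory-only trials
rt_som = rng.normal(0.32, 0.04, 500)     # somatosensory-only trials
rt_multi = rng.normal(0.24, 0.03, 500)   # bimodal trials

def cdf(rts, t):
    """Empirical cumulative probability of a response by time t."""
    return np.mean(rts <= t)

# Miller's inequality: P_multi(RT <= t) <= P_aud(RT <= t) + P_som(RT <= t).
# A positive difference (a violation) indicates integration beyond
# probability summation.
probe = np.arange(0.15, 0.45, 0.01)
violation = [cdf(rt_multi, t) - (cdf(rt_aud, t) + cdf(rt_som, t))
             for t in probe]
max_violation = max(violation)
```

With these synthetic distributions the inequality is clearly violated at short latencies, which is the signature of multisensory facilitation exceeding statistical redundancy gain.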
Chan, Y W; McLeod, J G; Tuck, R R; Feary, P A
Brain stem auditory evoked responses (BAERs) were performed on 25 alcoholic patients with Wernicke-Korsakoff syndrome, 56 alcoholic patients without Wernicke-Korsakoff syndrome, 24 of whom had cerebellar ataxia, and 37 control subjects. Abnormal BAERs were found in 48% of patients with Wernicke-Korsakoff syndrome, in 25% of alcoholic patients without Wernicke-Korsakoff syndrome but with cerebellar ataxia, and in 13% of alcoholic patients without Wernicke-Korsakoff syndrome or ataxia. The mean...
Källstrand, Johan; Olsson, Olle; Fristedt Nehlstedt, Sara; Sköld, Mia Ling; Nielzén, Sören (SensoDetect AB, Lund, Sweden; Department of Clinical Neuroscience, Section of Psychiatry, Lund University, Lund, Sweden)
Abnormal auditory information processing has been reported in individuals with autism spectrum disorders (ASD). In the present study auditory processing was investigated by recording auditory brainstem responses (ABRs) elicited by forward masking in adults diagnosed with Asperger syndrome (AS). Sixteen AS subjects were included in the forward masking experiment and compared to three control groups consisting of healthy individuals (n = 16), schizophrenic patients (n = 16) and attention deficit hyperactivity disorder patients (n = 16), respectively, of matching age and gender. The results showed that the AS subjects exhibited abnormally low activity in the early part of their ABRs that distinctly separated them from the three control groups. Specifically, wave III amplitudes were significantly lower in the AS group than for all the control groups in the forward masking condition (P < 0.005), which was not the case in the baseline condition. Thus, electrophysiological measurements of ABRs to complex sound stimuli (eg, forward masking) may lead to a better understanding of the underlying neurophysiology of AS. Future studies may further point to specific ABR characteristics in AS individuals that separate them from individuals diagnosed with other neurodevelopmental diseases. Keywords: Asperger syndrome, auditory brainstem response, forward masking, psychoacoustics
In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta band (12-20 Hz) DMS-specific enhanced functional interaction between the sources in left auditory cortex and those in left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.
McCormick, B; Curnock, D A; Spavins, F
The Linco-Bennett auditory response cradle is a microprocessor controlled device for screening the hearing of neonates. A total of 396 neonates admitted to a special care unit were tested on the cradle and later followed up in a comprehensive test programme between the ages of 3 months and 8 months. Altogether 374 (94%) were available for follow up. The use of the cradle resulted in the detection of six neonates with appreciable deafness. One neonate who passed the cradle test has severe bilateral hearing impairment. The false alarm rate for neonates failing two tests on the cradle but having normal hearing at follow up was 4.3%. The auditory response cradle was designed for use in mass screening programmes but testing the hearing of all newborns would require many staff. It is argued that this is unrealistic when resources are scarce, but that neonates in high risk groups should have their hearing screened at birth by an objective test such as this. The cradle has considerable potential but its method of use and the 'decision making' programme could be improved.
Smulders, Tom V; Jarvis, Erich D
Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for this novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs to repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of the acoustic identity and spatial location are different, and that allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable location of a sound is due to the unexpectedness of the location change, and might be due to different underlying mechanisms than the dishabituation and separate habituations to different acoustic stimuli.
Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
The effects of visual speech (the moving image of the speaker's face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions: the moving image of the same speaker's face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from the speaker's face using a strong Gaussian filter: control condition). On average, the latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information.
Cai, Shanqing; Beal, Deryk S; Ghosh, Satrajit S; Guenther, Frank H; Perkell, Joseph S
Auditory feedback (AF), the speech signal received by a speaker's own auditory system, contributes to the online control of speech movements. Recent studies based on AF perturbation provided evidence for abnormalities in the integration of auditory error with ongoing articulation and phonation in persons who stutter (PWS), but stopped short of examining connected speech. This is a crucial limitation considering the importance of sequencing and timing in stuttering. In the current study, we imposed time-varying perturbations on AF while PWS and fluent participants uttered a multisyllabic sentence. Two distinct types of perturbations were used to separately probe the control of the spatial and temporal parameters of articulation. While PWS exhibited only subtle anomalies in the AF-based spatial control, their AF-based fine-tuning of articulatory timing was substantially weaker than normal, especially in early parts of the responses, indicating slowness in the auditory-motor integration for temporal control.
Shiell, Martha M; Champoux, François; Zatorre, Robert J
Cross-modal reorganization after sensory deprivation is a model for understanding brain plasticity. Although it is a well-documented phenomenon, we still know little of the mechanisms underlying it or the factors that constrain and promote it. Using fMRI, we identified visual motion-related activity in 17 early-deaf and 17 hearing adults. We found that, in the deaf, the posterior superior temporal gyrus (STG) was responsive to visual motion. We compared functional connectivity of this reorganized cortex between groups to identify differences in functional networks associated with reorganization. In the deaf more than the hearing, the STG displayed increased functional connectivity with a region in the calcarine fissure. We also explored the role of hearing aid use, a factor that may contribute to variability in cross-modal reorganization. We found that both the cross-modal activity in STG and the functional connectivity between STG and calcarine cortex correlated with duration of hearing aid use, supporting the hypothesis that residual hearing affects cross-modal reorganization. We conclude that early auditory deprivation alters not only the organization of auditory regions but also the interactions between auditory and primary visual cortex and that auditory input, as indexed by hearing aid use, may inhibit cross-modal reorganization in early-deaf people.
Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria
Although evidence is mixed, studies have shown that blind individuals perform better than sighted at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests in absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity.
Yeong Ro Lee
Diabetes mellitus (DM) is a metabolic disease that involves disorders such as diabetic retinopathy, diabetic neuropathy, and diabetic hearing loss. Recently, neurotrophin has become a treatment target that has been shown to be an attractive alternative for recovering auditory function altered by DM. The aim of this study was to evaluate the effect of DA9801, a mixture of Dioscorea nipponica and Dioscorea japonica extracts, on the auditory function damage produced in a STZ-induced diabetic model, and to provide evidence of the mechanisms involved in these protective effects. We found a potential application of DA9801 for hearing impairment in the STZ-induced diabetic model, demonstrated by a reduction of the DM-induced deterioration of the ABR threshold in response to clicks and a normalization of wave I-IV latencies and Pa latencies in AMLR. We also show evidence that these effects might be elicited by the induction of NGF through Nr3c1 and Akt. This result therefore suggests that the neuroprotective effects of DA9801 against the auditory damage produced by DM may result from an NGF increase mediated by Nr3c1 via Akt.
Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of the ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of the contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of the inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.
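In a frequency-tagging design like this, each stream's ASSR is read out as the spectral amplitude of the recording at that stream's modulation frequency. A minimal sketch on simulated data (the 40 Hz component amplitude and the noise level are arbitrary choices, not values from the study):

```python
import numpy as np

fs = 1000.0                                  # recording sampling rate, Hz
t = np.arange(0, 10.0, 1 / fs)               # 10 s of data
rng = np.random.default_rng(1)

# Simulated recording: a 40 Hz steady-state component buried in noise.
assr_freq = 40.0
signal = 0.5 * np.sin(2 * np.pi * assr_freq * t) + rng.normal(0, 1.0, t.size)

# One-sided amplitude spectrum; a 10 s window gives 0.1 Hz bin spacing,
# so the tagged frequency falls exactly on a bin.
spectrum = np.abs(np.fft.rfft(signal)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Read out the tagged frequency; its amplitude should approximate 0.5.
tagged_amp = spectrum[int(np.argmin(np.abs(freqs - assr_freq)))]
```

Because each noise stream carries its own tag, the same readout at 16, 23.5, 32.5 and 40 Hz separates the responses to the attended and unattended streams within a single recording.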
Crowell, Sara E.; Berlin, Alicia; Carr, Catherine E; Olsen, Glenn H.; Therrien, Ronald E; Yannuzzi, Sally E; Ketten, Darlene R
There is little biological data available for diving birds because many live in hard-to-study, remote habitats. Only one species of diving bird, the black-footed penguin (Spheniscus demersus), has been studied with respect to auditory capabilities (Wever et al., Proc Natl Acad Sci USA 63:676–680, 1969). We, therefore, measured in-air auditory thresholds in ten species of diving birds, using the auditory brainstem response (ABR). The average audiogram obtained for each species followed the U-shape typical of birds and many other animals. All species tested shared a common region of greatest sensitivity, from 1000 to 3000 Hz, although audiograms differed significantly across species. Thresholds of all duck species tested were more similar to each other than to the two non-duck species tested. The red-throated loon (Gavia stellata) and northern gannet (Morus bassanus) exhibited the highest thresholds, while the lowest thresholds belonged to the duck species, specifically the lesser scaup (Aythya affinis) and ruddy duck (Oxyura jamaicensis). Vocalization parameters were also measured for each species and showed that, with the exception of the common eider (Somateria mollisima), the peak frequency, i.e., the frequency at the greatest intensity, of all species' vocalizations measured here fell between 1000 and 3000 Hz, matching the bandwidth of the most sensitive hearing range.
Karakaş, Sirel; Cakmak, Emine D; Bekçi, Belma; Aydin, Hamdullah
The goal of the study was to investigate the contribution of the delta and theta responses to the peaks of the event-related potential waveform and, specifically, to find possible cognitive correlates of these oscillatory responses in rapid eye movement (REM) sleep and in Stage 2 (spindle sleep), Stage 3 (light sleep), and Stage 4 (deep sleep; slow-wave sleep) of non-REM sleep. Data on overnight sleep were acquired from 12 healthy young adult male volunteers; data for the awake state were obtained from 19 matched males. Brain activity was recorded in response to auditory stimuli (2000 Hz deviant and 1000 Hz standard stimuli: 65 dB, 10 ms rise/fall time, 50 ms duration) under a passive oddball paradigm in sleep and under active and passive oddball (OB-a and OB-p, respectively) paradigms in wakefulness. The effect of the experimental variables (stimulus type, sleep stage) was studied using 2 x 4 repeated-measures analysis of variance and stepwise multiple regression analysis. Overall, three types of configurations were obtained for the oscillatory responses, varying according to sleep stage and stimulus type: large-amplitude, differentiated delta and distinct theta responses of long duration; a distinct theta response of short duration; and a distinct delta response. As in wakefulness, the morphology of the time-domain peaks was found to be due to the superposition of the delta and theta responses. The configuration in REM resembled the responses to the OB-p paradigm in wakefulness, and that in NREM stages resembled the responses to the OB-a paradigm. Auditory information processing varied selectively according to sleep stage and took longer in sleep: comparable peaks were obtained at longer latencies, and later components appeared that did not exist during wakefulness. With respect to the long-duration theta activity and the greater differentiation between the deviant- and standard-elicited responses, Stage 2 appeared to represent the more effortful cognitive processing.
Matragrano, Lisa L; LeBlanc, Meredith M; Chitrapu, Anjani; Blanton, Zane E; Maney, Donna L
Behavioral responses to social stimuli often vary according to endocrine state. Our previous work has suggested that such changes in behavior may be due in part to hormone-dependent sensory processing. In the auditory forebrain of female white-throated sparrows, expression of the immediate early gene ZENK (egr-1) is higher in response to conspecific song than to a control sound only when plasma estradiol reaches breeding-typical levels. Estradiol also increases the number of detectable noradrenergic neurons in the locus coeruleus and the density of noradrenergic and serotonergic fibers innervating auditory areas. We hypothesize, therefore, that reproductive hormones alter auditory responses by acting on monoaminergic systems. This possibility has not been examined in males. Here, we treated non-breeding male white-throated sparrows with testosterone to mimic breeding-typical levels and then exposed them to conspecific male song or frequency-matched tones. We observed selective ZENK responses in the caudomedial nidopallium only in the testosterone-treated males. Responses in another auditory area, the caudomedial mesopallium, were selective regardless of hormone treatment. Testosterone treatment reduced serotonergic fiber density in the auditory forebrain, thalamus, and midbrain, and although it increased the number of noradrenergic neurons detected in the locus coeruleus, it reduced noradrenergic fiber density in the auditory midbrain. Thus, whereas we previously reported that estradiol enhances monoaminergic innervation of the auditory pathway in females, we show here that testosterone decreases it in males. Mechanisms underlying testosterone-dependent selectivity of the ZENK response may differ from estradiol-dependent ones.
Henry, Kenneth S; Kale, Sushrut; Scheidt, Ryan E; Heinz, Michael G
Noninvasive auditory brainstem responses (ABRs) are commonly used to assess cochlear pathology in both clinical and research environments. In the current study, we evaluated the relationship between ABR characteristics and more direct measures of cochlear function. We recorded ABRs and auditory nerve (AN) single-unit responses in seven chinchillas with noise-induced hearing loss. ABRs were recorded for 1-8 kHz tone burst stimuli both before and several weeks after 4 h of exposure to a 115 dB SPL, 50 Hz band of noise with a center frequency of 2 kHz. Shifts in ABR characteristics (threshold, wave I amplitude, and wave I latency) following hearing loss were compared to AN-fiber tuning curve properties (threshold and frequency selectivity) in the same animals. As expected, noise exposure generally resulted in an increase in ABR threshold and decrease in wave I amplitude at equal SPL. Wave I amplitude at equal sensation level (SL), however, was similar before and after noise exposure. In addition, noise exposure resulted in decreases in ABR wave I latency at equal SL and, to a lesser extent, at equal SPL. The shifts in ABR characteristics were significantly related to AN-fiber tuning curve properties in the same animal at the same frequency. Larger shifts in ABR thresholds and ABR wave I amplitude at equal SPL were associated with greater AN threshold elevation. Larger reductions in ABR wave I latency at equal SL, on the other hand, were associated with greater loss of AN frequency selectivity. This result is consistent with linear systems theory, which predicts shorter time delays for broader peripheral frequency tuning. Taken together with other studies, our results affirm that ABR thresholds and wave I amplitude provide useful estimates of cochlear sensitivity. Furthermore, comparisons of ABR wave I latency to normative data at the same SL may prove useful for detecting and characterizing loss of cochlear frequency selectivity.
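The linear-systems prediction cited in this abstract (shorter time delays for broader peripheral tuning) can be made concrete with a gammatone filter, a standard model of cochlear frequency tuning: the impulse-response envelope t^(n-1)·exp(-2πbt) peaks at t = (n-1)/(2πb), so widening the bandwidth b shortens the delay in inverse proportion. The bandwidth values below are hypothetical, chosen only to illustrate the relationship, not taken from the chinchilla data:

```python
import numpy as np

def gammatone_peak_latency(bw_hz, order=4):
    """Peak time of a gammatone impulse-response envelope t**(n-1) * exp(-2*pi*b*t).

    Setting the derivative to zero gives t = (n-1) / (2*pi*b): the filter's
    delay is inversely proportional to its bandwidth b.
    """
    return (order - 1) / (2 * np.pi * bw_hz)

# Hypothetical tuning-curve bandwidths before and after noise exposure.
normal_bw, broadened_bw = 100.0, 300.0            # Hz
t_normal = gammatone_peak_latency(normal_bw)      # ~4.8 ms
t_broad = gammatone_peak_latency(broadened_bw)    # ~1.6 ms
# Tripling the bandwidth cuts the filter delay to one third, consistent
# with shorter ABR wave I latencies when frequency selectivity is lost.
```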
Zhou, M; Wang, N Y
The speech-evoked auditory brainstem response (s-ABR) is elicited by compound syllables; these stimuli are similar to everyday language in that they convey both semantic and non-semantic information. Speech coding takes place as early as the brainstem, and as a new method the s-ABR may help reveal how this speech code is established. Many studies have shown that the s-ABR is related to cognitive ability. Here we mainly discuss the possibility of grading cognitive ability using the s-ABR, the abnormal test results found in individuals with cognitive disorders, and the family factors that contribute to cognitive disorder.
Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian
…to neutralize the charge induced during the cathodic phase. Single-neuron recordings in cat auditory nerve using monophasic electrical stimulation show, however, that both phases in isolation can generate an AP. The site of AP generation differs for both phases, being more central for the anodic phase and more… …perception of CI listeners, a model needs to incorporate the correct responsiveness of the AN to anodic and cathodic polarity. Previous models of electrical stimulation have been developed based on AN responses to symmetric biphasic stimulation or to monophasic cathodic stimulation. These models, however, fail to correctly predict responses to anodic stimulation. This study presents a model that simulates AN responses to anodic and cathodic stimulation. The main goal was to account for the data obtained with monophasic electrical stimulation in cat AN. The model is based on an exponential integrate…
Budd, Timothy W; Timora, Justin R
Recent research suggests that multisensory integration may occur at an early phase of sensory processing and within cortical regions traditionally thought to be exclusively unisensory. Evidence from perceptual and electrophysiological studies indicates that the cross-modal temporal correspondence of multisensory stimuli plays a fundamental role in the cortical integration of information across separate sensory modalities. Further, oscillatory neural activity in sensory cortices may provide the principal mechanism whereby sensory information from separate modalities is integrated. In the present study we aimed to extend this prior research by using the steady-state EEG response (SSR) to examine whether variations in the cross-modal temporal correspondence of amplitude-modulated auditory and vibrotactile stimulation are apparent in SSR activity to multisensory stimulation. To achieve this we varied the cross-modal congruence of modulation rate for passively and simultaneously presented amplitude-modulated auditory and vibrotactile stimuli. In order to maximise the SSR in both modalities, 21 and 40 Hz modulation rates were selected. Consistent with prior SSR studies, the present results showed clear evidence of phase-locking at EEG frequencies corresponding to the modulation rate of auditory and vibrotactile stimulation. As also found previously, the optimal modulation rate for SSR activity differed according to modality, being greater at 40 Hz for auditory responses and at 21 Hz for vibrotactile responses. Despite consistent and reliable changes in SSR activity with manipulations of modulation rate within modality, the present study failed to provide strong evidence of multisensory interactions in SSR activity for temporally congruent, relative to incongruent, cross-modal conditions. The results are discussed in terms of the role of attention as a possible factor in reconciling inconsistencies in SSR studies of multisensory integration.
Bellier, Ludovic; Veuillet, Evelyne; Vesson, Jean-François; Bouchet, Patrick; Caclin, Anne; Thai-Van, Hung
Millions of people across the world are hearing-impaired and rely on hearing aids to improve their everyday life. Objective audiometry could optimize hearing aid fitting and is of particular interest for non-communicative patients. The speech auditory brainstem response (speech ABR), a fine electrophysiological marker of speech encoding, is presently seen as a promising candidate for implementing objective audiometry; yet, unlike lower-frequency auditory-evoked potentials (AEPs) such as cortical AEPs or auditory steady-state responses (ASSRs), aided speech ABRs (i.e., speech ABRs through hearing aid stimulation) have almost never been recorded. This may be due to their high-frequency components requiring a high temporal precision of the stimulation. We assess here a new approach for recording high-quality, artifact-free speech ABRs while stimulating directly through hearing aids. In 4 normal-hearing adults, we recorded speech ABRs evoked by a /ba/ syllable binaurally delivered through insert earphones (for quality control) or through hearing aids. To assess the presence of a potential stimulus artifact, recordings were also made in mute conditions with exactly the same potential sources of stimulus artifacts as in the main runs. Hearing aid stimulation led to artifact-free speech ABRs in each participant, with the same quality as when using insert earphones, as shown by signal-to-noise ratio (SNR) measurements. Our new approach, consisting of directly transmitting speech stimuli through hearing aids, allowed for the perfect temporal precision mandatory in speech ABR recordings, and could thus constitute a decisive step in the investigation of hearing impairment and the improvement of hearing aid fitting. Copyright © 2015 Elsevier B.V. All rights reserved.
Priscilla Augusta Monteiro Ferronato
Given that the auditory system is rather well developed at the end of the third trimester of pregnancy, it is likely that couplings between acoustics and motor activity can be integrated as early as the beginning of postnatal life. The aim of the present mini-review was to summarize and discuss studies on early auditory-motor integration, focusing particularly on upper-limb movements (one of the most crucial means of interacting with the environment) in association with auditory stimuli, to develop further understanding of their significance for early infant development. Many studies have investigated the relationship between various infant behaviors (e.g., sucking, visual fixation, head turning) and auditory stimuli, and established that human infants can be observed displaying couplings between action and environmental sensory stimulation from just after birth, clearly indicating a propensity for intentional behavior. Surprisingly few studies, however, have investigated the associations between upper-limb movements and different auditory stimuli in newborns and young infants, particularly infants born at risk for developmental disorders/delays. Findings from studies of early auditory-motor interaction support the view that the developing integration of sensory and motor systems is a fundamental part of the process guiding the development of goal-directed action in infancy, of great importance for continued motor, perceptual, and cognitive development. At-risk infants (e.g., those born preterm) may display central auditory processing disorders, negatively affecting early sensory-motor integration and resulting in long-term consequences for gesturing, language development, and social communication. Consequently, there is a need for more studies on such implications.
Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian
Cochlear implants (CI) directly stimulate the auditory nerve (AN), bypassing the mechano-electrical transduction in the inner ear. Trains of biphasic, charge-balanced pulses (anodic and cathodic) are used as stimuli to avoid damage to the tissue. The pulses of either polarity are capable of produ… …μs, which is large enough to affect the temporal coding of sounds and hence, potentially, the communication abilities of the CI listener. In the present study, two recently proposed models of electric stimulation of the AN [1,2] were considered in terms of their efficacy to predict the spike timing… …for anodic and cathodic stimulation of the AN of cat. The models' responses to electrical pulses of various shapes [4,5,6] were also analyzed. It was found that, while the models can account for the firing rates in response to various biphasic pulse shapes, they fail to correctly describe the timing…
Background and Aim: In view of the improvement in therapeutic outcomes of cancer treatment in children, the resulting increase in survival rates, and the importance of hearing in speech and language development, this research project was intended to assess the effects of cisplatin-group agents on hearing ability in children aged 6 months to 12 years. Methods: In this cross-sectional study, the hearing of 10 children on cisplatin-group medication for cancer who met the inclusion criteria was examined by recording auditory brainstem responses (ABR) using three stimuli: clicks and 4 and 8 kHz tone bursts. All children were examined twice, before drug administration and within 72 hours after receiving the last dose, and the results were compared. Results: There was a significant difference between hearing thresholds before and after drug administration (p<0.05). Comparison of right- and left-ear thresholds revealed no significant difference. Conclusion: The ototoxic effects of the cisplatin group were confirmed in this study. The insignificant difference observed when comparing right- and left-ear hearing thresholds could be due to the small sample size. The auditory brainstem response test, especially with frequency specificity, proved to be a useful method for assessing cisplatin ototoxicity.
CHEN Aiting; LIANG Sichao; ZHANG Ruining; GUO Weiwei; ZHOU Qiyou; JI Fei
Objective To analyze the characteristics of the auditory brainstem response (ABR) in presbycusis patients older than 90 years. Methods Fourteen presbycusis patients older than 90 years (presbycusis group, 91.4 ± 1.3 years, 26 ears) and 9 normal-hearing young adults (control group, 22.7 ± 1.2 years, 18 ears) participated in the study. Alternating-polarity click-evoked ABRs were recorded in both groups. The peak latencies (PL) of peaks I, Ⅲ, and V, and the inter-peak intervals (IPI) of I-Ⅲ, Ⅲ-V, and I-V were compared between groups. Results In the elder presbycusis patients, the occurrence rates of peaks I and Ⅲ were both 76.9%, and that of peak V was 84.6%. In the presbycusis group, the peak latencies of I, Ⅲ, and V were significantly longer than those of the control group (P<0.001). There was no significant difference between groups in the IPI of peaks I-Ⅲ (P=0.298), Ⅲ-V (P=0.254), or I-V (P=0.364). Conclusions Auditory brainstem responses in presbycusis patients older than 90 years showed worse wave differentiation.
Pegah Afra, Michael Funke, Fumisuke Matsuo (Department of Neurology, University of Utah, Salt Lake City, UT, USA). Abstract: Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced, and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system, as well as with temporal lobe pathology with intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as with intake of hallucinogens or psychedelics. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on the acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms. Keywords: synesthesia, auditory-visual, cross modal
Background and Aim: Following early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. To investigate this process, event-related potentials provide accurate information about the time course of neural activation as well as perceptual and cognitive processes. In this study, the latency and amplitude of the auditory P300 were compared between sighted and early-blind individuals aged 18-25 years. Methods: In this cross-sectional study, the auditory P300 potential was measured in a conventional oddball paradigm using two tone-burst stimuli (1000 and 2000 Hz) in 40 sighted subjects and 19 early-blind subjects with a mean age of 20.94 years. Results: The mean latency of the P300 in early-blind subjects was significantly shorter than in sighted subjects (p=0.00). There was no significant difference in amplitude between the two groups (p>0.05). Conclusion: The reduced latency of the P300 in early-blind subjects compared with sighted subjects probably indicates that automatic processing and information categorization are faster in early-blind subjects because of sensory compensation. It seems that neural plasticity increases the rate of auditory processing and attention in early-blind subjects.
Sersen, E A; Heaney, G; Clausen, J; Belser, R; Rainbow, S
Brainstem auditory-evoked responses (BAER) were obtained from 46 control, 16 Down's syndrome, and 48 autistic male subjects. Six Down's syndrome and 37 autistic subjects were tested with sedation. Sedated and unsedated Down's syndrome subjects displayed shorter absolute and interpeak latencies for early components of the BAER whereas the sedated autistic group showed longer latencies for the middle and late components. The prolongation of latencies in the sedated autistic group was unrelated to age or intellectual level. Although individuals requiring sedation may have a higher probability of neurological impairment, an effect of sedation on the BAER cannot be ruled out.
Ansari, Mohammad Shamim; Rangasayee, R
Speech-evoked auditory brainstem responses (spABRs) provide considerable information of clinical relevance, describing the auditory processing of complex stimuli at the subcortical level. Substantial research data suggest faithful representation of the temporal and spectral characteristics of speech sounds. However, the spABR is known to be affected by the acoustic properties of speech, language experience, and training, and hence the literature on brainstem speech processing remains inconclusive. This warrants the establishment of language-specific speech stimuli to describe brainstem processing in users of a specific oral language. The objective of the current study was to develop a Hindi speech stimulus for recording auditory brainstem responses. A 40-ms Hindi stop-consonant syllable containing five formants was constructed. Brainstem evoked responses to the speech sound |da| were obtained from 25 normal-hearing (NH) adults with a mean age of 20.9 years (SD = 2.7; range 18-25 years) and from ten subjects (HI) with mild SNHL, mean age 21.3 years (SD = 3.2; range 18-25 years). Statistically significant differences in the mean identification scores of the synthesized speech stimuli |da| and |ga| between NH and HI were obtained. The mean, median, standard deviation, minimum, maximum, and 95% confidence interval for the discrete peaks and V-A complex values of the electrophysiological responses to the speech stimulus were measured and compared between the NH and HI populations. This paper delineates a comprehensive methodological approach for the development of Hindi speech stimuli and the recording of ABRs to speech. The acoustic characteristics of the stimulus |da| were faithfully represented at the brainstem level in normal-hearing adults, and there was a statistically significant difference between NH and HI individuals. This suggests that the spABR offers an opportunity to segregate normal speech encoding from abnormal speech processing at the subcortical level, which implies that…
Bolders, Anna C; Band, Guido P H; Stallen, Pieter Jan M
Mood has been shown to influence cognitive performance. However, little is known about the influence of mood on sensory processing, specifically in the auditory domain. With the current study, we sought to investigate how auditory processing of neutral sounds is affected by the mood state of the listener. This was tested in two experiments by measuring masked auditory detection thresholds before and after a standard mood-induction procedure. In the first experiment (N = 76), mood was induced by imagining a mood-appropriate event combined with listening to mood-inducing music. In the second experiment (N = 80), imagining was combined with affective picture viewing to exclude any possibility of the results being confounded by the acoustic properties of the music. In both experiments, the thresholds were determined by means of an adaptive staircase tracking method in a two-interval forced-choice task. Masked detection thresholds were compared between participants in four different moods (calm, happy, sad, and anxious), which enabled differentiation of mood effects along the dimensions of arousal and pleasure. Results of the two experiments were analyzed both separately and in a combined analysis. The first experiment showed that, while there was no impact of pleasure level on the masked threshold, lower arousal was associated with a lower threshold (higher masked sensitivity). However, as indicated by an interaction effect between experiment and arousal, arousal had a different effect on the threshold in Experiment 2, which showed a trend for arousal in the opposite direction. These results show that the effect of arousal on masked auditory sensitivity may depend on the modality of the mood-inducing stimuli. As clear conclusions regarding the genuineness of the arousal effect on the masked threshold cannot be drawn, suggestions for further research that could clarify this issue are provided.
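The adaptive staircase procedure mentioned in the Methods can be illustrated with a minimal simulation of a 2-down/1-up track in a two-interval forced-choice task, which converges near the 70.7%-correct point of the psychometric function. The hard-threshold listener model, starting level, step size, and reversal count below are illustrative assumptions, not the study's actual parameters:

```python
import random

def staircase_2ifc(true_threshold_db, start_db=40.0, step_db=2.0,
                   n_reversals=8, seed=1):
    """Simulate a 2-down/1-up adaptive staircase in a two-interval
    forced-choice task and return the mean of the last reversals.

    Listener model (hypothetical): detects the target whenever the level
    is at or above threshold, otherwise guesses between the two intervals.
    """
    rng = random.Random(seed)
    level, correct_streak, direction = start_db, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = level >= true_threshold_db or rng.random() < 0.5
        if correct:
            correct_streak += 1
            if correct_streak == 2:        # 2-down rule: lower the level
                correct_streak = 0
                if direction == +1:        # downturn after climbing = reversal
                    reversals.append(level)
                direction = -1
                level -= step_db
        else:                              # 1-up rule: raise the level
            correct_streak = 0
            if direction == -1:            # upturn after descending = reversal
                reversals.append(level)
            direction = +1
            level += step_db
    last = reversals[-6:]
    return sum(last) / len(last)           # threshold estimate

est = staircase_2ifc(true_threshold_db=25.0)
```

With a fixed seed the track descends deterministically to the threshold region and then oscillates around it, so the reversal average lands close to the simulated threshold.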
Khullar, Shilpa; Sood, Archana; Sood, Sanjay
There has been a manifold increase in the number of mobile phone users throughout the world, with the current number of users exceeding 2 billion. However, this advancement in technology, like many others, is accompanied by a progressive increase in the frequency and intensity of electromagnetic waves without consideration of the health consequences. The aim of our study was to advance our understanding of the potential adverse effects of GSM mobile phones on auditory brainstem responses (ABRs). Sixty subjects were selected for the study and divided into three groups of 20 each based on their usage of mobile phones. Their ABRs were recorded and analysed for the latencies of waves I-V as well as the interpeak latencies I-III, I-V, and III-V (in ms). Results revealed no significant difference in ABR parameters between group A (the control group) and group B (subjects using mobile phones for a maximum of 30 min/day for 5 years). However, the wave latencies were significantly prolonged in group C (subjects using mobile phones for 10 years, for a maximum of 30 min/day) compared with the control group. Based on our findings we concluded that long-term exposure to mobile phones may affect conduction in the peripheral portion of the auditory pathway. However, more research needs to be done to study the long-term effects of mobile phones, particularly of newer technologies like smartphones and 3G.
Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui
Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early-deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early-deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early-deaf subjects. In this study, 41 early-deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness …) participated. Deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early-deaf subjects and that these areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may…
Spanos, Nicholas P.; And Others
The effects of several attitudinal, cognitive-skill, and personality variables on response to auditory and visual hallucination suggestions in hypnotic subjects are assessed. Cooperative attitudes toward hypnosis and involvement in everyday imaginative activities (absorption) correlated with response to auditory and visual hallucination…
Källstrand, Johan; Olsson, Olle; Nehlstedt, Sara Fristedt; Sköld, Mia Ling; Nielzén, Sören
Abnormal auditory information processing has been reported in individuals with autism spectrum disorders (ASD). In the present study auditory processing was investigated by recording auditory brainstem responses (ABRs) elicited by forward masking in adults diagnosed with Asperger syndrome (AS). Sixteen AS subjects were included in the forward masking experiment and compared to three control groups consisting of healthy individuals (n = 16), schizophrenic patients (n = 16), and attention deficit hyperactivity disorder patients (n = 16), respectively, of matching age and gender. The results showed that the AS subjects exhibited abnormally low activity in the early part of their ABRs that distinctly separated them from the three control groups. Specifically, wave III amplitudes were significantly lower in the AS group than in all the control groups in the forward masking condition (P < 0.005), which was not the case in the baseline condition. Thus, electrophysiological measurement of ABRs to complex sound stimuli (e.g., forward masking) may lead to a better understanding of the underlying neurophysiology of AS. Future studies may further point to specific ABR characteristics in AS individuals that separate them from individuals diagnosed with other neurodevelopmental diseases. PMID:20628629
Lampar, Alexa; Lange, Kathrin
Temporal-cuing studies show faster responding to stimuli at an attended versus an unattended time point. Whether the mechanisms involved in this temporal orienting of attention are located early or late in the processing stream has not been answered unequivocally. To address this question, we measured event-related potentials in two versions of an auditory temporal-cuing task: stimuli at the uncued time point either required a response (Experiment 1) or did not (Experiment 2). In both tasks, attention was oriented to the cued time point, but attention could be selectively focused on the cued time point only in Experiment 2. In both experiments, temporal orienting was associated with a late positivity in the time range of the P3. An early enhancement in the time range of the auditory N1 was observed only in Experiment 2. Thus, temporal attention improves auditory processing at early sensory levels only when it can be focused selectively.
Sheelu S Siddiqi
Objective: Diabetes mellitus (DM) causes pathophysiological changes in multiple organ systems. With evoked potential techniques, the brainstem auditory response represents a simple procedure to detect both acoustic nerve and central nervous system pathway damage. The objective was to find evidence of central neuropathy in diabetes patients by analyzing the brainstem electric response audiometry obtained by auditory evoked potentials, to quantify the characteristics of the auditory brainstem response in long-standing diabetes, and to study the utility of auditory evoked potentials in detecting the type, site, and nature of lesions. Design: A total of 25 Type-2 DM patients [13 (52%) males and 12 (48%) females] with duration of diabetes over 5 years and aged over 30 years were studied. Brainstem evoked response audiometry (BERA) was performed with the universal smart box (manual version 2.0) at 70, 80, and 90 dB. The wave latency pattern and interpeak latencies were estimated and compared with 25 healthy controls [17 (68%) males and 8 (32%) females]. Result: In Type-2 DM, the BERA study revealed that wave III, representing the superior olivary complex, had a latency of 3.99 ± 0.24 ms at 80 dB (P < 0.001) and 3.92 ± 0.28 ms at 90 dB (P < 0.001) compared with controls. The latency of wave III was delayed by 0.39, 0.42, and 0.42 ms at 70, 80, and 90 dB, respectively. The absolute latency of wave V, representing the inferior colliculus, was 6.05 ± 0.27 ms at 70 dB (P < 0.001), 5.98 ± 0.27 ms at 80 dB (P < 0.001), and 6.02 ± 0.30 ms at 90 dB (P < 0.002) compared with controls. The latency of wave V was delayed by 0.48, 0.47, and 0.50 ms at 70, 80, and 90 dB, respectively. Interpeak latencies I-III were 2.33 ± 0.22 ms at 70 dB (P < 0.001), 2.39 ± 0.26 ms at 80 dB (P < 0.001), and 2.47 ± 0.25 ms at 90 dB (P < 0.001) when compared with controls. Interpeak latencies I-V were 4.45 ± 0.29 ms at 70 dB (P < 0.001), 4.39 ± 0.34 ms at 80 dB (P < 0.001), and 4.57 ± 0.31 ms at 90 dB (P < 0.001) compared with controls. Out of 25 Type-2 DM, 13 (52
Engineer, Crystal T; Rahebi, Kimiya C; Buell, Elizabeth P; Fink, Melyssa K; Kilgard, Michael P
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. Copyright © 2015 Elsevier B.V. All rights reserved.
Jenner, JA; van de Willige, G
Objective: Early intervention in psychosis is considered important in relapse prevention. The limited results of monotherapies have prompted the development of multimodular programmes. The present study tests the feasibility and effectiveness of HIT, an integrative early intervention treatment for auditory hallucinations.
Background: Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants took part in a "silent" sparse temporal event-related fMRI study. In the first (visual control habituation) phase they were presented with briefly flashing red visual stimuli. In the second (auditory control habituation) phase they heard brief telephone ringing. In the third (conditioning) phase, we coincidently presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results: During unpaired visual presentations (preceding and following the paired presentations) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion: These results demonstrate involvement of multisensory and auditory association areas in the perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we are able
Bruns, Patrick; Liebnau, Ronja; Röder, Brigitte
In the ventriloquism aftereffect, brief exposure to a consistent spatial disparity between auditory and visual stimuli leads to a subsequent shift in subjective sound localization toward the positions of the visual stimuli. Such rapid adaptive changes probably play an important role in maintaining the coherence of spatial representations across the various sensory systems. In the research reported here, we used event-related potentials (ERPs) to identify the stage in the auditory processing stream that is modulated by audiovisual discrepancy training. Both before and after exposure to synchronous audiovisual stimuli that had a constant spatial disparity of 15°, participants reported the perceived location of brief auditory stimuli that were presented from central and lateral locations. In conjunction with a sound localization shift in the direction of the visual stimuli (the behavioral ventriloquism aftereffect), auditory ERPs as early as 100 ms poststimulus (N100) were systematically modulated by the disparity training. These results suggest that cross-modal learning was mediated by a relatively early stage in the auditory cortical processing stream.
Puschmann, Sebastian; Özyurt, Jale; Uppenkamp, Stefan; Thiel, Christiane M
Previous work compellingly shows the existence of functional and structural differences in human auditory cortex related to superior musical abilities observed in professional musicians. In this study, we investigated the relationship between musical abilities and auditory cortex activity in normal listeners who had not received a professional musical education. We used functional MRI to measure auditory cortex responses related to auditory stimulation per se and the processing of pitch and pitch changes, which represents a prerequisite for the perception of musical sequences. Pitch-evoked responses in the right lateral portion of Heschl's gyrus were correlated positively with the listeners' musical abilities, which were assessed using a musical aptitude test. In contrast, no significant relationship was found for noise stimuli, lacking any musical information, and for responses induced by pitch changes. Our results suggest that superior musical abilities in normal listeners are reflected by enhanced neural encoding of pitch information in the auditory system.
Rønne, Filip Munch; Gøtsche-Rasmussen, Kristian
This study investigates the frequency-specific contribution of chirp stimuli to the auditory brainstem response (ABR). Frequency-rising chirps were designed to compensate for the cochlear traveling wave delay, and lead to larger wave-V amplitudes than click stimuli as more auditory nerve fibr...
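The specific chirp design used in that study is not given here; as a hedged illustration only, a generic rising (logarithmic) sweep in which low frequencies lead high frequencies can be generated by integrating an exponential instantaneous-frequency trajectory. All parameter values below are assumptions for demonstration, not values from the study.

```python
import numpy as np

def rising_chirp(f0, f1, duration, fs):
    """Logarithmic upward frequency sweep: low frequencies are emitted first,
    loosely mimicking chirps that compensate the cochlear traveling-wave delay.
    (The exact delay model used in the cited study is not specified here.)
    """
    t = np.arange(int(duration * fs)) / fs
    # Instantaneous frequency sweeps exponentially from f0 to f1.
    k = (f1 / f0) ** (1.0 / duration)
    inst_freq = f0 * k ** t
    # Phase is the cumulative integral of instantaneous frequency.
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.sin(phase)

# Illustrative 10 ms stimulus sweeping 100 Hz -> 10 kHz at 48 kHz sampling.
stim = rising_chirp(100.0, 10000.0, 0.01, 48000)
```

`scipy.signal.chirp` offers equivalent ready-made sweeps; the explicit phase integration above just makes the low-before-high timing of the stimulus visible.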
Bonte, Milene; Hausfeld, Lars; Scharke, Wolfgang; Valente, Giancarlo; Formisano, Elia
Selective attention to relevant sound properties is essential for everyday listening situations. It enables the formation of different perceptual representations of the same acoustic input and is at the basis of flexible and goal-dependent behavior. Here, we investigated the role of the human auditory cortex in forming behavior-dependent representations of sounds. We used single-trial fMRI and analyzed cortical responses collected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by different speakers (boy, girl, male) and performed a delayed-match-to-sample task on either speech sound or speaker identity. Univariate analyses showed a task-specific activation increase in the right superior temporal gyrus/sulcus (STG/STS) during speaker categorization and in the right posterior temporal cortex during vowel categorization. Beyond regional differences in activation levels, multivariate classification of single trial responses demonstrated that the success with which single speakers and vowels can be decoded from auditory cortical activation patterns depends on task demands and subject's behavioral performance. Speaker/vowel classification relied on distinct but overlapping regions across the (right) mid-anterior STG/STS (speakers) and bilateral mid-posterior STG/STS (vowels), as well as the superior temporal plane including Heschl's gyrus/sulcus. The task dependency of speaker/vowel classification demonstrates that the informative fMRI response patterns reflect the top-down enhancement of behaviorally relevant sound representations. Furthermore, our findings suggest that successful selection, processing, and retention of task-relevant sound properties relies on the joint encoding of information across early and higher-order regions of the auditory cortex.
Dunn, Walter; Rassovsky, Yuri; Wynn, Jonathan; Wu, Allan D; Iacoboni, Marco; Hellemann, Gerhard; Green, Michael F
Transcranial direct current stimulation (tDCS) was applied bilaterally over the auditory cortex in 12 schizophrenia patients to modulate early auditory processing. Performance on a tone discrimination task (tone-matching task, TMT) and auditory mismatch negativity were assessed after counterbalanced anodal, cathodal, and sham tDCS. Cathodal stimulation improved TMT performance. There was also a stimulation condition by negative symptom interaction, in which greater negative symptoms were associated with better TMT performance after anodal tDCS.
Background: Prepulse inhibition (PPI) of the startle response is an important tool to investigate the biology of schizophrenia. PPI is usually observed by use of a startle reflex, such as blinking following an intense sound. A similar phenomenon has not been reported for cortical responses. Results: In 12 healthy subjects, change-related cortical activity in response to an abrupt increase of sound pressure by 5 dB above a background of 65 dB SPL (the test stimulus) was measured using magnetoencephalography. The test stimulus evoked a clear cortical response peaking at around 130 ms (Change-N1m). In Experiment 1, the effects of the intensity of a prepulse (0.5-5 dB) on the test response were examined using a paired stimulation paradigm. In Experiment 2, the effects of the interval between the prepulse and test stimulus were examined using interstimulus intervals (ISIs) of 50-350 ms. When the test stimulus was preceded by the prepulse, the Change-N1m was more strongly inhibited by a stronger prepulse (Experiment 1) and a shorter-ISI prepulse (Experiment 2). In addition, the amplitude of the test Change-N1m correlated positively with both the amplitude of the prepulse-evoked response and the degree of inhibition, suggesting that subjects who are more sensitive to auditory change are more strongly inhibited by the prepulse. Conclusions: Since Change-N1m is easy to measure and control, it would be a valuable tool to investigate mechanisms of sensory gating or the biology of certain mental diseases such as schizophrenia.
Farahani, Ehsan Darestani; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid
Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses, with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency acoustically amplitude modulated signals. In order to reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white noise stimuli, amplitude modulated at 4, 20, 40, or 80Hz. The independent components that exhibited a significant ASSR were clustered among all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80Hz amplitude modulated noises. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources in response to different modulation frequencies suggested that the identified brain sources in the brainstem, the left and the right auditory cortex show a higher responsiveness to 40Hz than to the other modulation frequencies. Copyright © 2017 Elsevier Inc. All rights reserved.
Nir, Yuval; Vyazovskiy, Vladyslav V; Cirelli, Chiara; Banks, Matthew I; Tononi, Giulio
Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic "gate," which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, non-rapid eye movement (NREM), and rapid eye movement (REM) sleep. We further compared responses during sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13-20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas.
Bianca C R de Castro; Heraldo L Guida; Adriano L Roque; Luiz Carlos de Abreu; Celso Ferreira; Renata S Marcomini; Carlos B M Monteiro; Fernando Adami; Viviane F Ribeiro; Fernando L A Fonseca; Vilma N S Santos; Vitor E Valenti
...) during the musical auditory stimulation. The objective is to investigate the acute effects of classic musical auditory stimulation on the geometric indexes of HRV in women in response to the postural change maneuver (PCM...
Kurita, Toshiharu; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Hirosawa, Tetsu; Furutani, Naoki; Higashida, Haruhiro; Ikeda, Takashi; Mutou, Kouhei; Asada, Minoru; Minabe, Yoshio
Autism spectrum disorder (ASD) has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical interhemispheric circuitry. In the context of ASD, alterations in both peripheral and central auditory processes have also attracted a great deal of interest because these changes appear to represent pathophysiological processes; therefore, many prior studies have focused on atypical auditory responses in ASD. The auditory evoked field (AEF), recorded by magnetoencephalography, and the synchronization of these processes between right and left hemispheres was recently suggested to reflect various cognitive abilities in children. However, to date, no previous study has focused on AEF synchronization in ASD subjects. To assess global coordination across spatially distributed brain regions, the analysis of Omega complexity from multichannel neurophysiological data was proposed. Using Omega complexity analysis, we investigated the global coordination of AEFs in 3–8-year-old typically developing (TD) children (n = 50) and children with ASD (n = 50) in 50-ms time-windows. Children with ASD displayed significantly higher Omega complexities compared with TD children in the time-window of 0–50 ms, suggesting lower whole brain synchronization in the early stage of the P1m component. When we analyzed the left and right hemispheres separately, no significant differences in any time-windows were observed. These results suggest lower right-left hemispheric synchronization in children with ASD compared with TD children. Our study provides new evidence of aberrant neural synchronization in young children with ASD by investigating auditory evoked neural responses to the human voice. PMID:27074011
Jaspers-Fayer, Fern; Ertl, Matthias; Leicht, Gregor; Leupelt, Anne; Mulert, Christoph
Event-related potential (ERP) studies in the visual domain often report an emotion-evoked early posterior negativity (EPN). Studies in the auditory domain have recently shown a similar component. Little source localization has been done on the visual EPN, and no source localization has been done on the auditory EPN. The aim of the current study was to identify the neural generators of the auditory EPN using EEG-fMRI single-trial coupling. Data were recorded from 19 subjects who completed three auditory choice reaction tasks: (1) a control task using neutral tones; (2) a prosodic emotion task involving the categorization of syllables; and (3) a semantic emotion task involving the categorization of words. The waveforms of the emotion tasks diverged from the neutral task over parietal scalp during a very early time window (132-156 ms) and later during a more traditional EPN time window (252-392 ms). In the EEG-fMRI analyses, the variance of the voltage in the earlier time window was correlated with activity in the medial prefrontal cortex, but only in the word task. In the EEG-fMRI analyses of the traditional EPN time window both emotional tasks covaried with activity in the left superior parietal lobule. Our results support previous parietal cortex source localization findings for the visual EPN, and suggest enhanced selective attention to emotional stimuli during the EPN time window. Copyright © 2012 Elsevier Inc. All rights reserved.
Schwarz, D W F; Taylor, P
Binaural beat sensations depend upon a central combination of two different temporally encoded tones, separately presented to the two ears. We tested the feasibility to record an auditory steady state evoked response (ASSR) at the binaural beat frequency in order to find a measure for temporal coding of sound in the human EEG. We stimulated each ear with a distinct tone, both differing in frequency by 40Hz, to record a binaural beat ASSR. As control, we evoked a beat ASSR in response to both tones in the same ear. We band-pass filtered the EEG at 40Hz, averaged with respect to stimulus onset and compared ASSR amplitudes and phases, extracted from a sinusoidal non-linear regression fit to a 40Hz period average. A 40Hz binaural beat ASSR was evoked at a low mean stimulus frequency (400Hz) but became undetectable beyond 3kHz. Its amplitude was smaller than that of the acoustic beat ASSR, which was evoked at low and high frequencies. Both ASSR types had maxima at fronto-central leads and displayed a fronto-occipital phase delay of several ms. The dependence of the 40Hz binaural beat ASSR on stimuli at low, temporally coded tone frequencies suggests that it may objectively assess temporal sound coding ability. The phase shift across the electrode array is evidence for more than one origin of the 40Hz oscillations. The binaural beat ASSR is an evoked response, with novel diagnostic potential, to a signal that is not present in the stimulus, but generated within the brain.
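The amplitude/phase extraction step described above (a sinusoidal regression fit to a 40 Hz period average) can be sketched in a few lines. This is an assumed, generic reconstruction of that analysis step, not the authors' code; the sampling rate and synthetic trace are illustrative.

```python
import numpy as np

def assr_fit(period_avg, fs, f=40.0):
    """Least-squares fit of A*sin(2*pi*f*t + phi) to a period-averaged trace.

    Rewrites the sinusoid as a*sin + b*cos (linear in a, b), solves by
    ordinary least squares, and converts (a, b) back to amplitude and phase.
    """
    t = np.arange(len(period_avg)) / fs
    X = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
    a, b = np.linalg.lstsq(X, period_avg, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a)

# Synthetic check: a pure 40 Hz sinusoid with known amplitude and phase.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
trace = 2.5 * np.sin(2 * np.pi * 40 * t + 0.3)
amp, phase = assr_fit(trace, fs)  # recovers amplitude 2.5, phase 0.3 rad
```

Comparing the fitted phase across electrodes is how a fronto-occipital delay of a few milliseconds, as reported above, would be quantified (a 1 ms delay at 40 Hz corresponds to 0.08π rad).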
Wolf, Sarah E; Swaddle, John P; Cristol, Daniel A; Buchser, William J
...) by studying their auditory brainstem responses (ABRs). Zebra finches exposed to mercury exhibited elevated hearing thresholds, decreased amplitudes, and longer latencies in the ABR, the first evidence of mercury-induced hearing impairment in birds...
Keesling, Devan A; Parker, Jordan Paige; Sanchez, Jason Tait
Chirp-evoked auditory brainstem responses (ABRs) yield a larger wave V amplitude at low intensity levels than traditional broadband click stimuli, providing a reliable estimation of hearing sensitivity...
The topography of acoustic response characteristics in the auditory cortex (AC) of the Kunming (KM) mouse was examined using microelectrode recording techniques. Based on best-frequency (BF) maps, both the primary auditory field (AI) and the anterior auditory field (AAF) are tonotopically organized, with counter-running frequency gradients. Within an isofrequency stripe, the width of the frequency-threshold curves of single neurons increases, and the minimum threshold (MT) decreases, towards more ventral locations. BFs in AI and AAF range from 4 to 38 kHz. Auditory neurons with BFs above 40 kHz are located at the rostrodorsal part of the AC. The findings suggest that the KM mouse is a suitable model for auditory research.
Fuchigami, Tatsuo; Okubo, Osami; Fujita, Yukihiko; Kohira, Ryutaro; Arakawa, Chikako; Endo, Ayumi; Haruyama, Wakako; Imai, Yuki; Mugishima, Hideo
To evaluate auditory spatial cognitive function, age correlations for event-related potentials (ERPs) in response to auditory stimuli with a Doppler effect were studied in normal children. A sound with a Doppler effect is perceived as a moving audio image. A total of 99 normal subjects (age range, 4-21 years) were tested. In the task-relevant oddball paradigm, P300 and key-press reaction time were elicited using auditory stimuli (1000 Hz fixed and enlarged tones with a Doppler effect). From the age of 4 years, the P300 latency for the enlarged tone with a Doppler effect shortened more rapidly with age than did the P300 latency for tone-pips, and the latencies for the different conditions became similar towards the late teens. The P300 of auditory stimuli with a Doppler effect may be used to evaluate auditory spatial cognitive function in children.
Selective auditory attention is essential for human listeners to be able to communicate in multi-source environments. Selective attention is known to modulate the neural representation of the auditory scene, boosting the representation of a target sound relative to the background, but the strength of this modulation, and the mechanisms contributing to it, are not well understood. Here, listeners performed a behavioral experiment demanding sustained, focused spatial auditory attention while we measured cortical responses using electroencephalography (EEG). We presented three concurrent melodic streams; listeners were asked to attend and analyze the melodic contour of one of the streams, randomly selected from trial to trial. In a control task, listeners heard the same sound mixtures, but performed the contour judgment task on a series of visual arrows, ignoring all auditory streams. We found that the cortical responses could be fit as a weighted sum of event-related potentials evoked by the stimulus onsets in the competing streams. The weighting of a given stream was roughly 10 dB higher when it was attended compared to when another auditory stream was attended; during the visual task, the auditory gains were intermediate. We then used a template-matching classification scheme to classify single-trial EEG results. We found that in all subjects, we could determine which stream the subject was attending significantly better than by chance. By directly quantifying the effect of selective attention on auditory cortical responses, these results reveal that focused auditory attention both suppresses the response to an unattended stream and enhances the response to an attended stream. The single-trial classification results add to the growing body of literature suggesting that auditory attentional modulation is sufficiently robust that it could be used as a control mechanism in brain-computer interfaces.
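The template-matching classification scheme mentioned above can be sketched minimally: score each single trial against a per-condition template and take the best match. The correlation score, toy templates, and variable names below are assumptions for illustration, not details from the study.

```python
import numpy as np

def classify_trial(trial, templates):
    """Assign a single EEG trial to the condition whose template matches best.

    trial: 1-D trace; templates: dict mapping condition label -> 1-D template.
    Pearson correlation is used as the match score (one simple choice).
    """
    scores = {label: np.corrcoef(trial, tmpl)[0, 1]
              for label, tmpl in templates.items()}
    return max(scores, key=scores.get)

# Toy demonstration with two distinct "attended stream" templates.
t = np.linspace(0.0, 1.0, 500)
templates = {"stream_A": np.sin(2 * np.pi * 5 * t),
             "stream_B": np.sin(2 * np.pi * 9 * t)}
rng = np.random.default_rng(0)
trial = templates["stream_A"] + 0.5 * rng.standard_normal(t.size)
label = classify_trial(trial, templates)  # expected: "stream_A"
```

In practice the templates would be built from held-out trials of each attention condition, and chance level assessed by permuting labels.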
Voss, Henning U.; Salgado-Commissariat, Delanthi; Helekar, Santosh A.
How well a songbird learns a song appears to depend on the formation of a robust auditory template of its tutor's song. Using functional magnetic resonance neuroimaging we examine auditory responses in two groups of zebra finches that differ in the type of song they sing after being tutored by birds producing stuttering-like syllable repetitions in their songs. We find that birds that learn to produce the stuttered syntax show attenuated blood oxygenation level-dependent (BOLD) responses to t...
Hoffmann, Susanne; Firzlaff, Uwe; Radtke-Schuller, Susanne; Schwellnus, Britta; Schuller, Gerd
The mammalian auditory cortex can be subdivided into various fields characterized by neurophysiological and neuroarchitectural properties and by connections with different nuclei of the thalamus. Besides the primary auditory cortex, echolocating bats have cortical fields for the processing of temporal and spectral features of the echolocation pulses. This paper reports on location, neuroarchitecture and basic functional organization of the auditory cortex of the microchiropteran bat Phyllostomus discolor (family: Phyllostomidae). The auditory cortical area of P. discolor is located at parieto-temporal portions of the neocortex. It covers a rostro-caudal range of about 4800 μm and a medio-lateral distance of about 7000 μm on the flattened cortical surface. The auditory cortices of ten adult P. discolor were electrophysiologically mapped in detail. Responses of 849 units (single neurons and neuronal clusters up to three neurons) to pure tone stimulation were recorded extracellularly. Cortical units were characterized and classified depending on their response properties such as best frequency, auditory threshold, first spike latency, response duration, width and shape of the frequency response area and binaural interactions. Based on neurophysiological and neuroanatomical criteria, the auditory cortex of P. discolor could be subdivided into anterior and posterior ventral fields and anterior and posterior dorsal fields. The representation of response properties within the different auditory cortical fields was analyzed in detail. The two ventral fields were distinguished by their tonotopic organization with opposing frequency gradients. The dorsal cortical fields were not tonotopically organized but contained neurons that were responsive to high frequencies only. The auditory cortex of P. discolor resembles the auditory cortex of other phyllostomid bats in size and basic functional organization. The tonotopically organized posterior ventral field might represent the
Bahmer, Andreas; Peter, Otto; Baumann, Uwe
Electrical auditory brainstem responses (E-ABRs) of subjects with cochlear implants are used for monitoring the physiologic responses of early signal processing in the auditory system. Additionally, E-ABR measurements allow the diagnosis of retro-cochlear diseases. Therefore, E-ABR should be available in every cochlear implant center as a diagnostic tool. In this paper, we introduce a low-cost setup designed to perform an E-ABR as well as a conventional ABR for research purposes. The distributable form was developed with Matlab and the Matlab Compiler (The Mathworks Inc.). For the ABR, only a PC with a soundcard, conventional system headphones, and an EEG pre-amplifier are necessary; for the E-ABR, in addition, an interface to the cochlear implant is required. For our purposes, we implemented an interface for the Combi 40+/Pulsar implant (MED-EL, Innsbruck).
Hertrich, Ingo; Mathiak, Klaus; Lutzenberger, Werner; Ackermann, Hermann
Cross-modal fusion phenomena suggest specific interactions of auditory and visual sensory information both within the speech and nonspeech domains. Using whole-head magnetoencephalography, this study recorded M50 and M100 fields evoked by ambiguous acoustic stimuli that were visually disambiguated to perceived /ta/ or /pa/ syllables. As in natural speech, visual motion onset preceded the acoustic signal by 150 msec. Control conditions included visual and acoustic nonspeech signals as well as visual-only and acoustic-only stimuli. (a) Both speech and nonspeech motion yielded a consistent attenuation of the auditory M50 field, suggesting a visually induced "preparatory baseline shift" at the level of the auditory cortex. (b) Within the temporal domain of the auditory M100 field, visual speech and nonspeech motion gave rise to different response patterns (nonspeech: M100 attenuation; visual /pa/: left-hemisphere M100 enhancement; /ta/: no effect). (c) These interactions could be further decomposed using a six-dipole model. One of these three pairs of dipoles (V270) was fitted to motion-induced activity at a latency of 270 msec after motion onset, that is, the time domain of the auditory M100 field, and could be attributed to the posterior insula. This dipole source responded to nonspeech motion and visual /pa/, but was found suppressed in the case of visual /ta/. Such a nonlinear interaction might reflect the operation of a binary distinction between the marked phonological feature "labial" versus its underspecified competitor "coronal." Thus, visual processing seems to be shaped by linguistic data structures even prior to its fusion with the auditory information channel.
Swanepoel, DeWet; Erasmus, Hettie
The auditory steady-state response (ASSR) has gained popularity as an alternative technique for objective audiometry, but its use in less severe degrees of hearing loss has been questioned. The aim of this study was to investigate the usefulness of the ASSR in estimating moderate degrees of hearing loss. Seven subjects (12 ears), aged between 15 and 18 years, with moderate sensorineural hearing loss were enrolled in the study. Forty-eight behavioural and ASSR thresholds were obtained across the frequencies of 0.5, 1, 2, and 4 kHz. ASSR thresholds were determined using a dichotic multiple frequency recording technique. Mean threshold differences varied between 2 and 8 dB (+/-7-10 dB SD) across frequencies. The highest difference and variability were recorded at 0.5 kHz. The frequencies 1-4 kHz also revealed significantly better correlations (0.74-0.88) compared to 0.5 kHz (0.31). Comparing correlation coefficients for behavioural thresholds below 60 dB with those at 60 dB and above revealed a significant difference. Eighty-six percent of ASSR thresholds corresponded within 5 dB of moderate to severe behavioural thresholds, compared to only 29% for mild to moderate thresholds in this study. The results confirm that the ASSR can reliably estimate behavioural thresholds of 60 dB and higher, but due to increased variability, caution is recommended when estimating behavioural thresholds of less than 60 dB, especially at 0.5 kHz.
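The agreement statistics this kind of study reports (mean threshold difference, its SD, Pearson correlation, and the percentage of ASSR thresholds within 5 dB of behavioural thresholds) can be computed as below. The threshold values here are entirely hypothetical, made up for illustration; they are not the study's data:

```python
import numpy as np

# Hypothetical ASSR vs. behavioural thresholds (dB HL) at one frequency.
behavioural = np.array([55, 60, 65, 70, 58, 62, 68, 72, 50, 66, 74, 61], float)
assr        = np.array([60, 63, 70, 75, 65, 65, 70, 80, 58, 70, 78, 68], float)

diff = assr - behavioural
mean_diff = diff.mean()                     # mean threshold difference (dB)
sd_diff = diff.std(ddof=1)                  # its standard deviation
r = np.corrcoef(behavioural, assr)[0, 1]    # Pearson correlation

# Agreement criterion from the abstract: % of ASSR thresholds within 5 dB.
within_5 = np.mean(np.abs(diff) <= 5) * 100
```

Evaluating these per frequency (and separately above/below 60 dB, as the study does) is what reveals where objective ASSR estimates can be trusted and where caution is needed.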
Knudsen, E I
Auditory and visual space are mapped in the optic tectum of the barn owl. Normally, these maps of space are in close mutual alignment. Ear plugs inserted unilaterally in young barn owls disrupted the binaural cues that constitute the basis of the auditory map. Yet when recordings were made from the tecta of these birds as adults, the auditory and visual maps were in register. When the ear plugs were removed from these adult birds and binaural balance was restored, the auditory maps were shifted substantially relative to the visual maps and relative to the physical borders of the tecta. These results demonstrate that the neural connectivity that gives rise to the auditory map of space in the optic tectum can be modified by experience in such a way that spatial alignment between sensory modalities is maintained.
Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Kilgard, Michael P.
Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA exposed rats and document the effect of extensive speech training on auditory cortex responses. VPA exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field (AAF) responses compared to untrained VPA exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes.
Roslyn Holly Fitch
Most researchers in the field of neural plasticity are familiar with the "Kennard Principle," which purports a positive relationship between age at brain injury and severity of subsequent deficits (plateauing in adulthood). As an example, a child with left hemispherectomy can recover seemingly normal language, while an adult with focal injury to sub-regions of left temporal and/or frontal cortex can suffer dramatic and permanent language loss. Here we present data regarding the impact of early brain injury in rat models as a function of type and timing, measuring long-term behavioral outcomes via auditory discrimination tasks varying in temporal demand. These tasks were created to model (in rodents) aspects of human sensory processing that may correlate – both developmentally and functionally – with typical and atypical language. We found that bilateral focal lesions to the cortical plate in rats during active neuronal migration led to worse auditory outcomes than comparable lesions induced after cortical migration was complete. Conversely, unilateral hypoxic-ischemic injuries (similar to those seen in premature infants and term infants with birth complications) led to permanent auditory processing deficits when induced at a neurodevelopmental point comparable to human "term," but only transient deficits (undetectable in adulthood) when induced in a "preterm" window. Convergent evidence suggests that regardless of when or how disruption of early neural development occurs, the consequences may be particularly deleterious to rapid auditory processing outcomes when they trigger developmental alterations that extend into subcortical structures (i.e., lower sensory processing stations). Collective findings hold implications for the study of behavioral outcomes following early brain injury as well as genetic/environmental disruption, and are relevant to our understanding of the neurologic risk factors underlying developmental language disability in
Donishi, T; Kimura, A; Imbe, H; Yokoi, I; Kaneoke, Y
Recent studies have highlighted cross-modal sensory modulations in the primary sensory areas of the cortex, suggesting that cross-modal sensory interactions occur at early stages in the hierarchy of sensory processing. Multi-modal sensory inputs from non-lemniscal thalamic nuclei and cortical inputs from the secondary sensory and association areas are considered responsible for these modulations. On the other hand, there is little evidence of cross-modal sensitivity in lemniscal thalamic nuclei. In the present study, we were interested in the possibility that somatosensory stimulation may affect auditory responses in the ventral division (MGV) of the medial geniculate nucleus (MG), a lemniscal thalamic nucleus considered to be dedicated to auditory uni-modal processing. Experiments were performed on anesthetized rats. Transcutaneous electrical stimulation of the hindpaw, which is thought to evoke nociception and seems unrelated to auditory processing, modulated unit discharges in response to auditory stimulation (noise bursts). The modulation was observed in the MGV and in non-lemniscal auditory thalamic nuclei such as the dorsal and medial divisions of the MG. The major effect of somatosensory stimulation was suppression. The most robust suppression was induced by electrical stimuli given simultaneously with noise bursts or preceding noise bursts by 10 to 20 ms. The results indicate that the lemniscal (MGV) and non-lemniscal auditory nuclei are subject to somatosensory influence. In everyday experience, intense somatosensory stimuli such as pain interrupt our ongoing hearing or interfere with clear recognition of sound. The modulation of lemniscal auditory responses by somatosensory stimulation may underlie such cross-modal disturbance of auditory perception as a form of cross-modal switching of attention.
Kayser, Jürgen; Tenke, Craig E; Kroppmann, Christopher J; Alschuler, Daniel M; Fekri, Shiva; Gil, Roberto; Jarskog, L Fredrik; Harkavy-Friedman, Jill M; Bruder, Gerard E
Existing 67-channel event-related potentials, obtained during recognition and working memory paradigms with words or faces, were used to examine early visual processing in schizophrenia patients prone to auditory hallucinations (AH, n = 26) or not (NH, n = 49) and healthy controls (HC, n = 46). Current source density (CSD) transforms revealed distinct, strongly left- (words) or right-lateralized (faces; N170) inferior-temporal N1 sinks (150 ms) in each group. N1 was quantified by temporal PCA of peak-adjusted CSDs. For words and faces in both paradigms, N1 was substantially reduced in AH compared with NH and HC, who did not differ from each other. The difference in N1 between AH and NH was not due to overall symptom severity or performance accuracy, with both groups showing comparable memory deficits. Our findings extend prior reports of reduced auditory N1 in AH, suggesting a broader early perceptual integration deficit that is not limited to the auditory modality.
Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses, we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements, and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences, and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high-order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high-order processing stations.
Efrati, Adi; Gutfreund, Yoram
The auditory space map in the optic tectum (OT) (also known as superior colliculus in mammals) relies on the tuning of neurons to auditory localization cues that correspond to specific sound source locations. This study investigates the effects of early auditory experiences on the neural representation of binaural auditory localization cues. Young barn owls were raised in continuous omnidirectional broadband noise from before hearing onset to the age of ∼ 65 days. Data from these birds were compared with data from age-matched control owls and from normal adult owls (>200 days). In noise-reared owls, the tuning of tectal neurons for interaural level differences and interaural time differences was broader than in control owls. Moreover, in neurons from noise-reared owls, the interaural level differences tuning was biased towards sounds louder in the contralateral ear. A similar bias appeared, but to a much lesser extent, in age-matched control owls and was absent in adult owls. To follow the recovery process from noise exposure, we continued to survey the neural representations in the OT for an extended period of up to several months after removal of the noise. We report that all the noise-rearing effects tended to recover gradually following exposure to a normal acoustic environment. The results suggest that deprivation from experiencing normal acoustic localization cues disrupts the maturation of the auditory space map in the OT.
Bailey, Jennifer A; Penhune, Virginia B
Behavioural and neuroimaging studies provide evidence for a possible "sensitive" period in childhood development during which musical training results in long-lasting changes in brain structure and auditory and motor performance. Previous work from our laboratory has shown that adult musicians who begin training before the age of 7 (early-trained; ET) perform better on a visuomotor task than those who begin after the age of 7 (late-trained; LT), even when matched on total years of musical training and experience. Two questions were raised regarding the findings from this experiment. First, would this group performance difference be observed using a more familiar, musically relevant task such as auditory rhythms? Second, would cognitive abilities mediate this difference in task performance? To address these questions, ET and LT musicians, matched on years of musical training, hours of current practice and experience, were tested on an auditory rhythm synchronization task. The task consisted of six woodblock rhythms of varying levels of metrical complexity. In addition, participants were tested on cognitive subtests measuring vocabulary, working memory and pattern recognition. The two groups of musicians differed in their performance of the rhythm task, such that the ET musicians were better at reproducing the temporal structure of the rhythms. There were no group differences on the cognitive measures. Interestingly, across both groups, individual task performance correlated with auditory working memory abilities and years of formal training. These results support the idea of a sensitive period during the early years of childhood for developing sensorimotor synchronization abilities via musical training.
Engineer, Navzer D; Percaccio, Cherie R; Pandya, Pritesh K; Moucha, Raluca; Rathbun, Daniel L; Kilgard, Michael P
Over the last 50 yr, environmental enrichment has been shown to generate more than a dozen changes in brain anatomy. The consequences of these physical changes on information processing have not been well studied. In this study, rats were housed in enriched or standard conditions either prior to or after reaching sexual maturity. Evoked potentials from awake rats and extracellular recordings from anesthetized rats were used to document responses of auditory cortex neurons. This report details several significant, new findings about the influence of housing conditions on the responses of rat auditory cortex neurons. First, enrichment dramatically increases the strength of auditory cortex responses. Tone-evoked potentials of enriched rats, for example, were more than twice the amplitude of rats raised in standard laboratory conditions. Second, cortical responses of both young and adult animals benefit from exposure to an enriched environment and are degraded by exposure to an impoverished environment. Third, housing condition resulted in rapid remodeling of cortical responses in <2 wk. Fourth, recordings made under anesthesia indicate that enrichment increases the number of neurons activated by any sound. This finding shows that the evoked potential plasticity documented in awake rats was not due to differences in behavioral state. Finally, enrichment made primary auditory cortex (A1) neurons more sensitive to quiet sounds, more selective for tone frequency, and altered their response latencies. These experiments provide the first evidence of physiologic changes in auditory cortex processing resulting from generalized environmental enrichment.
Demopoulos, Carly; Yu, Nina; Tripp, Jennifer; Mota, Nayara; Brandes-Aitken, Anne N.; Desai, Shivani S.; Hill, Susanna S.; Antovich, Ashley D.; Harris, Julia; Honma, Susanne; Mizuiri, Danielle; Nagarajan, Srikantan S.; Marco, Elysa J.
This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain.
Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti
To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception and intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination and forward digit span in implanted children exposed to music were equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination; sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.
Key, Alexandra P; Jones, Dorita; Peters, Sarika U
Auditory processing is an important component of cognitive development, and names are among the most frequently occurring receptive language stimuli. Although own name processing has been examined in infants and adults, surprisingly little data exist on responses to own name in children. The present ERP study examined spoken name processing in 32 children (M = 7.85 years) using a passive listening paradigm. Our results demonstrated that children differentiate their own and a close other's name from unknown names, as reflected by the enhanced parietal P300 response. The responses to own and close other names did not differ from each other. Repeated presentations of an unknown name did not result in the same familiarity as the known names. These results suggest that auditory ERPs to known/unknown names are a feasible means to evaluate complex auditory processing without the need for overt behavioral responses.
PURPOSE: to investigate the relationship between auditory brainstem responses and behavioral auditory processing evaluation. METHODS: the study was carried out with a group of 60 girls from Paraíba do Sul, aged nine to 12 years, with pure-tone thresholds within normal limits and type A tympanograms with acoustic reflexes present. The tests used for the behavioral auditory processing evaluation were: simplified auditory processing assessment, speech-in-noise test, alternating disyllables test, and non-verbal dichotic test. After the auditory processing evaluation, the children were divided into two groups, G1 (without auditory processing disorder) and G2 (with auditory processing disorder), and underwent auditory brainstem response testing. The parameters used to compare the two groups were: absolute latencies of waves I, III, and V; interpeak latencies I-III, I-V, and III-V; interaural difference of the I-V interpeak latency; and interaural difference of the wave V latency. RESULTS: statistically significant differences between groups G1 and G2 were found for the I-V interpeak latency in the left ear (p = 0.009), the interaural difference of the I-V interpeak latency (p = 0.020), and the right-to-left difference in the I-V interpeak latency (p = 0.025). CONCLUSION: a relationship between auditory brainstem responses and the behavioral auditory processing evaluation was found for the I-V interpeak latency of the left ear and the interaural difference of the I-V interpeak latency.
Williamson, Tanika T.; Zhu, Xiaoxia; Walton, Joseph P.; Frisina, Robert D.
The auditory function of the CBA/CaJ mouse strain is normal during the early phases of life and gradually declines over its lifespan, much like human age-related hearing loss (ARHL) but within the "time frame" of a mouse life cycle. This pattern of ARHL is similar to that of most humans: difficult to diagnose clinically at its onset and currently not treatable medically. To address the challenge of early diagnosis, we use CBA mice to analyze the initial stages and functional onset biomarkers of ARHL. The results from Auditory Brainstem Response (ABR) audiogram and Gap-in-noise (GIN) ABR tests were compared for two groups of mice of different ages, namely young adult and middle age. ABR peak components from the middle age group displayed minor changes in audibility but had a significantly higher prolonged peak latency and decreased peak amplitude in response to temporal gaps in comparison with the young adult group. The results for the younger subjects revealed gap thresholds and recovery rates that were comparable with previous studies of auditory neural gap coding. Our findings suggest that age-linked degeneration of the peripheral and brainstem auditory system begins in middle age, allowing for the possibility of preventative biomedical or hearing protection measures to be implemented in order to attenuate further damage to the auditory system attributable to ARHL.
Wong, Carmen; Chabot, Nicole; Kok, Melanie A; Lomber, Stephen G
Cross-modal reorganization following the loss of input from a sensory modality can recruit sensory-deprived cortical areas to process information from the remaining senses. Specifically, in early-deaf cats, the anterior auditory field (AAF) is unresponsive to auditory stimuli but can be activated by somatosensory and visual stimuli. Similarly, AAF neurons respond to tactile input in adult-deafened animals. To examine anatomical changes that may underlie this functional adaptation following early or late deafness, afferent projections to AAF were examined in hearing cats, and cats with early- or adult-onset deafness. Unilateral deposits of biotinylated dextran amine were made in AAF to retrogradely label cortical and thalamic afferents to AAF. In early-deaf cats, ipsilateral neuronal labeling in visual and somatosensory cortices increased by 329% and 101%, respectively. The largest increases arose from the anterior ectosylvian visual area and the anterolateral lateral suprasylvian visual area, as well as somatosensory areas S2 and S4. Consequently, labeling in auditory areas was reduced by 36%. The age of deafness onset appeared to influence afferent connectivity, with less marked differences observed in late-deaf cats. Profound changes to visual and somatosensory afferent connectivity following deafness may reflect corticocortical rewiring affording acoustically deprived AAF with cross-modal functionality.
Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.
Any change in the invariant aspects of the auditory environment is of potential importance. The human brain preattentively or automatically detects such changes. The mismatch negativity (MMN) of event-related potentials (ERPs) reflects this initial stage of auditory change detection. The origin of MMN is held to be cortical. The hippocampus is associated with a later generated P3a of ERPs, reflecting involuntary attention switches towards auditory changes that are high in magnitude. The evidence for this cortico-hippocampal dichotomy is scarce, however. To shed further light on this issue, auditory cortical and hippocampal-system (CA1, dentate gyrus, subiculum) local-field potentials were recorded in urethane-anesthetized rats. A rare tone differing in duration (deviant) was interspersed with a repeated tone (standard). Two standard-to-standard (SSI) and standard-to-deviant (SDI) intervals (200 ms vs. 500 ms) were applied in different combinations to vary the observability of responses resembling MMN (mismatch responses). Mismatch responses were observed at 51.5-89 ms with the 500-ms SSI coupled with the 200-ms SDI, but not with the three remaining combinations. Most importantly, the responses appeared in both the auditory-cortical and hippocampal locations. The findings suggest that the hippocampus may play a role in the (cortical) manifestation of MMN.
Bakker, Mirte J; Tijssen, Marina A J; van der Meer, Johan N; Koelman, Johannes H T M; Boer, Frits
Background: Young patients with anxiety disorders are thought to have a hypersensitive fear system, including alterations of the early sensorimotor processing of threatening information. However, there is equivocal support in auditory blink response studies for an enlarged auditory startle reflex
Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.
Francisco Jose Alvarez
Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study was to examine the effect of perinatal asphyxia on the auditory pathway by recording auditory brain responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1.3-day-old piglets by clamping both carotid arteries with vascular occluders for 30 minutes and lowering the fraction of inspired oxygen. We compared the auditory brain responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during the 6 h after the HI injury. Auditory brain responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III, and V amplitudes, although differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.
Hospers, J. Mirjam Boeschen; Smits, Niels; Smits, Cas; Stam, Mariska; Terwee, Caroline B.; Kramer, Sophia E.
Purpose: We reevaluated the psychometric properties of the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1995) using item response theory. Item response theory describes item functioning along an ability continuum. Method: Cross-sectional data from 2,352 adults with and without hearing…
Roth, Cullen; Gupta, Cota Navin; Plis, Sergey M; Damaraju, Eswar; Khullar, Siddharth; Calhoun, Vince D; Bridwell, David A
In healthy cognition and perception, information must be integrated across multiple brain areas. The present study examined the extent to which cortical responses within one sensory modality are modulated by a complex task conducted within another sensory modality. Electroencephalographic (EEG) responses were measured to a 40 Hz auditory stimulus while individuals attended to modulations in the amplitude of the 40 Hz stimulus, and as a function of the difficulty of the popular computer game Tetris. The steady-state response to the 40 Hz stimulus was isolated by Fourier analysis of the EEG. The response at the stimulus frequency was normalized by the response within the surrounding frequencies, generating the signal-to-noise ratio (SNR). Seven out of eight individuals demonstrated a monotonic increase in the log SNR of the 40 Hz responses going from the difficult visuospatial task to the easy visuospatial task to attending to the auditory stimuli. This pattern was confirmed statistically by a one-way ANOVA, indicating significant differences in log SNR across the three tasks. The sensitivity of 40 Hz auditory responses to visuospatial load was further demonstrated by a significant correlation between log SNR and the difficulty (i.e., speed) of the Tetris task. Thus, the results demonstrate that 40 Hz auditory cortical responses are influenced by an individual's goal-directed attention to the stimulus, and by the degree of difficulty of a complex visuospatial task.
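The Fourier-based SNR described in this record (power at the stimulus frequency divided by the mean power of the surrounding frequency bins) can be sketched in a few lines. The 10-bin noise window and the synthetic test signal below are illustrative assumptions, not parameters taken from the study:

```python
import numpy as np

def steady_state_snr(eeg, fs, f_stim=40.0, n_neighbors=10):
    """SNR of a steady-state response: power at the stimulus bin
    divided by the mean power of the surrounding frequency bins
    (the stimulus bin itself is excluded from the noise estimate)."""
    power = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))      # stimulus bin
    lo, hi = max(k - n_neighbors, 1), k + n_neighbors + 1
    noise = np.concatenate([power[lo:k], power[k + 1:hi]])
    return power[k] / noise.mean()

# Synthetic check: a 40 Hz sinusoid buried in Gaussian noise should
# yield an SNR well above 1 (and hence a positive log SNR).
rng = np.random.default_rng(0)
fs = 500.0
t = np.arange(int(fs * 4)) / fs                     # 4 s of "EEG"
eeg = 0.5 * np.sin(2 * np.pi * 40.0 * t) + rng.normal(0.0, 1.0, t.size)
snr = steady_state_snr(eeg, fs)
log_snr = np.log10(snr)
```

Per the study's result, attentional state and visuospatial load would shift this log SNR up or down; here the synthetic signal simply verifies the computation.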
Robert J Zatorre
We tested changes in cortical functional response to auditory configural learning by training ten human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music). We measured covariation in the blood oxygenation signal with increasing pitch-interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature of interest. A psychophysical staircase procedure with feedback was used for training over a two-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training, the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch-interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope with pitch-interval size, such that those who had a higher sensitivity to pitch-interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation of the response of auditory cortex specifically to the trained stimulus feature. The reduction in blood oxygenation response to increasing pitch-interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function and with data from other modalities.
Ghoneim, M M; Block, R I; Dhanaraj, V J; Todd, M M; Choi, W W; Brown, C K
There is a major distinction between conscious and unconscious learning. Monitoring the mid-latency auditory evoked responses (AER) has been proposed as a measure to ascertain the adequacy of the hypnotic state during surgery. In the present study, we investigated the presence of explicit and implicit memories after anesthesia and examined the relationships of such memories to the AER. We studied 180 patients scheduled for elective surgical procedures. After a thiopental induction, one of four anesthetic regimens was studied: Opioid bolus: 7.5 microg x kg(-1) fentanyl, 70% N2O, with 2.5 microg x kg(-1) supplements as needed (n=100); Opioid infusion: Alfentanil 50 microg x kg(-1) bolus, 1-1.5 microg x kg(-1) x min(-1) infusion, 70% N2O (n=40); Isoflurane 0.3%: Fentanyl 1 microg x kg(-1), 70% N2O, isoflurane 0.3% expired (n=16); Isoflurane 0.7%: Fentanyl 1 microg x kg(-1), 70% N2O, isoflurane 0.7% expired (n=23). AER were recorded before anesthesia, 5 min after surgical incision, and then every 30 min until the end of surgery. A tape of either the story of the "Three Little Pigs" or the "Wizard of Oz" was played continuously between the recordings. Explicit memory was assessed postoperatively by tests of recall and recognition, and implicit memory was assessed by the frequency of story-related free associations to target words from the stories, which were solicited twice during a structured interview. Six patients showed explicit recall of intraoperative events: All received the opioid bolus regimen. About 7% of patients reported dreaming during anesthesia. The incidence of picking the correct story that had been presented during anesthesia averaged 49%, i.e., very close to chance level. Overall, priming occurred only at the second association tests for the opioid bolus regimen, for which the frequency of an association to the presented story among those not giving an association to the control story was 26%, which was double the frequency (13%) of an association to the
Detecting sudden environmental changes is crucial for the survival of humans and animals. In the human auditory system, the mismatch negativity (MMN), a component of auditory evoked potentials (AEPs), reflects the violation of predictable stimulus regularities established by the previous auditory sequence. Given the considerable potential of the MMN for clinical applications, establishing valid animal models that allow for detailed investigation of its neurophysiological mechanisms is important. Rodent studies, so far almost exclusively under anesthesia, have not provided decisive evidence whether an MMN analogue exists in rats. This may be due to several factors, including the effect of anesthesia. We therefore used epidural recordings in awake black hooded rats, from two auditory cortical areas in both hemispheres, and with bandpass-filtered noise stimuli that were optimized in frequency and duration for eliciting MMN in rats. Using a classical oddball paradigm with frequency deviants, we detected mismatch responses at all four electrodes in primary and secondary auditory cortex, with morphological and functional properties similar to those known in humans, i.e., large-amplitude biphasic differences that increased in amplitude with decreasing deviant probability. These mismatch responses significantly diminished in a control condition that removed the predictive context while controlling for the presentation rate of the deviants. While our present study does not allow for disambiguating precisely the relative contribution of adaptation and prediction-error processing to the observed mismatch responses, it demonstrates that MMN-like potentials can be obtained in awake and unrestrained rats.
Romero, Ana Carla Leite; Funayama, Carolina Araújo Rodrigues; Capellini, Simone Aparecida; Frizzo, Ana Claudia Figueiredo
Introduction Behavioral tests of auditory processing have been applied in schools and highlight the association between phonological awareness abilities and auditory processing, confirming that low performance on phonological awareness tests may be due to low performance on auditory processing tests. Objective To characterize the auditory middle latency response and the phonological awareness tests and to investigate correlations between responses in a group of children with learning disorders. Methods The study included 25 students with learning disabilities. Phonological awareness and auditory middle latency response were tested with electrodes placed over the left and right hemispheres. The correlation between the measurements was assessed using the Spearman rank correlation coefficient. Results There is some correlation between the tests, especially between the Pa component and syllabic awareness, where a moderate negative correlation is observed. Conclusion In this study, when phonological awareness subtests were performed, specifically phonemic awareness, the students showed a low score for the age group, although on the objective examination, prolonged Pa latency in the contralateral pathway was observed. A weak-to-moderate negative correlation for Pa wave latency was observed, as was a weak positive correlation for Na-Pa amplitude.
Das, Piyali; Bandyopadhyay, Manimay; Ghugare, Balaji W; Ghate, Jayshree; Singh, Ramji
Microcephaly implies a reduced occipito-frontal circumference. The present study used brainstem evoked response audiometry (BERA) to locate the exact site of the lesion resulting in auditory impairment, so that appropriate early rehabilitative measures can be taken. The study revealed that the absolute peak latency of wave V and the interpeak latencies of III-V and I-V were significantly higher (P-value < 0.05 in each case) in microcephalics than in normal children. Auditory impairment in microcephaly is a common neurodeficit that can be reliably assessed by BERA. The hearing impairment in microcephalics is mostly due to insufficiency of the central components of the auditory pathway at the level of the brainstem, the function of peripheral structures being almost within normal limits.
Hettich, Dirk T.; Bolinger, Elaina; Matuz, Tamara; Birbaumer, Niels; Rosenstiel, Wolfgang; Spüler, Martin
Brain state classification for communication and control has been well established in the area of brain-computer interfaces over the last decades. Recently, the passive and automatic extraction of additional information regarding the psychological state of users from neurophysiological signals has gained increased attention in the interdisciplinary field of affective computing. We investigated how well specific emotional reactions, induced by auditory stimuli, can be detected in EEG recordings. We introduce an auditory emotion induction paradigm based on the International Affective Digitized Sounds 2nd Edition (IADS-2) database, also suitable for disabled individuals. Stimuli are grouped in three valence categories: unpleasant, neutral, and pleasant. Significant differences in time-domain event-related potentials are found in the electroencephalogram (EEG) between unpleasant and neutral, as well as pleasant and neutral, conditions over midline electrodes. Time-domain data were classified in three binary classification problems using a linear support vector machine (SVM) classifier. We discuss three classification performance measures in the context of affective computing and outline some strategies for conducting and reporting affect classification studies. PMID:27375410
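One of the three binary problems described here (e.g., unpleasant vs. neutral) classified with a linear SVM might look like the sketch below. The feature matrix is synthetic stand-in data (mean ERP amplitudes at three hypothetical midline electrodes), since the actual IADS-2 EEG features are not given in this record:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Hypothetical features: one row per trial, columns are mean ERP
# amplitudes at three midline electrodes (e.g., Fz, Cz, Pz) in a
# post-stimulus window.  A real study would extract these from
# preprocessed EEG epochs; here the two classes are simulated.
rng = np.random.default_rng(42)
n_trials = 100
unpleasant = rng.normal(loc=1.0, scale=0.5, size=(n_trials, 3))
neutral = rng.normal(loc=0.0, scale=0.5, size=(n_trials, 3))

X = np.vstack([unpleasant, neutral])
y = np.array([1] * n_trials + [0] * n_trials)  # 1 = unpleasant, 0 = neutral

# Linear SVM evaluated with 5-fold cross-validation, one of several
# possible performance measures for an affect classification study.
clf = LinearSVC(C=1.0, dual=False)
mean_accuracy = cross_val_score(clf, X, y, cv=5).mean()
```

The record's point about reporting practices applies here: chance level, class balance, and the chosen performance measure should accompany any such accuracy figure.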
Chabot, Nicole; Butler, Blake E; Lomber, Stephen G
Following sensory deprivation, primary somatosensory and visual cortices undergo crossmodal plasticity, which subserves the remaining modalities. However, controversy remains regarding the neuroplastic potential of primary auditory cortex (A1). To examine this, we identified cortical and thalamic projections to A1 in hearing cats and those with early- and late-onset deafness. Following early deafness, inputs from second auditory cortex (A2) are amplified, whereas the number originating in the dorsal zone (DZ) decreases. In addition, inputs from the dorsal medial geniculate nucleus (dMGN) increase, whereas those from the ventral division (vMGN) are reduced. In late-deaf cats, projections from the anterior auditory field (AAF) are amplified, whereas those from the DZ decrease. Additionally, in a subset of early- and late-deaf cats, area 17 and the lateral posterior nucleus (LP) of the visual thalamus project concurrently to A1. These results demonstrate that patterns of projections to A1 are modified following deafness, with statistically significant changes occurring within the auditory thalamus and some cortical areas. Moreover, we provide anatomical evidence for small-scale crossmodal changes in projections to A1 that differ between early- and late-onset deaf animals, suggesting that potential crossmodal activation of primary auditory cortex differs depending on the age of deafness onset.
Kato, Masaharu; Konishi, Kaoru; Kurosawa, Makiko; Konishi, Yukuo
We measured saccadic response time (SRT) to investigate developmental changes related to spatially aligned or misaligned auditory and visual stimuli responses. We exposed 4-, 5-, and 11-month-old infants to ipsilateral or contralateral auditory-visual stimuli and monitored their eye movements using an electro-oculographic (EOG) system. The SRT analyses revealed four main results. First, saccades were triggered by visual stimuli but not always triggered by auditory stimuli. Second, SRTs became shorter as the children grew older. Third, SRTs for the ipsilateral and visual-only conditions were the same in all infants. Fourth, SRTs for the contralateral condition were longer than for the ipsilateral and visual-only conditions in 11-month-old infants but were the same for all three conditions in 4- and 5-month-old infants. These findings suggest that infants acquire the function of auditory-visual spatial integration underlying saccadic eye movement between the ages of 5 and 11 months. The dependency of SRTs on the spatial configuration of auditory and visual stimuli can be explained by cortical control of the superior colliculus. Our finding of no differences in SRTs between the ipsilateral and visual-only conditions suggests that there are multiple pathways for controlling the superior colliculus and that these pathways have different developmental time courses.
Wilson, Tony W; Hernandez, Olivia O; Asherin, Ryan M; Teale, Peter D; Reite, Martin L; Rojas, Donald C
Neurobiological theories of schizophrenia and related psychoses have increasingly emphasized impaired neuronal coordination (i.e., dysfunctional connectivity) as central to the pathophysiology. Although neuroimaging evidence has mostly corroborated these accounts, the basic mechanism(s) of reduced functional connectivity remains elusive. In this study, we examine the developmental trajectory and underlying mechanism(s) of dysfunctional connectivity by using gamma oscillatory power as an index of local and long-range circuit integrity. An early-onset psychosis group and a matched cohort of typically developing adolescents listened to monaurally presented click-trains, as whole-head magnetoencephalography data were acquired. Consistent with previous work, gamma-band power was significantly higher in right auditory cortices across groups and conditions. However, patients exhibited significantly reduced overall gamma power relative to controls, and showed a reduced ear-of-stimulation effect indicating that ipsi- versus contralateral presentation had less impact on hemispheric power. Gamma-frequency oscillations are thought to be dependent on gamma-aminobutyric acidergic interneuronal networks, thus these patients' impairment in generating and/or maintaining such activity may indicate that local circuit integrity is at least partially compromised early in the disease process. In addition, patients also showed abnormality in long-range networks (i.e., ear-of-stimulation effects) potentially suggesting that multiple stages along auditory pathways contribute to connectivity aberrations found in patients with psychosis.
Griskova-Bulanova, Inga; Ruksenas, Osvaldas; Dapsys, Kastytis;
To explore the modulation of the auditory steady-state response (ASSR) by experimental tasks differing in attentional focus and arousal level.
Tremblay, Kelly L; Ross, Bernhard; Inoue, Kayo; McClannahan, Katrina; Collet, Gregory
Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography (EEG) and magnetoencephalography (MEG) have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences; including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time, and stimulus exposure, that did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects are retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training exclusively for the group that learned to identify the pre-voiced contrast.
Background and objective: The relationship between EEG source signals and action-related visual and auditory stimulation is still not well understood. The objective of this study was to identify EEG source signals and their associated action-related visual and auditory responses, especially independent components of the EEG. Methods: A hand-moving-Hanoi video paradigm was used to study neural correlates of action-related visual and auditory information processing as indexed by the mu rhythm (8-12 Hz) in 16 healthy young subjects. Independent component analysis (ICA) was applied to identify separate EEG sources, which were further analyzed in the frequency domain by applying Fourier-transform ICA (F-ICA). Results: F-ICA found more sensory stimuli-related independent components located within the sensorimotor region than ICA did. The total number of independent components of interest from F-ICA was 768, twice the 384 obtained from traditional time-domain ICA (p < 0.05). Conclusions: These results support the hypothesis that the mu rhythm is sensitive to detection of cognitive expression, which may be reflected by function in the parietal sensorimotor region. The results of this study could potentially be applied to early diagnosis for those with visual and hearing impairments in the future.
BACKGROUND: One of the most common symptoms of speech deficits in individuals with Parkinson's disease (PD) is significantly reduced vocal loudness and pitch range. The present study investigated whether abnormal vocalizations in individuals with PD are related to sensory processing of voice auditory feedback. Perturbations in the loudness or pitch of voice auditory feedback are known to elicit short-latency, compensatory responses in voice amplitude or fundamental frequency. METHODOLOGY/PRINCIPAL FINDINGS: Twelve individuals with Parkinson's disease and 13 age- and sex-matched healthy control subjects sustained a vowel sound (/α/) and received unexpected, brief (200 ms) perturbations of loudness (±3 or 6 dB) or pitch (±100 cents) in their voice auditory feedback. Results showed that, while all subjects produced compensatory responses in their voice amplitude or fundamental frequency, individuals with PD exhibited larger response magnitudes than the control subjects. Furthermore, for loudness-shifted feedback, upward stimuli resulted in shorter response latencies than downward stimuli in the control subjects but not in individuals with PD. CONCLUSIONS/SIGNIFICANCE: The larger response magnitudes in individuals with PD compared with the control subjects suggest that processing of voice auditory feedback is abnormal in PD. Although the precise mechanisms of voice feedback processing are unknown, the results of this study suggest that abnormal voice control in individuals with PD may be related to dysfunctional mechanisms of error detection or correction in sensory feedback processing.
Melissa A Tarasenko
Cognitive deficits limit psychosocial functioning in schizophrenia. For many patients, cognitive remediation approaches have yielded encouraging results. Nevertheless, therapeutic response is variable, and outcome studies consistently identify individuals who respond minimally to these interventions. Biomarkers that can assist in identifying patients likely to benefit from particular forms of cognitive remediation are needed. Here we describe an event-related potential (ERP) biomarker, the auditory brainstem response to complex sounds (cABR), that appears to be particularly well-suited for predicting response to at least one form of cognitive remediation that targets auditory information processing. Uniquely, the cABR quantifies the fidelity of sound encoded at the level of the brainstem and midbrain. This ERP biomarker has revealed auditory processing abnormalities in various neurodevelopmental disorders, correlates with functioning across several cognitive domains, and appears to be responsive to targeted auditory training. We present preliminary cABR data from 18 schizophrenia patients and propose further investigation of this biomarker for predicting and tracking response to cognitive interventions.
Weisz, Nathan; Lecaignard, Françoise; Müller, Nadia; Bertrand, Olivier
Whether attention exerts its impact already on primary sensory levels is still a matter of debate. Particularly in the auditory domain the amount of empirical evidence is scarce. Recently noninvasive and invasive studies have shown attentional modulations of the auditory Steady-State Response (aSSR). This evoked oscillatory brain response is of importance to the issue, because the main generators have been shown to be located in primary auditory cortex. So far, the issue whether the aSSR is sensitive to the predictive value of a cue preceding a target has not been investigated. Participants in the present study had to indicate on which ear the faster amplitude modulated (AM) sound of a compound sound (42 and 19 Hz AM frequencies) was presented. A preceding auditory cue was either informative (75%) or uninformative (50%) with regards to the location of the target. Behaviorally we could confirm that typical attentional modulations of performance were present in case of a preceding informative cue. With regards to the aSSR we found differences between the informative and uninformative condition only when the cue/target combination was presented to the right ear. Source analysis indicated this difference to be generated by a reduced 42 Hz aSSR in right primary auditory cortex. Our and previous data by others show a default tendency of "40 Hz" AM sounds to be processed by the right auditory cortex. We interpret our results as active suppression of this automatic response pattern, when attention needs to be allocated to right ear input. Copyright © 2011 Wiley-Liss, Inc.
Henning U Voss
How well a songbird learns a song appears to depend on the formation of a robust auditory template of its tutor's song. Using functional magnetic resonance neuroimaging, we examine auditory responses in two groups of zebra finches that differ in the type of song they sing after being tutored by birds producing stuttering-like syllable repetitions in their songs. We find that birds that learn to produce the stuttered syntax show attenuated blood oxygenation level-dependent (BOLD) responses to the tutor's song, and more pronounced responses to conspecific song, primarily in the auditory area field L of the avian forebrain, when compared to birds that produce normal song. These findings are consistent with the presence of a sensory song template critical for song learning in auditory areas of the zebra finch forebrain. In addition, they suggest a relationship between an altered response related to familiarity and/or saliency of song stimuli and the production of variant songs with stuttered syllables.
Giesbrecht, T.; Merckelbach, H.L.G.J.; Burg, L. ter; Cima, M.; Simeon, D.
The present study examined how acute dissociation, trait-like dissociative symptoms, and physiological reactivity relate to each other. Sixty-nine undergraduate students were exposed to 14 aversive auditory probes, while their skin conductance responses were measured. A combination of self-reported
Roth, Daphne Ari-Even; Muchnik, Chava; Shabtai, Esther; Hildesheimer, Minka; Henkin, Yael
Aim: The aim of this study was to characterize the auditory brainstem responses (ABRs) of young children with suspected autism spectrum disorders (ASDs) and compare them with the ABRs of children with language delay and with clinical norms. Method: The ABRs of 26 children with suspected ASDs (21 males, five females; mean age 32.5 mo) and an age-…
Zaitoun, Maha; Cumming, Steven; Purcell, Alison; O'Brien, Katie
Purpose: This study assesses the impact of patient clinical history on audiologists' performance when interpreting auditory brainstem response (ABR) results. Method: Fourteen audiologists' accuracy in estimating hearing threshold for 16 infants through interpretation of ABR traces was compared on 2 occasions at least 5 months apart. On the 1st…
Recio-Spinoso, A; Temchin, AN; van Dijk, P; Fan, YH; Ruggero, MA
Responses to broadband Gaussian white noise were recorded in auditory-nerve fibers of deeply anesthetized chinchillas and analyzed by computation of zeroth-, first-, and second-order Wiener kernels. The first-order kernels (similar to reverse correlations, or "revcors") of fibers with characteristi
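For Gaussian white-noise stimulation, the first-order Wiener kernel reduces (up to a scale factor) to the reverse correlation: the average stimulus waveform preceding each spike. A minimal sketch follows, with a toy threshold neuron standing in for the auditory-nerve fiber; the damped-sine kernel shape, threshold, and window length are illustrative assumptions, not values from the study:

```python
import numpy as np

def first_order_kernel(stimulus, spike_times, n_lags=50):
    """First-order Wiener kernel ("revcor") by reverse correlation:
    average the white-noise segments preceding each spike.  Index j of
    the result is the mean stimulus value j samples before a spike."""
    segs = np.stack([stimulus[t - n_lags + 1:t + 1] for t in spike_times])
    return segs.mean(axis=0)[::-1]

# Toy model fiber: spikes whenever the noise, filtered by a known
# damped-sine kernel, crosses a threshold.  Reverse correlation
# should then recover (a scaled copy of) that kernel.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 200_000)
lags = np.arange(50)
true_kernel = np.exp(-lags / 8.0) * np.sin(2 * np.pi * lags / 20.0)
drive = np.convolve(noise, true_kernel, mode="full")[:noise.size]
spikes = np.flatnonzero(drive > 2.5 * drive.std())
spikes = spikes[spikes >= 50]

revcor = first_order_kernel(noise, spikes, n_lags=50)
similarity = np.corrcoef(revcor, true_kernel)[0, 1]
```

With enough spikes, the correlation between the recovered and true kernels approaches 1; the zeroth- and second-order kernels require additional spike-triggered statistics and are omitted here.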
Reilly, Kevin J; Dougherty, Kathleen E
The perturbation of acoustic features in a speaker's auditory feedback elicits rapid compensatory responses that demonstrate the importance of auditory feedback for control of speech output. The current study investigated whether responses to a perturbation of speech auditory feedback vary depending on the importance of the perturbed feature to perception of the vowel being produced. Auditory feedback of speakers' first formant frequency (F1) was shifted upward by 130 mels in randomly selected trials during the speakers' production of consonant-vowel-consonant words containing either the vowel /Λ/ or the vowel /ɝ/. Although these vowels exhibit comparable F1 frequencies, the contribution of F1 to perception of /Λ/ is greater than its contribution to perception of /ɝ/. Compensation to the F1 perturbation was observed during production of both vowels, but compensatory responses during /Λ/ occurred at significantly shorter latencies and exhibited significantly larger magnitudes than compensatory responses during /ɝ/. The finding that perturbation of vowel F1 during /Λ/ and /ɝ/ yielded compensatory differences that mirrored the contributions of F1 to perception of these vowels indicates that some portion of feedback control is weighted toward monitoring and preservation of acoustic cues for speech perception.
Harte, James; Rønne, Filip Munch; Dau, Torsten
(ABR) to transient sounds and frequency following responses (FFR) to tones. The model includes important cochlear processing stages (Zilany and Bruce, 2006) such as basilar-membrane (BM) tuning and compression, inner hair-cell (IHC) transduction, and IHC auditory-nerve (AN) synapse adaptation...
Zapata Rodriguez, Valentina; M. Harte, James; Jeong, Cheol-Ho
-state responses (ASSR), recorded in a sound field is a promising technology to verify the hearing aid fitting. The test involves the presentation of the auditory stimuli via a loudspeaker, unlike the usual procedure of delivering via insert earphones. Room reverberation clearly may significantly affect...
Strelcyk, Olaf; Christoforidis, Dimitrios; Dau, Torsten
Derived-band click-evoked auditory brainstem responses (ABRs) were obtained for normal-hearing (NH) and sensorineurally hearing-impaired (HI) listeners. The latencies extracted from these responses, as a function of derived-band center frequency and click level, served as objective estimates of cochlear … selectivity in human listeners and offer a window to better understand how hearing impairment affects the spatiotemporal cochlear response pattern.
Ross, Bernhard; Pantev, Christo
Auditory evoked magnetic fields were recorded from the left hemisphere of healthy subjects using a 37-channel magnetometer while stimulating the right ear with 40-Hz amplitude modulated (AM) tone-bursts with a 500-Hz carrier frequency, in order to study the time-courses of the amplitude and phase of auditory steady-state responses (ASSRs). The stimulus duration of 300 ms and the duration of the silent periods (3-300 ms) between succeeding stimuli were chosen to address the question of whether the time-course of the ASSR can reflect both temporal integration and temporal resolution in central auditory processing. Long-lasting perturbations of the ASSR were found after gaps in the AM sound, even for gaps of short duration. These were interpreted as evidence of an auditory reset mechanism. Concomitant psycho-acoustical tests corroborated that the gap durations perturbing the ASSR were in the same range as the threshold for AM gap detection. Magnetic source localization placed the ASSR sources in the primary auditory cortex, suggesting that the processing of temporal structures in sound is performed at or below the cortical level.
Luciana Macedo de Resende
Aims: To describe the auditory and language outcomes of children with early diagnosis and treatment for congenital toxoplasmosis. Methods: A cross-sectional study included all children diagnosed with congenital toxoplasmosis through the Minas Gerais State Neonatal Screening Program from September 2006 to March 2007. All children received early treatment, initiated before the age of 2.5 months, and were periodically assisted by a team of specialists including pediatricians, ophthalmologists, speech-language therapists and audiologists. Hearing function was evaluated with the following procedures: tympanometry, transient evoked otoacoustic emissions, distortion product otoacoustic emissions, behavioral observation audiometry, and brainstem auditory evoked potentials. Hearing function and sensitivity were estimated, and audiological results were classified as normal, conductive hearing loss, sensory-neural hearing loss or central dysfunction. Language performance was assessed and classified as normal or abnormal according to test results. The following variables were studied: audiological results, neurological and ophthalmological conditions, language performance and the presence of risk indicators for hearing loss other than congenital toxoplasmosis. Univariate analysis was conducted using the chi-square or Fisher’s Exact test. Results: From September 2006 to March 2007, 106 children were diagnosed with congenital toxoplasmosis through the neonatal screening program and were included in the study. Data analysis showed normal hearing in 60 children (56.6%), while 13 children (12.3%) had conductive hearing loss, four children (3.8%) had sensory-neural hearing loss and 29 children (27.4%) presented central hearing dysfunction. There was an association between hearing problems and language deficits. The comparison between children with additional risks for hearing loss other than toxoplasmosis and children who only presented toxoplasmosis as a risk factor
OBJECTIVE: To assess whether speech therapy can lead to better results for early cochlear implantation (CI) children. PATIENTS: A cohort of thirty-four congenitally profoundly deaf children who underwent CI before the age of 18 months at the Sixth Hospital affiliated with Shanghai Jiaotong University from January 2005 to July 2008 were included. Nineteen children received speech therapy in rehabilitation centers (ST), whereas the remaining fifteen did not (NST) but were exposed to the real world, as are normal hearing children. METHODS: All children were assessed before surgery and at 6, 12, and 24 months after surgery with the Categories of Auditory Performance test (CAP) and the Speech Intelligibility Rating (SIR). Each assessment was given by the same therapist, who was blind to the situation of the child at each observation interval. CAP and SIR scores of the groups were compared at each time point. RESULTS: Our study showed that the auditory performance and speech intelligibility of trained children were almost the same as those of untrained children with early implantation. The CAP and SIR scores of both groups increased with time of implant use during the follow-up period, and at each time point the median scores of the two groups were about equal. CONCLUSIONS: These results indicate that great communication benefits are achieved by early implantation (<18 months) without routine speech therapy. The results exemplify the importance of the enhanced social environments provided by everyday life experience for human brain development, and reassure parents considering cochlear implants where speech training is unavailable.
Leuzzi, V; Cardona, F; Antonozzi, I; Loizzo, A
Pattern reversal visual, auditory, and somatosensorial evoked potentials were recorded in two groups of phenylketonuric (PKU) adolescents after protracted exposition to high concentrations of phenylalanine following diet discontinuation. The first group consisted of 11 early treated (before age 3 months) PKU patients (ET-PKU); the second group consisted of 11 late detected (after age 8 months), symptomatic, PKU subjects (LT-PKU). Despite the relevant lag between the two groups in mental development and neurological status, no clear-cut difference in evoked potentials could be detected. Only the wave I latency of the brainstem auditory evoked potentials (BAEPs) was significantly shorter in ET- versus LT-PKU children. The P100 latency, I-V interpeak latency (IPL), and I-III IPL seem to discriminate the less severe form of PKU (ET-PKU type 3) from the most severe forms, ET-PKU type 1 plus 2 and LT-PKU. No correlations were found between clinical, biochemical, and neurophysiological parameters. The present data suggest that evoked potentials technique is of limited sensitivity in detecting central nervous system (CNS) alterations in PKU adolescents after diet discontinuation.
The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis, and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
BACKGROUND: Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system. However, it was suggested recently that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. METHODS AND FINDINGS: Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe, and between the anterior cingulum and the right parietal lobe, showed significant condition × group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. CONCLUSIONS: To the best of our knowledge this is the first study to demonstrate the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and that a therapeutic intervention able to change this network should result in relief of tinnitus.
Juliana Dushanova; Mario Christov
The brain, as a system with gradually decreasing resources, maximizes its chances by reorganizing neural networks to ensure efficient performance. Auditory event-related potentials were recorded in 28 healthy volunteers comprising 14 young and 14 elderly subjects in an auditory discrimination motor task (low frequency tone – right hand movement; high frequency tone – left hand movement). The amplitudes of the sensory event-related potential components (N1, P2) were more pronounced with increasing age for either tone, and for the P2 amplitude this effect was more pronounced in the frontal region. The latency relationship of N1 between the groups was tone-dependent, while that of P2 was tone-independent, with a prominent delay in the elderly group over all brain regions. The amplitudes of the cognitive components (N2, P3) diminished with increasing age, and the hemispheric asymmetry of N2 (but not of P3) was reduced with increasing age. Prolonged N2 latency with increasing age was widespread for either tone, while the between-group difference in P3 latency was tone-dependent. High frequency tone stimulation and movement requirements led to a P3 delay in the elderly group. The amplitude difference of the sensory components between the age groups could be due to a general greater alertness, less pronounced habituation, or a decline in the ability to withdraw attentional resources from the stimuli in the elderly group. With aging, neural circuit reorganization of brain activity affects cognitive processes. The approach used in this study is useful for early discrimination between normal and pathological brain aging and for early treatment of cognitive alterations and dementia.
Kristina L McFadden
Auditory evoked steady-state responses are increasingly being used as a marker of brain function and dysfunction in various neuropsychiatric disorders, but research investigating the test-retest reliability of this response is lacking. The purpose of this study was to assess the consistency of the auditory steady-state response (ASSR) across sessions. Furthermore, the current study aimed to investigate how the reliability of the ASSR is impacted by the stimulus parameters and analysis method employed. The consistency of this response across two sessions spaced approximately 1 week apart was measured in nineteen healthy adults using electroencephalography (EEG). The ASSR was entrained by both 40 Hz amplitude-modulated white noise and click train stimuli. Correlations between sessions were assessed with two separate analytical techniques: (a) a channel-level analysis across the whole-head array and (b) signal-space projection from auditory dipoles. Overall, the ASSR was significantly correlated between sessions 1 and 2 (p < 0.05, multiple comparison corrected), suggesting adequate test-retest reliability of this response. The current study also suggests that measures of inter-trial phase coherence may be more reliable between sessions than measures of evoked power. Results were similar between the two analysis methods, but reliability varied depending on the presented stimulus, with click train stimuli producing more consistent responses than white noise stimuli.
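Inter-trial phase coherence, the measure found above to be more reliable than evoked power, can be sketched as the length of the mean unit phasor at the stimulation frequency across trials: 0 for random phase, 1 for perfect phase locking. Function and parameter names here are illustrative, not taken from the study:

```python
import numpy as np

def itpc_at(freq_hz, trials, fs):
    """Inter-trial phase coherence at one frequency: magnitude of the
    mean unit phasor across trials (0 = random phase, 1 = locked)."""
    trials = np.asarray(trials)            # shape (n_trials, n_samples)
    n = trials.shape[1]
    k = int(round(freq_hz * n / fs))       # FFT bin nearest freq_hz
    spectra = np.fft.rfft(trials, axis=1)[:, k]
    phasors = spectra / np.abs(spectra)    # discard amplitude, keep phase
    return np.abs(phasors.mean())

# Toy data: a 40-Hz response with consistent vs. random phase, plus noise
# (sampling rate, trial count, and SNR are all illustrative choices).
fs, n_trials = 1000, 50
t = np.arange(fs) / fs                     # 1 s of samples per trial
rng = np.random.default_rng(1)
phase_locked = np.array([np.sin(2 * np.pi * 40 * t) + rng.standard_normal(t.size)
                         for _ in range(n_trials)])
random_phase = np.array([np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi))
                         + rng.standard_normal(t.size) for _ in range(n_trials)])
print(itpc_at(40, phase_locked, fs), itpc_at(40, random_phase, fs))
```

Because ITPC normalizes away single-trial amplitude, it is insensitive to trial-to-trial gain fluctuations that add variance to evoked-power estimates, which is one plausible reason for its better session-to-session reliability.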
Polat, Zahra; Ataş, Ahmet
In the literature, music education has been shown to enhance auditory perception in children and young adults. Compared to young adult non-musicians, young adult musicians demonstrate increased auditory processing and enhanced sensitivity to acoustic changes. The evoked response potentials associated with the interpretation of sound are enhanced in musicians. Studies show that training also changes sound perception and cortical responses, and earlier training appears to lead to larger changes in the auditory cortex. Most cortical studies in the literature have used pure tones or musical instrument sounds as stimulus signals. The aim of this study was to investigate whether musical education enhances auditory cortical responses when speech signals are used. In this study, speech sounds extracted from running speech were used as sound stimuli. Non-randomized controlled study. The experimental group consisted of young adults up to 21 years old, all with a minimum of 4 years of musical education. The control group was selected from young adults of the same age without any musical education. The experiments were conducted using a cortical evoked potential analyser and /m/, /t/, and /g/ sound stimulation at a level of 65 dB SPL. P1/N1/P2 amplitude and latency values were measured. Significant differences were found in the amplitude values of P1 and P2 (p < 0.05). The results obtained in our study indicate that musical experience has an effect on the nervous system, and this can be seen in cortical auditory evoked potentials recorded when the subjects hear speech.
It is increasingly appreciated that cochlear pathology is accompanied by adaptive responses in the central auditory system. The cause of cochlear pathology varies widely, and it seems that few commonalities can be drawn. In fact, despite intricate internal neuroplasticity and diverse external symptoms, several classical injury models provide a feasible path to locate responses to different peripheral cochlear lesions. In these cases, hair cell damage may lead to considerable hyperactivity in the central auditory pathways, mediated by a reduction in inhibition, which may underlie some clinical symptoms associated with hearing loss, such as tinnitus. Homeostatic plasticity, the most discussed and acknowledged mechanism in recent years, is most likely responsible for excited central activity following cochlear damage.
Fu, Ying; Chen, Yuan; Xi, Xin; Hong, Mengdi; Chen, Aiting; Wang, Qian; Wong, Lena
To investigate the development of early auditory capability and speech perception in prelingually deaf children after cochlear implantation, and to study the feasibility of currently available Chinese assessment instruments for the evaluation of early auditory skill and speech perception in hearing-impaired children. A total of 83 children with severe-to-profound prelingual hearing impairment participated in this study. Participants were divided into four groups according to age at surgery: A (1-2 years), B (2-3 years), C (3-4 years) and D (4-5 years). The auditory skill and speech perception ability of CI children were evaluated by trained audiologists using the Infant-Toddler/Meaningful Auditory Integration Scale (IT-MAIS/MAIS) questionnaire, the Mandarin Early Speech Perception (MESP) test and the Mandarin Pediatric Speech Intelligibility (MPSI) test. The questionnaires were used in face-to-face interviews with the parents or guardians. Each child was assessed before the operation and 3, 6, and 12 months after switch-on. After cochlear implantation, early postoperative auditory development and speech perception gradually improved. All MAIS/IT-MAIS scores showed a similar increasing trend with rehabilitation duration (F=5.743, P=0.007). Preoperative and postoperative MAIS/IT-MAIS scores of children in age group C (3-4 years) were higher than those of the other groups. Children who had longer hearing aid experience before the operation demonstrated higher MAIS/IT-MAIS scores than those with little or no hearing aid experience (F=4.947, P=0.000). The MESP test showed that children were not able to perceive speech as well as they could detect speech signals. However, as the duration of CI use increased, speech perception ability also improved substantially. Even so, only about 40% of the subjects could be evaluated using the most difficult subtest of the MPSI in quiet at 12 months after switch-on. As MCR decreased, the proportion of children who could be tested
Clemo, H. Ruth; Lomber, Stephen G.; Meredith, M. Alex
In the cat, the auditory field of the anterior ectosylvian sulcus (FAES) is sensitive to auditory cues and its deactivation leads to orienting deficits toward acoustic, but not visual, stimuli. However, in early deaf cats, FAES activity shifts to the visual modality and its deactivation blocks orienting toward visual stimuli. Thus, as in other auditory cortices, hearing loss leads to cross-modal plasticity in the FAES. However, the synaptic basis for cross-modal plasticity is unknown. Therefo...
Background: Due to auditory experience, musicians have greater auditory expertise than non-musicians. Increased neocortical activity during auditory oddball stimulation has been observed in different studies for musicians and for non-musicians after discrimination training, suggesting a training-induced modification of synaptic strength among simultaneously active neurons. We used amplitude-modulated (AM) tones presented in an oddball sequence and manipulated their carrier or modulation frequencies. We investigated non-musicians to see whether behavioral discrimination training could modify the neocortical activity generated by change detection of AM tone attributes (carrier or modulation frequency). Cortical evoked responses such as the N1 and mismatch negativity (MMN) triggered by sound changes were recorded with a whole-head magnetoencephalographic (MEG) system. We investigated (i) how the auditory cortex reacts to pitch differences (in carrier frequency) and changes in temporal features (modulation frequency) of AM tones and (ii) how discrimination training modulates the neuronal activity reflecting the transient auditory responses generated in the auditory cortex. Results: The results showed that, in addition to an improvement in behavioral discrimination performance, training in discriminating carrier frequency changes significantly modulated the MMN and N1 response amplitudes after the training. This process was accompanied by an attention switch to the deviant stimulus after the training procedure, identified by the occurrence of a P3a component. In contrast, training in discriminating modulation frequency was not sufficient to improve behavioral discrimination performance or to alter the cortical response (MMN) to the modulation frequency change. The N1 amplitude, however, showed a significant increase after and one week after the training. Similar to the training in carrier frequency discrimination, a long lasting
Bhat, Jyoti; Pitt, Mark A; Shahin, Antoine J
Speech reading enhances auditory perception in noise. One means by which this perceptual facilitation comes about is through information from visual networks reinforcing the encoding of the congruent speech signal by ignoring interfering acoustic signals. We tested this hypothesis neurophysiologically by acquiring EEG while individuals listened to words with a fixed portion of each word replaced by white noise. Congruent (meaningful) or incongruent (reversed frames) mouth movements accompanied the words. Individuals judged whether they heard the words as continuous (illusion) or interrupted (illusion failure) through the noise. We hypothesized that congruent, as opposed to incongruent, mouth movements should further enhance illusory perception by suppressing the auditory cortex's response to interruption onsets and offsets. Indeed, we found that the N1 auditory evoked potential (AEP) to noise onsets and offsets was reduced when individuals experienced the illusion during congruent, but not incongruent, audiovisual streams. This N1 inhibitory effect was most prominent at noise offsets, suggesting that visual influences on auditory perception are instigated to a greater extent during noisy periods. These findings suggest that visual context due to speech-reading disengages (inhibits) neural processes associated with interfering sounds (e.g., noisy interruptions) during speech perception.
Verma, Rohit U; Guex, Amélie A; Hancock, Kenneth E; Durakovic, Nedim; McKay, Colette M; Slama, Michaël C C; Brown, M Christian; Lee, Daniel J
In an effort to improve the auditory brainstem implant, a prosthesis in which user outcomes are modest, we applied electric and infrared neural stimulation (INS) to the cochlear nucleus in a rat animal model. Electric stimulation evoked regions of neural activation in the inferior colliculus and short-latency, multipeaked auditory brainstem responses (ABRs). Pulsed INS, delivered to the surface of the cochlear nucleus via an optical fiber, evoked broad neural activation in the inferior colliculus. Strongest responses were recorded when the fiber was placed at lateral positions on the cochlear nucleus, close to the temporal bone. INS-evoked ABRs were multipeaked but longer in latency than those for electric stimulation; they resembled the responses to acoustic stimulation. After deafening, responses to electric stimulation persisted, whereas those to INS disappeared, consistent with a reported "optophonic" effect, a laser-induced acoustic artifact. Thus, for deaf individuals who use the auditory brainstem implant, INS alone did not appear promising as a new approach. Copyright © 2014 Elsevier B.V. All rights reserved.
Zhong, Renjia; Qin, Ling; Sato, Yu
Several decades of research have provided evidence that the basal ganglia are closely involved in motor processes. Recent clinical, electrophysiological, and behavioral data have revealed that the basal ganglia also receive afferent input from the auditory system, but the detailed auditory response characteristics have not yet been reported. The present study aimed to reveal the acoustic response properties of neurons in parts of the basal ganglia. We recorded single-unit activities from the putamen (PU) and globus pallidus (GP) of awake cats passively listening to pure tones, click trains, and natural sounds. Our major findings were: 1) responses in both PU and GP neurons were elicited by pure-tone stimuli, whereas PU neurons had lower intensity thresholds, shorter response latencies, shorter excitatory durations, and larger response magnitudes than GP neurons. 2) Some GP neurons showed a suppressive response lasting throughout the stimulus period. 3) Neither PU nor GP followed periodically repeated click stimuli well, and most neurons showed only a phasic response at stimulus onset and offset. 4) In response to natural sounds, PU also showed a stronger magnitude and shorter duration of excitatory response than GP. Selectivity for natural sounds was low in both nuclei. 5) Nonbiological environmental sounds evoked responses in PU and GP more efficiently than the vocalizations of conspecifics and other species. Our results provide insights into how acoustic signals are processed in the basal ganglia and reveal the distinct roles of PU and GP in sensory representation.
This paper aims to provide a review of the emerging Auditory Steady State Response in light of existing procedures for the diagnosis of hearing loss in infants.
Gutschalk, Alexander; Patterson, Roy D.; Scherg, Michael; Uppenkamp, Stefan; Rupp, André
Recent neuroimaging studies have shown that activity in lateral Heschl’s gyrus covaries specifically with the strength of musical pitch. Pitch strength is important for the perceptual distinctiveness of an acoustic event, but in complex auditory scenes, the distinctiveness of an event also depends on its context. In this magnetoencephalography study, we evaluate how temporal context influences the sustained pitch response (SPR) in lateral Heschl’s gyrus. In 2 sequences of continuously alterna...
Aleman, M; Holliday, T A; Nieto, J E; Williams, D C
The brainstem auditory evoked response (BAER) has been an underused diagnostic modality in horses, as evidenced by few reports on the subject. To describe BAER findings, common clinical signs, and causes of hearing loss in adult horses. Study group, 76 horses; control group, 8 horses. Retrospective. BAER records from the Clinical Neurophysiology Laboratory were reviewed from 1982 to 2013. Peak latencies, amplitudes, and interpeak intervals were measured when visible. Horses were grouped under disease categories. Descriptive statistics and a post hoc Bonferroni test were performed. Fifty-seven of 76 horses had BAER deficits. There was no breed or sex predisposition, with the exception of American Paint horses diagnosed with congenital sensorineural deafness. Eighty-six percent (n = 49/57) of the horses were younger than 16 years of age. The most common causes of BAER abnormalities were temporohyoid osteoarthropathy (THO, n = 20/20; abnormalities/total), congenital sensorineural deafness in Paint horses (17/17), multifocal brain disease (13/16), and otitis media/interna (4/4). Auditory loss was bilateral in 74% (n = 42/57) and unilateral in 26% (n = 15/57) of the horses. The most common causes of bilateral auditory loss were sensorineural deafness, THO, and multifocal brain disease, whereas THO and otitis were the most common causes of unilateral deficits. Auditory deficits should be investigated in horses with altered behavior, THO, multifocal brain disease, otitis, and in horses with certain coat and eye color patterns. BAER testing is an objective and noninvasive diagnostic modality to assess auditory function in horses. Copyright © 2014 by the American College of Veterinary Internal Medicine.
Zhao, Zhenling; Sato, Yu; Qin, Ling
The striatum integrates diverse convergent input and plays a critical role in goal-directed behaviors. To date, the auditory functions of the striatum have been little studied. Recently, it was demonstrated that auditory cortico-striatal projections influence behavioral performance during a frequency discrimination task. To reveal the functions of striatal neurons in auditory discrimination, we recorded single-unit spike activities in the putamen (dorsal striatum) of free-moving cats performing a Go/No-go task to discriminate sounds with different modulation rates (12.5 Hz vs. 50 Hz) or envelopes (damped vs. ramped). We found that putamen neurons can be broadly divided into four groups according to their contributions to sound discrimination. First, 40% of neurons showed vigorous responses synchronized to the sound envelope and could precisely discriminate different sounds. Second, 18% of neurons showed a high preference for ramped over damped sounds, but no preference for modulation rate; they could only discriminate changes in the sound envelope. Third, 27% of neurons rapidly adapted to the sound stimuli and showed no ability to discriminate sounds. Fourth, 15% of neurons discriminated the sounds in a reward-prediction-dependent manner. Compared to the passive listening condition, the activities of putamen neurons were significantly enhanced by engagement in the auditory tasks, but were not modulated by the cat's behavioral choice. The coexistence of multiple types of neurons suggests that the putamen is involved in the transformation from auditory representation to stimulus-reward association. Copyright © 2015 Elsevier B.V. All rights reserved.
Kuriki, Shinya; Ohta, Keisuke; Koyama, Sachiko
Long-latency auditory-evoked magnetic field and potential show strong attenuation of N1m/N1 responses when an identical stimulus is presented repeatedly due to adaptation of auditory cortical neurons. This adaptation is weak in subsequently occurring P2m/P2 responses, being weaker for piano chords than single piano notes. The adaptation of P2m is more suppressed in musicians having long-term musical training than in nonmusicians, whereas the amplitude of P2 is enhanced preferentially in musicians as the spectral complexity of musical tones increases. To address the key issues of whether such high responsiveness of P2m/P2 responses to complex sounds is intrinsic and common to nonmusical sounds, we conducted a magnetoencephalographic study on participants who had no experience of musical training, using consecutive trains of piano and vowel sounds. The dipole moment of the P2m sources located in the auditory cortex indicated significantly suppressed adaptation in the right hemisphere both to piano and vowel sounds. Thus, the persistent responsiveness of the P2m activity may be inherent, not induced by intensive training, and common to spectrally complex sounds. The right hemisphere dominance of the responsiveness to musical and speech sounds suggests analysis of acoustic features of object sounds to be a significant function of P2m activity.
Koravand, Amineh; Al Osman, Rida; Rivest, Véronique; Poulin, Catherine
The main objective of the present study was to investigate subcortical auditory processing in children with sensorineural hearing loss. Auditory Brainstem Responses (ABRs) were recorded using click and speech /da/ stimuli. Twenty-five children, aged 6-14 years old, participated in the study: 13 with normal hearing acuity and 12 with sensorineural hearing loss. No significant differences were observed for the click-evoked ABRs between the normal hearing and hearing-impaired groups. For the speech-evoked ABRs, no significant differences were found between the two groups for the latencies of the following responses: onset (V and A), transition (C), one of the steady-state waves (F), and offset (O). However, the latency of the steady-state waves (D and E) was significantly longer for the hearing-impaired group compared to the normal hearing group. Furthermore, the amplitude of the offset wave O and of the envelope frequency response (EFR) of the speech-evoked ABRs was significantly larger for the hearing-impaired group compared to the normal hearing group. Results obtained from the speech-evoked ABRs suggest that children with a mild to moderately-severe sensorineural hearing loss have a specific pattern of subcortical auditory processing. Our results show differences in the speech-evoked ABRs of normal hearing children compared to hearing-impaired children. These results add to the body of literature on how children with hearing loss process speech at the brainstem level. Copyright © 2017 Elsevier B.V. All rights reserved.
Elena V Orekhova
Auditory sensory modulation difficulties are common in autism spectrum disorders (ASD) and may stem from a faulty arousal system that compromises the ability to regulate an optimal response. To study the neurophysiological correlates of these sensory modulation difficulties, we recorded magnetic field responses to clicks in 14 ASD and 15 typically developing (TD) children. We further analyzed the P100m, which is the most prominent component of the auditory magnetic field response in children and may reflect preattentive arousal processes. The P100m was rightward lateralized in the TD, but not in the ASD children, who showed a tendency toward P100m reduction in the right hemisphere (RH). The atypical P100m lateralization in the ASD subjects was associated with greater severity of sensory abnormalities assessed by the Short Sensory Profile, as well as with auditory hypersensitivity during the first two years of life. The absence of right-hemispheric predominance of the P100m and a tendency for its right-hemispheric reduction in the ASD children suggest disturbance of the RH ascending reticular brainstem pathways and/or their thalamic and cortical projections, which in turn may contribute to abnormal arousal and attention. The correlation of sensory abnormalities with atypical, more leftward, P100m lateralization suggests that reduced preattentive processing in the right hemisphere and/or its shift to the left hemisphere may contribute to abnormal sensory behavior in ASD.
Bhattacharya, J; Bennett, M J; Tucker, S M
The auditory response cradle is being used in a mass hearing screening project. Babies are assessed in the first week after birth by the fully automatic, microprocessor-controlled cradle. The test, lasting from two to 10 minutes, compares physiological auditory responses to natural behaviour measured in control trials. More than 5000 babies have been tested and full follow-up information at the age of 7 to 9 months is available from over two thirds of these. Less detailed information is available for 71% and 64% of those babies who have been followed up at 18 months and three years of age respectively. A total of 439 of 5553 neonates tested failed the first screening test. Eighty-eight (1.6%) failed a second screening test while still in the maternity unit, but 61 of these were subsequently shown to be normal, giving a false positive rate of 1.1%. The babies who failed the screening tests included nine with sensorineural hearing loss, three with secretory otitis media, and three with abnormal auditory brainstem response tests. One child who passed the initial screening tests was found to have a moderately severe hearing loss at the age of 18 months.
Conclusions: The data suggested that early hearing intervention and home-based habilitation benefit auditory and speech development. Chronological age and recovery time may be major factors in aural-verbal outcomes in hearing-impaired children. The development of auditory and speech skills in hearing-impaired children may be especially crucial in the first year of habilitation after fitting with the auxiliary device.
Hecox, K.; Galambos, R.
Brain stem evoked potentials were recorded by conventional scalp electrodes in infants (3 weeks to 3 years of age) and adults. The latency of one of the major response components (wave V) is shown to be a function both of click intensity and the age of the subject; this latency at a given signal strength shortens postnatally to reach the adult value (about 6 msec) by 12 to 18 months of age. The demonstrated reliability and limited variability of these brain stem electrophysiological responses provide the basis for an optimistic estimate of their usefulness as an objective method for assessing hearing in infants and adults.
Poikonen, Hanna; Toiviainen, Petri; Tervaniemi, Mari
The neural responses to simple tones and short sound sequences have been studied extensively. However, in reality the sounds surrounding us are spectrally and temporally complex, dynamic and overlapping. Thus, research using natural sounds is crucial in understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation which, in addition to sensory responses, elicits vast cognitive and emotional processes in the brain. Here we show that the preattentive P50 response evoked by rapid increases in timbral brightness during continuous music is enhanced in dancers when compared to musicians and laymen. In dance, fast changes in brightness are often emphasized with a significant change in movement. In addition, the auditory N100 and P200 responses are suppressed and sped up in dancers, musicians and laymen when music is accompanied with a dance choreography. These results were obtained with a novel event-related potential (ERP) method for natural music. They suggest that we can begin studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG) as has already been done with functional magnetic resonance (fMRI), these two brain imaging methods complementing each other.
Background Few studies have reported a correlation between auditory brainstem response (ABR) findings and nerve conduction studies (NCSs). The correlation between ABR findings and the metabolic profile of these patients is not well documented in previous studies. The present study was designed to investigate the impact of the disturbed metabolic profile (hyperglyceridemia and hyperlipidemia) in diabetic patients on the peripheral nervous system as well as the auditory brainstem response. ...
Van Meir, Vincent; Boumans, Tiny; De Groof, Geert; Van Audekerke, Johan; Smolders, Alain; Scheunders, Paul; Sijbers, Jan; Verhoye, Marleen; Balthazart, Jacques; Van der Linden, Annemie
Auditory fMRI in humans has recently received increasing attention from cognitive neuroscientists as a tool to understand mental processing of learned acoustic sequences and analyzing speech recognition and development of musical skills. The present study introduces this tool in a well-documented animal model for vocal learning, the songbird, and provides fundamental insight in the main technical issues associated with auditory fMRI in these songbirds. Stimulation protocols with various listening tasks lead to appropriate activation of successive relays in the songbirds' auditory pathway. The elicited BOLD response is also region and stimulus specific, and its temporal aspects provide accurate measures of the changes in brain physiology induced by the acoustic stimuli. Extensive repetition of an identical stimulus does not lead to habituation of the response in the primary or secondary telencephalic auditory regions of anesthetized subjects. The BOLD signal intensity changes during a stimulation and subsequent rest period have a very specific time course which shows a remarkable resemblance to auditory evoked BOLD responses commonly observed in human subjects. This observation indicates that auditory fMRI in the songbird may establish a link between auditory related neuro-imaging studies done in humans and the large body of neuro-ethological research on song learning and neuro-plasticity performed in songbirds.
Miller, Charles A; Hu, Ning; Zhang, Fawen; Robinson, Barbara K; Abbas, Paul J
Most auditory prostheses use modulated electric pulse trains to excite the auditory nerve. There are, however, scant data regarding the effects of pulse trains on auditory nerve fiber (ANF) responses across the duration of such stimuli. We examined how temporal ANF properties changed with level and pulse rate across 300-ms pulse trains. Four measures were examined: (1) first-spike latency, (2) interspike interval (ISI), (3) vector strength (VS), and (4) Fano factor (FF, an index of the temporal variability of responsiveness). Data were obtained using 250-, 1,000-, and 5,000-pulse/s stimuli. First-spike latency decreased with increasing spike rate, with relatively small decrements observed for 5,000-pulse/s trains, presumably reflecting integration. ISIs to low-rate (250 pulse/s) trains were strongly locked to the stimuli, whereas ISIs evoked with 5,000-pulse/s trains were dominated by refractory and adaptation effects. Across time, VS decreased for low-rate trains but not for 5,000-pulse/s stimuli. At relatively high spike rates (>200 spike/s), VS values for 5,000-pulse/s trains were lower than those obtained with 250-pulse/s stimuli (even after accounting for the smaller periods of the 5,000-pulse/s stimuli), indicating a desynchronizing effect of high-rate stimuli. FF measures also indicated a desynchronizing effect of high-rate trains. Across a wide range of response rates, FF underwent relatively fast increases (i.e., within 100 ms) for 5,000-pulse/s stimuli. With a few exceptions, ISI, VS, and FF measures approached asymptotic values within the 300-ms duration of the low- and high-rate trains. These findings may have implications for designs of cochlear implant stimulus protocols, understanding electrically evoked compound action potentials, and interpretation of neural measures obtained at central nuclei, which depend on understanding the output of the auditory nerve.
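The temporal measures named in this abstract (vector strength, Fano factor) have standard definitions that a short sketch can make concrete. The spike data below are fabricated for illustration and the function names are our own, not the authors':

```python
import numpy as np

def vector_strength(spike_times, period):
    """Vector strength: 1.0 = perfect phase locking to the period, 0.0 = none."""
    phases = 2 * np.pi * (np.asarray(spike_times) % period) / period
    return np.abs(np.mean(np.exp(1j * phases)))

def fano_factor(counts):
    """Fano factor: variance-to-mean ratio of spike counts across repeated trials."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Spikes locked to every 4 ms cycle of a 250 pulse/s train over 300 ms.
locked = np.arange(0, 0.3, 0.004)
vs = vector_strength(locked, 0.004)   # close to 1.0 for perfect locking

# Hypothetical spike counts from five repeated trials.
ff = fano_factor([3, 5, 4, 4, 4])
```

A Fano factor near 1 is consistent with Poisson-like variability, while lower values indicate more regular responding; a drop in vector strength across the pulse train is the desynchronizing effect the abstract describes for high-rate stimuli.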
Kauramäki, Jaakko; Jääskeläinen, Iiro P; Hari, Riitta; Möttönen, Riikka; Rauschecker, Josef P; Sams, Mikko
Watching the lips of a speaker enhances speech perception. At the same time, the 100 ms response to speech sounds is suppressed in the observer's auditory cortex. Here, we used whole-scalp 306-channel magnetoencephalography (MEG) to study whether lipreading modulates human auditory processing already at the level of the most elementary sound features, i.e., pure tones. We further envisioned the temporal dynamics of the suppression to tell whether the effect is driven by top-down influences. Nineteen subjects were presented with 50 ms tones spanning six octaves (125-8000 Hz) (1) during "lipreading," i.e., when they watched video clips of silent articulations of Finnish vowels /a/, /i/, /o/, and /y/, and reacted to vowels presented twice in a row; (2) during a visual control task; (3) during a still-face passive control condition; and (4) in a separate experiment with a subset of nine subjects, during covert production of the same vowels. Auditory-cortex 100 ms responses (N100m) were equally suppressed in the lipreading and covert-speech-production tasks compared with the visual control and baseline tasks; the effects involved all frequencies and were most prominent in the left hemisphere. Responses to tones presented at different times with respect to the onset of the visual articulation showed significantly increased N100m suppression immediately after the articulatory gesture. These findings suggest that the lipreading-related suppression in the auditory cortex is caused by top-down influences, possibly by an efference copy from the speech-production system, generated during both own speech and lipreading.
Gadziola, Marie A.
The underlying goal of this dissertation is to understand how the amygdala, a brain region involved in establishing the emotional significance of sensory input, contributes to the processing of complex sounds. The general hypothesis is that communication calls of big brown bats (Eptesicus fuscus) transmit relevant information about social context that is reflected in the activity of amygdalar neurons. The first specific aim analyzed social vocalizations emitted under a variety of behavioral contexts, and related vocalizations to an objective measure of internal physiological state by monitoring the heart rate of vocalizing bats. These experiments revealed a complex acoustic communication system among big brown bats in which acoustic cues and call structure signal the emotional state of a sender. The second specific aim characterized the responsiveness of single neurons in the basolateral amygdala to a range of social syllables. Neurons typically respond to the majority of tested syllables, but effectively discriminate among vocalizations by varying the response duration. This novel coding strategy underscores the importance of persistent firing in the general functioning of the amygdala. The third specific aim examined the influence of acoustic context by characterizing both the behavioral and neurophysiological responses to natural vocal sequences. Vocal sequences differentially modify the internal affective state of a listening bat, with lower aggression vocalizations evoking the greatest change in heart rate. Amygdalar neurons employ two different coding strategies: low background neurons respond selectively to very few stimuli, whereas high background neurons respond broadly to stimuli but demonstrate variation in response magnitude and timing. Neurons appear to discriminate the valence of stimuli, with aggression sequences evoking robust population-level responses across all sound levels. Further, vocal sequences show improved discrimination among stimuli
Santos, Mariline; Marques, Cristina; Nóbrega Pinto, Ana; Fernandes, Raquel; Coutinho, Miguel Bebiano; Almeida E Sousa, Cecília
To determine whether children with autism spectrum disorders (ASDs) have a greater number of abnormal wave I amplitudes in auditory brainstem responses (ABRs) than age- and sex-matched typically developing children. This analytical case-control study compared patients with ASDs between the ages of 2 and 6 years and children who had a language delay not associated with any other pathology. Amplitudes of ABR waves I and V; absolute latencies (ALs) of waves I, III, and V; and interpeak latencies (IPLs) I-III, III-V, and I-V at 90 dB were compared between ASD patients and normally developing children. The study enrolled 40 children with documented ASDs and 40 age- and sex-matched control subjects. Analyses of the ABR showed that children with ASDs exhibited higher amplitudes of wave I than wave V more frequently (35%) than the control group did (10%), and this difference between groups reached statistical significance by chi-squared analysis. There were no significant differences in ALs and IPLs between ASD children and matched controls. To the best of our knowledge, this is the first case-control study testing the amplitudes of ABR wave I in ASD children. The reported results suggest a potential use of ABR recordings in children, not only for the clinical assessment of hearing status, but also as an early marker of ASDs, allowing earlier diagnosis and intervention. Autism Res 2017.
Jørgensen, M B; Christensen-Dalsgaard, J
We studied the directionality of spike rate responses of auditory nerve fibers of the grassfrog, Rana temporaria, to pure tone stimuli. All auditory fibers showed spike rate directionality. The strongest directionality was seen at low frequencies (200-400 Hz), where the spike rate could change by...
Sabri, Merav; Humphries, Colin; Verber, Matthew; Mangalathu, Jain; Desai, Anjali; Binder, Jeffrey R; Liebenthal, Einat
In the visual modality, perceptual demand on a goal-directed task has been shown to modulate the extent to which irrelevant information can be disregarded at a sensory-perceptual stage of processing. In the auditory modality, the effect of perceptual demand on neural representations of task-irrelevant sounds is unclear. We compared simultaneous ERPs and fMRI responses associated with task-irrelevant sounds across parametrically modulated perceptual task demands in a dichotic-listening paradigm. Participants performed a signal detection task in one ear (Attend ear) while ignoring task-irrelevant syllable sounds in the other ear (Ignore ear). Results revealed modulation of syllable processing by auditory perceptual demand in an ROI in middle left superior temporal gyrus and in negative ERP activity 130-230 msec post stimulus onset. Increasing the perceptual demand in the Attend ear was associated with a reduced neural response in both fMRI and ERP to task-irrelevant sounds. These findings are in support of a selection model whereby ongoing perceptual demands modulate task-irrelevant sound processing in auditory cortex.
Núñez-Batalla, Faustino; Noriega-Iglesias, Sabel; Guntín-García, Maite; Carro-Fernández, Pilar; Llorente-Pendás, José Luis
Conventional audiometry is the gold standard for quantifying and describing hearing loss. Alternative methods become necessary to assess subjects who are too young to respond reliably. Auditory evoked potentials constitute the most widely used method for determining hearing thresholds objectively; however, the click stimulus is not frequency specific. The advent of the auditory steady-state response (ASSR) allows more frequency-specific threshold determination. The current study describes and compares ASSR, auditory brainstem response (ABR) and conventional behavioural tone audiometry thresholds in a group of infants with various degrees of hearing loss. A comparison was made between ASSR, ABR and behavioural hearing thresholds in 35 infants detected in the neonatal hearing screening program. Mean difference scores (±SD) between ABR and high frequency ABR thresholds were 11.2 dB (±13) and 10.2 dB (±11). Pearson correlations between the ASSR and audiometry thresholds were 0.80 and 0.91 (500 Hz); 0.84 and 0.82 (1000 Hz); 0.85 and 0.84 (2000 Hz); and 0.83 and 0.82 (4000 Hz). The ASSR technique is a valuable extension of the clinical test battery for hearing-impaired children.
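The threshold comparison this abstract reports (Pearson correlations plus mean difference scores ± SD) can be sketched in a few lines; the threshold values below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical 2000 Hz thresholds (dB) for a handful of infants;
# the actual study compared 35 infants across 500-4000 Hz.
assr_thresholds = np.array([45, 60, 75, 90, 55, 70])
behavioural_thresholds = np.array([35, 50, 68, 80, 45, 62])

# Pearson correlation between the two threshold estimates.
r, p = pearsonr(assr_thresholds, behavioural_thresholds)

# Mean difference score (±SD), the style of agreement measure reported above.
diff = assr_thresholds - behavioural_thresholds
print(f"r = {r:.2f}, mean difference = {diff.mean():.1f} dB (SD {diff.std(ddof=1):.1f})")
```

A high correlation with a stable positive offset, as in the study, suggests the electrophysiological threshold tracks the behavioural one but needs a correction factor before clinical use.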
Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitively demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of the emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing.
Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich
The aim of the study was to determine and validate prerequisites for applying a cognitive (judgement) bias approach to assessing welfare in farmed mink (Neovison vison). We investigated discrimination ability and associative learning ability using auditory cues. The mink (n = 15 females) were...... mink only showed habituation in experiment 2. Regardless of the frequency used (2 and 18 kHz), cues predicting the danger situation initially elicited slower responses compared to those predicting the safe situation but quickly became faster. Using auditory cues as discrimination stimuli for female...... farmed mink in a judgement bias approach would thus appear to be feasible. However several specific issues are to be considered in order to successfully adapt a cognitive bias approach to mink, and these are discussed....
Introduction: Preterm birth is a risk factor for a number of conditions and requires comprehensive examination. Our study was designed to investigate the impact of preterm birth on the processing of auditory stimuli and on brain structures at the brainstem level at a preschool age. Materials and Methods: An auditory brainstem response (ABR) test was performed with low rates of stimuli in 60 children aged 4 to 6 years. Thirty subjects had been born following a very preterm or late-preterm labor and 30 control subjects had been born following a full-term labor. Results: Significant differences in the ABR test result were observed in terms of the inter-peak intervals of the I–III and III–V waves, and the absolute latency of the III wave (P
Moody, David B.; Stebbins, William C.; Iglauer, Carol
Two monkeys were trained to press and hold a response key in the presence of a light and to release it at the onset of a pure tone. Initially, all responses with latencies shorter than 1 sec were reinforced without regard to the frequency of the pure tone, and the intensity of the pure tone that resulted in equal latencies at each frequency was determined. The second stage of the experiment consisted of discrimination training, during which releases to one pure-tone frequency (positive stimulus) were reinforced and releases to a second frequency (negative stimulus) were extinguished. Median latencies to the negative stimulus slowly increased as did the variability of the latency distribution for the negative stimulus. There was no evidence of a concurrent decrease in latencies to the positive stimulus indicative of behavioral contrast. The third part of the experiment consisted of determining maintained generalization gradients by increasing the number of nonreinforcement stimuli. The gradients that eventually resulted showed approximately equal latencies to all frequencies of the negative stimulus and shorter latencies to the positive stimulus frequency.
Auditory cortical plasticity can be induced through various approaches. The medial geniculate body (MGB) of the auditory thalamus gates the ascending auditory inputs to the cortex. The thalamocortical system has been proposed to play a critical role in the responses of the auditory cortex (AC). In the present study, we investigated the cellular mechanism of this cortical activity, adopting an in vivo intracellular recording technique, recording from the primary auditory cortex (AI) while presenting an acoustic stimulus to the rat and electrically stimulating its MGB. We found that low-frequency stimuli enhanced the amplitudes of sound-evoked excitatory postsynaptic potentials (EPSPs) in AI neurons, whereas high-frequency stimuli depressed these auditory responses. The degree of this modulation depended on the intensities of the train stimuli as well as on the intervals between the electrical stimulations and their paired sound stimulations. These findings may have implications regarding the basic mechanisms of MGB activation of auditory cortical plasticity and cortical signal processing.
Deviant stimuli, violating regularities in a sensory environment, elicit the Mismatch Negativity (MMN), largely described in the event-related potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process in which prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulation by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigated violations of different time-scale regularities. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing in which predictive coding enables implicit extraction of environmental regularities.
Beutelmann, Rainer; Laumen, Geneviève; Tollin, Daniel; Klump, Georg M
Although auditory brainstem responses (ABRs), the sound-evoked brain activity in response to transient sounds, are routinely measured in humans and animals, there are often differences in ABR waveform morphology across studies. One possible reason may be the method of stimulus calibration. To explore this hypothesis, click-evoked ABRs were measured from seven ears in four Mongolian gerbils (Meriones unguiculatus) using three common spectrum calibration strategies: minimum phase filter, linear phase filter, and no filter. The results show significantly higher ABR amplitude and signal-to-noise ratio, and better waveform resolution, with the minimum-phase-filtered click than with the other strategies.
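A minimum-phase calibration filter of the kind compared in this study can be sketched with SciPy; the sampling rate, target magnitude response, and tap count below are illustrative assumptions, not the study's calibration values:

```python
import numpy as np
from scipy.signal import firwin2, lfilter, minimum_phase

fs = 48000  # sampling rate in Hz (illustrative)

# Hypothetical compensation magnitudes for a transducer's roll-off,
# specified at a few frequencies (made up for illustration).
freqs = [0, 2000, 8000, 16000, fs / 2]
gains = np.array([1.0, 1.0, 0.8, 0.5, 0.0])

# Design a linear-phase FIR whose magnitude is the *square* of the target:
# SciPy's homomorphic minimum-phase conversion returns a filter whose
# magnitude approximates the square root of the input filter's.
h_lin = firwin2(257, freqs, gains ** 2, fs=fs)
h_min = minimum_phase(h_lin, method='homomorphic')

# Apply the minimum-phase compensation to a unit click. Unlike a
# linear-phase filter, this adds no constant bulk delay, keeping the
# calibrated click temporally compact.
click = np.zeros(512)
click[0] = 1.0
calibrated = lfilter(h_min, 1.0, click)
```

Keeping the click compact is the point of the minimum-phase choice: a linear-phase equalizer with the same magnitude response smears the transient symmetrically in time, which can blur the very onset the ABR is locked to.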
J. Christopher Edgar
Background: The development of left and right superior temporal gyrus (STG) 50 ms (M50) and 100 ms (M100) auditory responses in typically developing (TD) children and in children with autism spectrum disorder (ASD) was examined. It was hypothesized that (1) M50 responses would be observed equally often in younger and older children; (2) M100 responses would be observed more often in older than younger children, indicating later development of secondary auditory areas; and (3) M100 but not M50 would be observed less often in ASD than TD in both age groups, reflecting slower maturation of later-developing auditory areas in ASD. Methods: 35 typically developing controls, 63 ASD without language impairment (ASD-LI), and 38 ASD with language impairment (ASD+LI) were recruited. The presence or absence of a STG M50 and M100 was scored. Subjects were grouped into younger (6 to 10 years old) and older (11 to 15 years old) groups. Results: Although M50 responses were observed equally often in older and younger subjects and equally often in TD and ASD, left and right M50 responses were delayed in ASD-LI and ASD+LI. Group comparisons showed that in younger subjects M100 responses were observed more often in TD than ASD+LI (90% vs 66%, p=0.04), with no differences between TD and ASD-LI (90% vs 76%, p=0.14) or between ASD-LI and ASD+LI (76% vs 66%, p=0.53). In older subjects, whereas no differences were observed between TD and ASD+LI, responses were observed more often in ASD-LI than ASD+LI. Conclusions: Although present in all groups, M50 responses were delayed in ASD, suggesting delayed development of earlier-developing auditory areas. Examining the TD data, findings indicated that by 11 years a right M100 should be observed in 100% of subjects and a left M100 in 80% of subjects. Thus, by 11 years, the lack of a left and especially a right M100 offers neurobiological insight into sensory processing that may underlie language or cognitive impairment.
Background and Aim: Tinnitus is an unpleasant sound which can cause behavioral disorders. According to the evidence, tinnitus originates not only in the peripheral but also in the central auditory system, so evaluation of central auditory system function is necessary. In this study, auditory brainstem responses (ABRs) were compared between noise-induced tinnitus subjects and non-tinnitus controls. Materials and Methods: This cross-sectional, descriptive and analytic study was conducted on 60 cases in two groups: 30 subjects with noise-induced tinnitus and 30 non-tinnitus controls. ABRs were recorded ipsilaterally and contralaterally, and their latencies and amplitudes were analyzed. Results: Mean interpeak latencies of III-V (p=0.022) and I-V (p=0.033) in the ipsilateral electrode array, and mean absolute latencies of waves IV (p=0.015) and V (p=0.048) in the contralateral electrode array, were significantly increased in the noise-induced tinnitus group relative to the control group. Conclusion: These findings suggest an increase in neural transmission time in the brainstem, and some signs of involvement of the medial nuclei of the superior olivary complex in addition to the lateral lemniscus.
Pfordresher, Peter Q; Mantell, James T; Brown, Steven; Zivadinov, Robert; Cox, Jennifer L
Alterations of auditory feedback during piano performance can be profoundly disruptive. Furthermore, different alterations can yield different types of disruptive effects. Whereas alterations of feedback synchrony disrupt performed timing, alterations of feedback pitch contents can disrupt accuracy. The current research tested whether these behavioral dissociations correlate with differences in brain activity. Twenty pianists performed simple piano keyboard melodies while being scanned in a 3-T magnetic resonance imaging (MRI) scanner. In different conditions they experienced normal auditory feedback, altered auditory feedback (asynchronous delays or altered pitches), or control conditions that excluded movement or sound. Behavioral results replicated past findings. Neuroimaging data suggested that asynchronous delays led to increased activity in Broca's area and its right homologue, whereas disruptive alterations of pitch elevated activations in the cerebellum, area Spt, inferior parietal lobule, and the anterior cingulate cortex. Both disruptive conditions increased activations in the supplementary motor area. These results provide the first evidence of neural responses associated with perception/action mismatch during keyboard production.
A cochlear implant (CI) is an auditory prosthesis that enables hearing by providing electrical stimuli through an electrode array. It has been previously established that electrode position can influence CI performance; thus, electrode position should be considered in order to achieve better CI results. This paper describes how the electrode position influences the auditory nerve fiber (ANF) response to either a single pulse or low- (250 pulses/s) and high-rate (5,000 pulses/s) pulse trains using a computational model. The field potential in the cochlea was calculated using a three-dimensional finite-element model, and the ANF response was simulated using a biophysical ANF model. The effects were evaluated in terms of dynamic range, stochasticity, and spike excitation pattern. The relative spread, threshold, jitter, and initiated node were analyzed for the single-pulse response; and the dynamic range, threshold, initiated node, and interspike interval were analyzed for the pulse-train responses. Electrode position was found to significantly affect the spatiotemporal pattern of the ANF response, and this effect was significantly dependent on the stimulus rate. We believe that these modeling results can provide guidance regarding perimodiolar and lateral insertion of CIs in clinical settings and help understand CI performance.
Ghaemi, Reza; Rezai, Pouya; Iyengar, Balaji G; Selvaganapathy, Ponnambalam Ravi
Two microfluidic devices (a pneumatic chip and the FlexiChip) have been developed for immobilization and live-intact fluorescence functional imaging of the Drosophila larva's central nervous system (CNS) in response to controlled acoustic stimulation. The pneumatic chip is suited for automated loading/unloading and potentially allows high-throughput operation for studies with a large number of larvae, while the FlexiChip provides a simple and quick manual option for animal loading and is suited for smaller studies. Both chips were capable of significantly reducing endogenous CNS movement while still allowing the study of sound-stimulated CNS activity of Drosophila 3rd instar larvae using the genetically encoded calcium indicator GCaMP5. Temporal effects of sound frequency (50-5000 Hz) and intensity (95-115 dB) on CNS activity were investigated, and a peak neuronal response at 200 Hz was identified. Our lab-on-chip devices can not only aid further studies of the Drosophila larva's auditory responses but can also be adapted for functional imaging of CNS activity in response to other sensory cues. Auditory stimuli and the corresponding CNS response can potentially be used as a tool to study the effect of chemicals on the neurophysiology of this model organism.
Lasky, R E
Auditory brainstem response (ABR) latencies increased and amplitudes decreased with increasing stimulus repetition rate for human newborns and adults. The wave V latency increases were larger for newborns than for adults; the wave V amplitude decreases were smaller for newborns than for adults. These differences could not be explained by developmental differences in frequency responsivity. The transition from the unadapted to the fully adapted response was less rapid in newborns than in adults at short (≤10 ms) interstimulus intervals (ISIs). At longer ISIs (≥20 ms) there were no developmental differences in the transition to the fully adapted response. The newborn transition occurred in a two-stage process. The rapid initial stage, observed in both adults and newborns, was complete by about 40 ms. A second, slower stage was observed only in newborns here, although it has been observed in adults in other studies (Weatherby and Hecox, 1982; Lightfoot, 1991; Lasky et al., 1996). These effects were replicated at different stimulus intensities. After the termination of stimulation, the return to the unadapted wave V response took nearly 500 ms in newborns. Neither the newborn nor the adult data can be explained by forward masking of one click on the next. These results indicate human developmental differences in adaptation to repetitive auditory stimulation at the level of the brainstem.
Previous studies showed that the amplitude and latency of the auditory offset cortical response depend on the history of the sound, implicating echoic memory in shaping the response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a 1-s train of clicks elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train: it was longest at the 40-ms ISI (25 Hz) and became shorter with shorter ISIs (2.5-20 ms). The correlation coefficient r² for peak latency versus ISI was as high as 0.99, suggesting that sensory storage of the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latencies of all pairs, except that between 200 and 400 Hz, were significantly different, indicating the very high temporal resolution of sensory storage, at approximately 5 ms.
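The latency-vs-ISI relationship reported above (r² ≈ 0.99) amounts to a simple linear regression of peak latency on inter-stimulus interval. A minimal sketch of that analysis, using made-up Off-P50m latency values rather than the study's data:

```python
# Fit a line to hypothetical Off-P50m peak latencies across click-train
# ISIs and report the r^2 of the fit. Latency values are illustrative only.
import numpy as np

isi_ms = np.array([2.5, 5.0, 10.0, 20.0, 40.0])        # click-train ISIs
latency_ms = np.array([52.0, 54.1, 57.9, 66.2, 82.3])  # hypothetical peaks

slope, intercept = np.polyfit(isi_ms, latency_ms, 1)   # least-squares line
r = np.corrcoef(isi_ms, latency_ms)[0, 1]              # Pearson correlation
print(f"slope={slope:.2f} ms/ms, intercept={intercept:.1f} ms, r^2={r**2:.3f}")
```

An r² this close to 1 for real data would indicate, as the authors argue, that latency tracks the stored repetition rate almost deterministically.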
INTRODUCTION: This study aimed to assess the top-down control of sound processing in the auditory brainstem of rats. Short-latency evoked responses were analyzed after unilateral or bilateral ablation of the auditory cortex. This experimental paradigm was also used to analyze the long-term evolution of post-lesion plasticity in the auditory system and its ability to self-repair. METHOD: Auditory cortex lesions were performed in rats by stereotactically guided fine-needle aspiration of the cerebrocortical surface. Auditory brainstem responses (ABR) were recorded at post-surgery days (PSD) 1, 7, 15, and 30. Recordings were performed under closed-field conditions, using click trains at different sound intensity levels, followed by statistical analysis of threshold values and ABR amplitude and latency variables. Subsequently, brains were sectioned and immunostained for GAD and parvalbumin to assess the location and extent of lesions accurately. RESULTS: Alterations in ABR variables depended on the type of lesion and the post-surgery time of the ABR recordings. Accordingly, bilateral ablations caused a statistically significant increase in thresholds at PSD1 and PSD7 and a decrease in wave amplitudes at PSD1 that recovered by PSD7. No effects on latency were noted at PSD1 and PSD7, whereas recordings at PSD15 and PSD30 showed statistically significant decreases in latency. Conversely, unilateral ablations had no effect on auditory thresholds or latencies, while wave amplitudes decreased only at PSD1, and strictly in the ipsilateral ear. CONCLUSION: Post-lesion plasticity in the auditory system acts over two time periods: a short-term period of decreased sound sensitivity (until PSD7), most likely resulting from axonal degeneration; and a long-term period (from PSD7 onward), with changes in latency responses and recovery of threshold and amplitude values. The cerebral cortex may have a net positive gain on the auditory pathway response to sound.
Attending and responding to sound location generates increased activity in parietal cortex, which may index auditory spatial working memory and/or goal-directed action. Here, we used an n-back task (Experiment 1) and an adaptation paradigm (Experiment 2) to distinguish memory-related activity from that associated with goal-directed action. In Experiment 1, participants indicated, in separate blocks of trials, whether the incoming stimulus was presented at the same location as in the previous trial (1-back) or two trials ago (2-back). Prior to a block of trials, participants were told to use their left or right index finger. Accuracy and reaction times were worse for the 2-back than for the 1-back condition. The analysis of fMRI data revealed greater sustained task-related activity in the inferior parietal lobule (IPL) and superior frontal sulcus during 2-back than 1-back after accounting for response-related activity elicited by the targets. Target detection and response execution were also associated with enhanced activity in the IPL bilaterally, though this activation was anterior to that associated with sustained task-related activity. In Experiment 2, we used an event-related design in which participants listened (no response required) to trials comprising four sounds presented either at the same location or at four different locations. We found larger IPL activation for changes in sound location than for sounds presented at the same location. This IPL activation overlapped with that observed during the auditory spatial working memory task. Together, these results provide converging evidence supporting a role of parietal cortex in auditory spatial working memory that can be dissociated from response selection and execution.
Vercillo, Tiziana; Burr, David; Gori, Monica
A recent study showed that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind children (9 to 14 years old). Children performed two spatial tasks (minimum audible angle and space bisection) and one temporal task (temporal bisection). There was no impairment in the temporal task for blind children but, like adults, they showed severely compromised thresholds for spatial bisection. Interestingly, the blind children also showed lower precision in judging minimum audible angle. These results confirm the adult study and go on to suggest that even simpler auditory spatial tasks are compromised in blind children, and that this capacity recovers over time.
Newton, Elizabeth; Landau, Sabine; Smith, Patrick; Monks, Paul; Shergill, Sukhi; Wykes, Til
Twenty to fifty percent of people with a diagnosis of schizophrenia continue to hear voices despite taking neuroleptic medication. Trials of group cognitive behavioral therapy for adults with auditory hallucinations have shown promising results. Auditory hallucinations may be most amenable to psychological intervention during a 3-year critical period after symptom onset. This study evaluates the effectiveness of group cognitive behavioral therapy (CBT) for young people with recent-onset auditory hallucinations (N = 22), using a waiting list control. Outcome measures were administered at four separate time points. Significant reductions in auditory hallucinations occurred over the total treatment phase, but not over the waiting period. Further investigations in the form of randomized controlled trials are warranted.
Zeena Venkatacheluvaiah Pushpalatha
Introduction: Encoding of CE-chirp and click stimuli in the auditory system was studied using auditory brainstem responses (ABRs) among individuals with and without noise exposure. Materials and Methods: The study comprised two groups. Group 1 (experimental group) consisted of 20 individuals (40 ears) exposed to occupational noise, with hearing thresholds within 25 dB HL; they were further divided into three subgroups based on duration of noise exposure (0-5 years, T1; 5-10 years, T2; >10 years, T3). Group 2 (control group) consisted of 20 individuals (40 ears). Absolute latency and amplitude of waves I, III, and V were compared between the two groups for both click and CE-chirp stimuli. The T1, T2, and T3 groups were compared on the same parameters to assess the effect of noise exposure duration on CE-chirp and click ABR. Results: In click ABR, both parameters for wave III were significantly poorer in the experimental group, while wave V showed a significant decline in amplitude only; there was no significant difference in any parameter for wave I. In CE-chirp ABR, the latencies of all three waves were significantly prolonged in the experimental group, but amplitude was significantly decreased only for wave V. Discussion: Compared with click-evoked ABR, CE-chirp ABR was more sensitive with respect to latency parameters in individuals with occupational noise exposure. Early pathological changes at the brainstem level can be monitored more effectively using the CE-chirp stimulus than the click stimulus. Conclusion: This study indicates that ABRs obtained with CE-chirp stimuli serve as an effective tool to identify early pathological changes due to occupational noise exposure when compared with click-evoked ABR.
Otsuru, Naofumi; Tsuruhara, Aki; Motomura, Eishi; Tanii, Hisashi; Nishihara, Makoto; Inui, Koji; Kakigi, Ryusuke
Nicotine is known to have enhancing effects on some aspects of attention and cognition. The purpose of the present study was to elucidate the effects of nicotine on pre-attentive change-related cortical activity. Change-related cortical activity in response to an abrupt increase (3 dB) and decrease (6 dB) in sound pressure in a continuous sound was recorded by using magnetoencephalography. Nicotine was administered with a nicotine gum (4 mg of nicotine). Eleven healthy nonsmokers were tested with a double-blind and placebo-controlled design. Effects of nicotine on the main component of the onset response peaking at around 50 ms (P50m) and the main component of the change-related response at around 120 ms (Change-N1m) were investigated. Nicotine failed to affect P50m, while it significantly increased the amplitude of Change-N1m evoked by both auditory changes. The magnitude of the amplitude increase was similar among subjects regardless of the magnitude of the baseline response, which resulted in the percent increase of Change-N1m being greater for subjects with Change-N1m of smaller amplitude. Since Change-N1m represents a pre-attentive automatic process to encode new auditory events, the present results suggest that nicotine can exert beneficial cognitive effects without a direct impact on attention.
Roebuck, Hettie; Guo, Kun; Bourke, Patrick
Why attention lapses during prolonged tasks is debated, specifically whether errors are a consequence of under-arousal or exerted effort. To explore this, we investigated whether increased impulsivity is associated with effortful processing by modifying the demand of a task by presenting it at a quiet intensity. Here, we consider whether attending at low but detectable levels affects impulsivity in a population with intact hearing. A modification of the Sustained Attention to Response Task was used with auditory stimuli at two levels: the participants' personal "lowest detectable" level and a "normal speaking" level. At the quiet intensity, we found that more impulsive responses were made compared with listening at a normal speaking level. These errors were not due to a failure in discrimination. The findings suggest an increase in processing time for auditory stimuli at low levels that exceeds the time needed to interrupt a planned habitual motor response. This leads to a more impulsive and erroneous response style. These findings have important implications for understanding the nature of impulsivity in relation to effortful processing. They may explain why a high proportion of individuals with hearing loss are also diagnosed with Attention Deficit Hyperactivity Disorder.
Tao, Can; Zhang, Guangwei; Zhou, Chang; Wang, Lijuan; Yan, Sumei; Zhang, Li I; Zhou, Yi; Xiong, Ying
Cortical neurons can exhibit significant variation in their responses to the same sensory stimuli, as reflected in the reliability and temporal precision of spikes; however, the synaptic mechanism underlying this response variation remains unclear. Here, in vivo whole-cell patch-clamp recordings of excitatory neurons revealed variation in the amplitudes as well as the temporal profiles of excitatory and inhibitory synaptic inputs evoked by the same sound stimuli in layer 4 of the rat primary auditory cortex. Synaptic inputs were reliably induced by repetitive stimulation, although with large variation in amplitude, and the variation in the amplitude of excitation was much higher than that of inhibition. In addition, the temporal jitter of synaptic onset latency was much smaller than the jitter of the spike response. We further demonstrated that the amplitude variation of excitatory inputs can largely account for the spike variation, while the jitter in spike timing can be attributed primarily to the temporal variation of excitatory inputs. Furthermore, the spike reliability of excitatory but not inhibitory neurons depends on tone frequency. Our results thus reveal an inherent cortical synaptic contribution to the generation of variation in the spike responses of auditory cortical neurons.
Objectives: The amplitude of the auditory steady-state response (ASSR) is enhanced in tinnitus. As ASSR amplitude is also enhanced by attention, the effect of tinnitus on ASSR amplitude could be interpreted as an effect of attention mediated by tinnitus. As attention effects on the N1 are significantly larger than those on the ASSR, if the effect of tinnitus on ASSR amplitude were due to attention, there should be similar amplitude enhancement effects in tinnitus for the N1 component of the auditory evoked response. Methods: MEG recordings of auditory evoked responses previously examined for the ASSR (Diesch et al., 2010) were analysed with respect to the N1m component. Like the ASSR previously, the N1m was analysed in the source domain (source space projection). Stimuli were amplitude-modulated tones with one of three carrier frequencies: the tinnitus frequency (or a surrogate frequency 1½ octaves above the audiometric edge frequency in controls), the audiometric edge frequency, and a frequency below the audiometric edge. Results: In the earlier ASSR study (Diesch et al., 2010), the ASSR amplitude in tinnitus patients, but not in controls, was significantly larger in the (surrogate) tinnitus condition than in the edge condition. In the present study, both tinnitus patients and healthy controls showed an N1m-amplitude profile identical to that of ASSR amplitudes in healthy controls. N1m amplitudes elicited by tonal frequencies located at the audiometric edge and at the (surrogate) tinnitus frequency were smaller than N1m amplitudes elicited by sub-edge tones and did not differ from each other. Conclusions: There is no N1-amplitude enhancement effect in tinnitus. The enhancement effect of tinnitus on ASSR amplitude cannot be accounted for in terms of attention induced by tinnitus.
Hooks, R G; Weber, B A
The feasibility of bone conduction auditory brain stem response (ABR) audiometry in intensive care nursery neonates was investigated. Forty premature infants were tested with both air- and bone-conducted stimuli. Bone-conducted stimuli resulted in more identifiable ABRs and a greater number of subjects passing the hearing screening. The findings of this study suggest that bone conduction ABR audiometry is a feasible technique with premature infants. Due to the lower frequency composition of the bone-conducted click, it may be more effective than an air-conducted click when the immature cochlea is being evaluated.
Kryuchkova, Tatiana; Tucker, Benjamin V.; Wurm, Lee H.; Baayen, R. Harald
Visual emotionally charged stimuli have been shown to elicit early electrophysiological responses (e.g., Ihssen, Heim, & Keil, 2007; Schupp, Junghofer, Weike, & Hamm, 2003; Stolarova, Keil, & Moratti, 2006). We presented isolated words to listeners, and observed, using generalized additive modeling, oscillations in the upper part of the delta…
Kiyokawa, Yasushi; Takeuchi, Yukari
Social buffering is a phenomenon in which stress in an animal is ameliorated when the subject is accompanied by a conspecific animal(s) during exposure to distressing stimuli. Previous studies of social buffering of conditioned fear responses in rats have typically used a 3-s auditory conditioned stimulus (CS) as a stressor, observing stress responses during a specified experimental period. Because a 3-s CS is extremely short compared with a typical experimental period, freezing has thus been observed primarily in the absence of the CS. Therefore, it has been unclear whether social buffering ameliorates conditioned fear responses in the presence of the CS. To clarify this issue, the current study assessed the effects of social buffering on conditioned fear responses in the presence of a 20-s CS. We measured the percentage of time spent freezing during the 20-s period following the onset of the CS. When conditioned subjects were exposed to the 20-s CS alone, they exhibited a high percentage of freezing in the presence of the CS. The presence of another non-conditioned rat completely blocked this response. The same result was observed when freezing was observed primarily in the absence of the 3-s CS. In addition, we confirmed that the presence of an associate ameliorated conditioned fear responses induced by a 20-s CS or 3-s CS when the duration and frequency of fear responses was measured. These findings indicate that social buffering ameliorates conditioned fear responses in the presence of an auditory CS.
Daniela Polo C. Silva
OBJECTIVE: To report an infant with congenital cytomegalovirus and progressive sensorineural hearing loss who was assessed by three methods of hearing evaluation. CASE DESCRIPTION: In the first audiometry, at four months of age, the infant showed abnormal responses in otoacoustic emissions and a normal auditory brainstem response (ABR), with an electrophysiological threshold of 30 dBnHL in both ears. At six months of age, he showed bilateral absence of the ABR at 100 dBnHL. Behavioral observational audiometry was impaired due to the delay in neuropsychomotor development. At eight months of age, he underwent auditory steady-state response (ASSR) testing; thresholds were 50, 70, absent at 110, and absent at 100 dB for 500, 1,000, 2,000, and 4,000 Hz, respectively, in the right ear, and 70, 90, 90, and absent at 100 dB, respectively, in the left ear. COMMENTS: In the first evaluation, the infant had abnormal otoacoustic emissions and a normal ABR, which became altered at six months of age. The severity of the hearing loss could be identified only by the ASSR, which enabled the best procedure for hearing aid fitting. The case description highlights the importance of hearing-status follow-up for children with congenital cytomegalovirus.
Ioannou, Christos I; Pereda, Ernesto; Lindsen, Job P; Bhattacharya, Joydeep
The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating at a frequency equal to the difference between the two tones; this is known as a binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training on responses to BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, during short presentations (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha- and gamma-band EEG signals, which were analysed in terms of spectral power and of functional connectivity as measured by two phase-synchrony-based measures, the phase locking value and the phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation, and integration of the functional brain network. We found that beat frequencies in the alpha band produced the most significant steady-state responses across groups. Further, processing of low-frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in alpha-band oscillations. Altogether, these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment with both linear and nonlinear relationships to the beating frequencies.
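One of the two phase-synchrony measures named above, the phase locking value (PLV), has a compact definition: the magnitude of the time-averaged unit phasor of the phase difference between two signals. A minimal sketch with synthetic phase series (the signals and parameters are illustrative, not the study's data, which would start from band-filtered EEG channels):

```python
# Phase locking value between two instantaneous-phase time series.
import numpy as np

def plv(phase_a, phase_b):
    """PLV = |mean of exp(i * phase difference)| over time.
    1.0 = perfect phase locking; values near 0 = no consistent relation."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
locked = 2 * np.pi * 10 * t                              # 10 Hz phase ramp
shifted = locked + 0.5                                   # constant lag: locked
jittered = locked + rng.uniform(-np.pi, np.pi, t.size)   # random phase: unlocked

print(plv(locked, shifted))    # close to 1.0
print(plv(locked, jittered))   # close to 0
```

Note that a constant phase lag still yields PLV = 1, which is why studies often pair the PLV with the phase lag index, a measure less sensitive to zero-lag (volume-conduction) effects.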
Generation of the auditory steady-state response (ASSR) is commonly explained by the linear combination of random background noise activity and a stationary response. Based on this model, the decrease of amplitude that occurs over the sequential averaging of epochs of the raw data has been attributed exclusively to the cancellation of noise. Nevertheless, this behavior might also reflect a non-stationary response of the ASSR generators. We tested this hypothesis by characterizing the ASSR time course in rats at different auditory maturational stages. ASSRs were evoked by 8-kHz tones of different supra-threshold intensities, modulated in amplitude at 115 Hz. Results show that the ASSR amplitude habituated to the sustained stimulation and that dishabituation occurred when deviant stimuli were presented. ASSR habituation increased as animals became adults, suggesting that the ability to filter acoustic stimuli carrying no relevant temporal information increases with age. Results are discussed in terms of the current model of ASSR generation and analysis procedures. They may have implications for audiometric tests designed to assess hearing in subjects who cannot provide reliable results in psychophysical trials.
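The conventional averaging model described above can be illustrated directly: if each epoch is a fixed stationary response plus independent noise, averaging leaves the response untouched while the residual noise shrinks roughly as 1/√N. A toy simulation under those assumptions (all amplitudes arbitrary):

```python
# Each simulated epoch = stationary 115 Hz response + independent Gaussian
# noise. Averaging more epochs cancels noise but not the response, which is
# the behavior the authors contrast with genuine habituation of generators.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
response = np.sin(2 * np.pi * 115 * t)          # stationary ASSR component

def residual_noise(n_epochs, noise_sd=5.0):
    epochs = response + rng.normal(0, noise_sd, (n_epochs, t.size))
    avg = epochs.mean(axis=0)                   # sequential average
    return np.std(avg - response)               # leftover noise after averaging

print(residual_noise(10), residual_noise(1000))  # second value much smaller
```

Under this model the amplitude decline across averaging is purely a noise effect; the study's point is that a genuinely habituating (non-stationary) response would produce a similar decline, so the two must be disentangled experimentally.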
Holliday, T A; Nelson, H J; Williams, D C; Willits, N
In a survey of 900 Dalmatian dogs, brainstem auditory-evoked responses (BAER) and clinical observations were used to determine the incidence and sex distribution of bilateral and unilateral BAER abnormalities and their association with heterochromia iridis (HI). To assess the efficacy of BAER testing in guiding breeding programs, data from 749 dogs (subgroup A), considered to be a sample of the population at large, were compared with data from a subgroup (subgroup B; n = 151) in which selection of breeding stock had been based on BAER testing from the beginning of the 4-year survey. Brainstem auditory-evoked responses were elicited by applying click stimuli unilaterally, while applying a white noise masking sound to the contralateral ear. Under these conditions, BAER were either normal, unilaterally absent, or bilaterally absent. Dogs with bilaterally absent BAER were clinically deaf; dogs with unilaterally absent BAER were not clinically deaf but appeared dependent on their BAER-normal ears for their auditory-cued behavior. Dogs with unilaterally absent BAER often were misidentified as normal by uninformed observers. Among the 900 dogs, 648 (72.0%) were normal, 189 (21.0%) had unilateral absence of BAER, and 63 (7.0%) had bilateral absence of BAER or were clinically deaf and assumed to have bilaterally absent BAER (n = 4). Total incidence in the population sampled was assumed to be higher, because some bilaterally affected dogs that would have been members of subgroup A undoubtedly did not come to our attention. Among females, 24.0% were unilaterally abnormal and 8.2% were bilaterally abnormal whereas, among males, 17.8% were unilaterally abnormal and 5.7% were bilaterally abnormal.(ABSTRACT TRUNCATED AT 250 WORDS)
Kaprana, Antigoni E; Chimona, Theognosia S; Papadakis, Chariton E; Velegrakis, Stylianos G; Vardiambasis, Ioannis O; Adamidis, Georgios; Velegrakis, George A
The objective of the present study was to investigate possible electrophysiological time-related changes in the auditory pathway during mobile phone electromagnetic field exposure. Thirty healthy rabbits were enrolled in an experimental study of exposure to GSM-900 radiation for 60 min, and auditory brainstem responses (ABRs) were recorded at regular time intervals during exposure. The subjects were exposed via an adjustable-power, adjustable-frequency radio transmitter simulating GSM-900 mobile phone emission, designed and manufactured according to the needs of the experiment. The mean absolute latencies of waves III-V showed a statistically significant delay (p < 0.05) after 60, 45, and 15 min of exposure to 900-MHz electromagnetic radiation, respectively. Interwave latency I-III was prolonged after 60 min of radiation exposure, in correspondence with the wave III absolute latency delay. Interwave latencies I-V and III-V showed a statistically significant delay (p < 0.05) after 30 min of radiation. No statistically significant delay was found for the same ABR parameters in recordings from the ear contralateral to the radiation source after 60 min of exposure compared with baseline ABR. The ABR measurements returned to baseline 24 h after the exposure to 900-MHz electromagnetic radiation. The prolongation of interwave latencies I-V and III-V indicates that exposure to the electromagnetic fields emitted by mobile phones can affect the normal electrophysiological activity of the auditory system, and these findings fit the pattern of general responses to a stressor.
Rudolph, Erica D; Ells, Emma M L; Campbell, Debra J; Abriel, Shelagh C; Tibbo, Philip G; Salisbury, Dean F; Fisher, Derek J
The mismatch negativity (MMN) is an EEG-derived event-related potential (ERP) elicited by any violation of a predicted auditory 'rule', regardless of whether one is attending to the stimuli, and is thought to reflect updating of the stimulus context. Chronic schizophrenia patients exhibit robust MMN deficits, while MMN reduction in first-episode and early phase psychosis is significantly less consistent. Traditional two-tone "oddball" MMN measures of sensory information processing may be considered too simple for use in early phase psychosis, in which pathology has not progressed fully, and a paradigm that probes higher-order processes may be more appropriate for elucidating auditory change detection deficits. This study investigated whether MMN deficits could be detected in early phase psychosis (EP) patients using an abstract 'missing stimulus' pattern paradigm (Salisbury, 2012). The stimuli were 400 groups of six tones (1000 Hz, 50 ms duration, 330 ms stimulus onset asynchrony), presented with an inter-trial interval of 750 ms. Occasionally a group contained a deviant, meaning that it was missing either the 4th or 6th tone (50 trials each). EEG recordings of 13 EP patients (≤5-year duration of illness) and 15 healthy controls (HC) were collected. Patients and controls did not significantly differ on age or years of education. Analyses of MMN amplitudes elicited by missing stimuli revealed amplitude reductions in EP patients, suggesting that these deficits are present very early in the progression of the illness. While there were no correlations between MMN measures and measures such as duration of illness, medication dosage, or age, MMN amplitude reductions were correlated with positive symptomatology (i.e. auditory hallucinations). These findings suggest that MMNs elicited by the 'missing stimulus' paradigm are impaired in psychosis patients early in the progression of illness and that previously reported MMN-indexed deficits related to auditory
Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M
Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.
Tinnitus is proposed to be caused by decreased central input from the cochlea, followed by increased spontaneous and evoked subcortical activity that is interpreted as a compensatory increase in the responsiveness of central auditory circuits. We compared equally noise-exposed rats, separated into groups with and without tinnitus, for differences in brain responsiveness relative to the degree of deafferentation in the periphery. We analyzed (1) the number of CtBP2/RIBEYE-positive particles in ribbon synapses of the inner hair cells (IHCs), as a measure of deafferentation; (2) the fine structure of the amplitudes of auditory brainstem responses (ABRs), reflecting differences in sound responses following decreased auditory nerve activity; and (3) the expression of the activity-regulated gene Arc in the auditory cortex (AC), to identify long-lasting central activity changes following sensory deprivation. Following moderate trauma, 30% of animals exhibited tinnitus, similar to the tinnitus prevalence among hearing-impaired humans. Although both tinnitus and no-tinnitus animals exhibited a reduced ABR wave I amplitude (generated by primary auditory nerve fibers), IHC ribbon loss and high-frequency hearing impairment were more severe in tinnitus animals, associated with significantly reduced amplitudes of the more centrally generated waves IV and V and less intense staining of Arc mRNA and protein in the AC. The observed severe IHC ribbon loss, the minimal restoration of ABR wave size, and the reduced cortical Arc expression suggest that tinnitus is linked to a failure to adapt central circuits to reduced cochlear input.
Otsuka, Asuka; Yumoto, Masato; Kuriki, Shinya; Nakagawa, Seiji
The perceptual degree of consonance or dissonance of a chord is known to vary as a function of the frequency ratio between the tones composing the chord. It has been indicated that the generation of a sense of dissonance is associated with the auditory steady-state response (ASSR) phase-locked to difference frequencies, which are salient in chords with complex frequency ratios. This study further investigated how the neuromagnetic ASSR is modulated as a function of the frequency ratio when the acoustic properties of the difference frequency, to which the ASSR was synchronized, were identical in terms of number, energy, and frequency. Neuronal frequency characteristics intrinsic to the ASSR were compensated for by utilizing responses to a SAM (sinusoidally amplitude-modulated) chirp tone sweeping through the corresponding frequency range. The results showed that the ASSR was significantly smaller for chords with simple frequency ratios than for those with complex frequency ratios. This indicates that the basic neuronal correlates underlying the sensation of consonance/dissonance might be associated with the attenuation rate applied to encode the input information through the afferent auditory pathway. Attentional gating of the thalamo-cortical function might also be one of the factors.
Scimemi, P; Santarelli, R; Selmo, A; Mammano, F
In auditory research, hearing function of mouse mutants is assessed in vivo by evoked potential recording. Evaluation of the response parameters should be performed with reference to the evoked responses recorded from wild-type mice. This study reports normative data calculated on auditory brainstem responses (ABRs) obtained from 20 wild-type C57BL/6J mice at a postnatal age between 21 and 45 days. Acoustic stimuli consisted of tone bursts at 8, 14, 20, 26, and 32 kHz, and clicks. Each stimulus was delivered in free field at a stimulation intensity starting from a maximum of 100 dB peak equivalent SPL (dB peSPL), decreasing in steps of 10 dB, with a repetition rate of 13/sec. Evoked responses were recorded by needle electrodes inserted subcutaneously. At high stimulation intensities, five response waveforms, each consisting of a positive peak and a subsequent negative valley, were identified within 7 msec and were labelled with sequential capital Roman numerals from I to V. Peak IV was the most robust and stable at low intensities for both tone burst and click stimuli, and was therefore utilized to estimate hearing thresholds. Both latencies and amplitudes of ABR peaks showed good reproducibility with acceptable standard deviations. Mean wave IV thresholds measured across all animals ranged from a maximum of 23 dB peSPL for clicks to a minimum of 7 dB peSPL for 20 kHz tone burst stimuli. Statistical analysis of the distribution of latencies and amplitudes of peaks I to V, performed for each stimulus type, yielded a normative data set which was utilized to obtain the most consistent fitting-curve model. This could serve as a reference for further studies on murine models of hearing loss.
Bell, Brittany A; Phan, Mimi L; Vicario, David S
How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions.
Fei Mai; Xiaozhuang Zhang; Qunxin Lai; Yanfei Wu; Nanping Liao; Yi Ye; Zhenghui Zhong
BACKGROUND: The auditory steady-state evoked response (ASSR) is one of the newer objective electrophysiological methods for testing hearing in infants. It can provide a reliable and complete frequency-specific audiogram to aid hearing diagnosis and the rehabilitation of hearing and language following hearing screening. OBJECTIVE: To compare the response threshold of ASSR with the auditory threshold of visual reinforcement audiometry (VRA) in infants who failed hearing screening, in order to investigate their hearing loss. DESIGN: A comparative observation. SETTINGS: Maternal and child health care hospitals of Guangdong province, Shunde city, Nanhai city and Huadu district. PARTICIPANTS: A total of 321 infants aged 0-3 years undergoing ASSR testing were selected from the Hearing Center of Guangdong Maternal and Child Health Care Hospital from January 2002 to December 2004. Informed consent was obtained from their guardians. There were 193 cases (60.2%) aged 0-6 months, 31 cases (9.7%) aged 7-12 months, 17 cases (5.3%) aged 13-18 months, 14 cases (4.4%) aged 19-24 months, 33 cases aged 25-30 months, and 33 cases (10.2%) aged 31-36 months. METHODS: (1) The 321 infants who failed hearing screening were tested while asleep, and the ranges of ASSR response thresholds at different frequencies were analyzed in each age group. (2) The infants above 2 years old were also tested with VRA, and their response thresholds were compared between VRA and ASSR. (3) Evaluative standards: a response threshold < 30 dB indicated normal hearing, 31-50 dB mild hearing loss, 51-70 dB moderate hearing loss, 71-90 dB severe hearing loss, and > 91 dB extremely severe hearing loss. MAIN OUTCOME MEASURES: (1) ASSR results of the infants who failed screening; (2) proportion of cases at each response threshold in each age group; (3) comparison of ASSR response thresholds and VRA auditory thresholds in the infants aged 2-3 years. RESULTS: (1) The response threshold was < 30 dB in 47
Miezejeski, C M; Heaney, G; Belser, R; Sersen, E A
Brainstem auditory evoked response latencies were studied in 80 males (13 with Down syndrome, 23 with developmental disability due to other causes, and 44 with no disability). Latencies for waves P3 and P5 were shorter for the Down syndrome group than for the other groups, although at P5 the difference from the nondisabled group was not significant. The pattern of left versus right ear responses in the Down syndrome group differed from those of the other groups. This finding was related to research noting decreased lateralization of, and decreased ability in, receptive and expressive language among people with Down syndrome. Some individuals required sedation, and a lateralized effect of sedation was noted.
Transient event-related potentials (ERPs) and steady-state responses (SSRs) have been widely employed to investigate the function of the human brain, but their relationship remains a matter of debate. Some researchers believe that SSRs can be explained by the linear summation of successive transient ERPs (superposition hypothesis), while others believe that SSRs result from the entrainment of a neural rhythm driven by the periodic repetition of a sensory stimulus (oscillatory entrainment hypothesis). In the present study, taking the auditory modality as an example, we aimed to clarify the distinct features of SSRs evoked by 40-Hz and 60-Hz periodic auditory stimulation, as compared to transient ERPs evoked by a single click. We observed that (1) SSRs were mainly generated by phase synchronization, while late latency responses (LLRs) in transient ERPs were mainly generated by power enhancement; (2) scalp topographies of LLRs in transient ERPs were markedly different from those of SSRs; (3) the powers of the 40-Hz and 60-Hz SSRs were significantly correlated with each other, but not with the N1 power in transient ERPs; and (4) whereas SSRs were dominantly modulated by stimulus intensity, middle latency responses (MLRs) were not significantly modulated by either stimulus intensity or subjective loudness judgment, and LLRs were significantly modulated by subjective loudness judgment even within the same stimulus intensity. All these findings indicate that high-frequency SSRs differ from both MLRs and LLRs in transient ERPs, supporting the oscillatory entrainment hypothesis for the generation of SSRs. Therefore, SSRs can be used to explore neural responses distinct from those probed by transient ERPs, and help reveal novel and reliable neural mechanisms of the human brain.
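The superposition hypothesis described in this abstract makes a concrete numerical prediction: convolving a single transient ERP template with a periodic impulse train should reproduce the steady-state waveform. A minimal pure-Python sketch of that prediction (the damped-oscillation ERP shape, sampling rate, and 40-Hz rate here are illustrative assumptions, not the study's data):

```python
import math

def erp_template(fs, dur=0.1):
    # Toy transient ERP: a damped 10-Hz oscillation (illustrative shape only)
    n = int(fs * dur)
    return [math.exp(-(i / fs) / 0.03) * math.sin(2 * math.pi * 10 * i / fs)
            for i in range(n)]

def impulse_train(fs, rate, dur):
    # Periodic stimulus onsets at `rate` Hz
    n = int(fs * dur)
    period = int(fs / rate)
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

def convolve(x, h):
    # Linear superposition: one template added per stimulus onset
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        if xi:
            for j, hj in enumerate(h):
                y[i + j] += xi * hj
    return y

fs = 1000
ssr_pred = convolve(impulse_train(fs, 40, 1.0), erp_template(fs))
# Once the overlapping templates have built up, the predicted response
# repeats with the 25-ms stimulation period, i.e. a 40-Hz steady state.
```

If the measured SSR departed from this linear prediction (as the abstract argues it does), that discrepancy would favor the oscillatory entrainment account.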
Bocskai, Tímea; Németh, Adrienne; Bogár, Lajos; Pytel, József
The authors investigated the quality of sedation in children undergoing auditory brainstem response testing. Two hundred and seventy-six sedation procedures were retrospectively analyzed using recorded data, focusing on the efficacy of sedation and on complications. An intramuscular ketamine-midazolam-atropine combination was administered for sedation, preceded by a narcotic suppository as premedication. With this combination, vital parameters remained within the normal range and the complication rate was minimal. Pulse rate, arterial blood pressure and pulse oximetry readings were stable; hypoventilation developed in 4 cases and apnoea in none; post-sedation agitation occurred in 3 cases and nausea and/or vomiting in 2. Repeated administration of the narcotic agent was necessary in a single case only. Our practice is suitable for sedation during hearing examinations in children. It has no influence on auditory brainstem testing, the conditions necessary for the test can be met entirely with minimal side effects, and it provides a longer-lasting sedation time, so the narcotics need not be repeated during the examination.
Kamei, Hidekazu (Tokyo Women's Medical Coll. (Japan))
The purpose of this study was to elucidate correlations of several MRI measurements of the cranium and brain, which function as a volume conductor, with the auditory brainstem response (ABR) in neurodegenerative disorders. The subjects included forty-seven patients with spinocerebellar degeneration (SCD) and sixteen with amyotrophic lateral sclerosis (ALS). Statistically significant positive correlations were found between the I-V and III-V interpeak latencies (IPLs) and the cranial and brain areas in the longitudinal section of SCD patients, and between the I-III and III-V IPLs and the corresponding areas in the longitudinal section of ALS patients. There were also statistically significant correlations between the amplitude of wave V and the area of the brainstem as well as that of the cranium in the longitudinal section of SCD patients, and between the amplitude of wave V and the area of the cerebrum in the longitudinal section of ALS patients. In conclusion, the IPLs were prolonged and the amplitude of wave V decreased as the MRI-measured size of the cranium and brain increased. When the ABR is applied to neurodegenerative disorders, it may be important to consider not only conduction along the auditory tracts in the brainstem, but also the size of the cranium and brain, which act as a volume conductor.
Background: There are about 1.6 billion GSM cellular phones in use throughout the world today. Numerous papers have reported various biological effects in humans exposed to electromagnetic fields emitted by mobile phones. The aim of the present study was to advance our understanding of potential adverse effects of GSM mobile phones on the human hearing system. Methods: The auditory brainstem response (ABR) was recorded with three non-polarizing Ag-AgCl scalp electrodes in thirty young and healthy volunteers (age 18–26 years) with normal hearing. ABR data were collected before, and immediately after, a 10-minute exposure to a 900 MHz pulsed electromagnetic field (EMF) emitted by a commercial Nokia 6310 mobile phone. Fifteen subjects were exposed to genuine EMF and fifteen to sham EMF in a double-blind and counterbalanced order. Possible effects of irradiation were analyzed by comparing the latencies of ABR waves I, III and V before and after genuine/sham EMF exposure. Results: A paired-sample t-test was conducted for statistical analysis. Results revealed no significant differences in the latencies of ABR waves I, III and V before and after 10 minutes of genuine/sham EMF exposure. Conclusion: The present results suggest that, under our experimental conditions, a single 10-minute exposure to 900 MHz EMF emitted by a commercial mobile phone does not produce measurable immediate effects in the latencies of auditory brainstem waves I, III and V.
Beech, John R.; Beauvois, Michael W.
Previous research has indicated possible reciprocal connections between phonology and reading, and also connections between aspects of auditory perception and reading. The present study investigates these associations further by examining the potential influence of prenatal androgens using measures of digit ratio (the ratio of the lengths of the…
Auditory selective attention is an important mechanism for top-down selection from the vast amount of auditory information to which our perceptual system is exposed. In the present study, the impact of attention on auditory steady-state responses (aSSRs), previously shown to be generated in primary auditory regions, was investigated. This issue is still a matter of debate, and recent findings point to a complex pattern of attentional effects on the aSSR. The present study aimed to shed light on the involvement of ipsilateral and contralateral activations to the attended sound, taking into account hemispheric differences and a possible dependency on modulation frequency. To this end, a dichotic listening experiment was designed using amplitude-modulated tones presented to the left and right ear simultaneously. Participants had to detect target tones in a cued ear while their brain activity was assessed using MEG. A modulation of the aSSR by attention could thereby be revealed, interestingly restricted to the left hemisphere and 20-Hz responses: contralateral activations were enhanced while ipsilateral activations were reduced. Thus, our findings support and extend recent findings, showing that auditory attention can influence the aSSR, but only under specific circumstances and in a complex pattern regarding the different effects for ipsilateral and contralateral activations.
Christos I Ioannou
The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as a binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, stimulated by short presentations (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha- and gamma-band EEG signals, which were analysed in terms of spectral power and of functional connectivity as measured by two phase-synchrony-based measures, the phase locking value and the phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low-frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in alpha-band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment bearing both linear and nonlinear relationships to the beating frequencies.
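The phase locking value used in this study as a connectivity measure is the magnitude of the trial-averaged unit phasor of the phase difference between two channels: 1.0 for a perfectly consistent phase lag, near 0 for a random phase relation. A hedged pure-Python sketch on synthetic two-channel data (the 10-Hz frequency, trial count, and fixed quarter-cycle lag are arbitrary demo choices, not the study's recordings):

```python
import cmath, math, random

def dft_phase(x, freq, fs):
    # Phase of the DFT coefficient at `freq` (x is assumed to span an
    # integer number of cycles so the frequency bin lines up exactly)
    n = len(x)
    k = round(freq * n / fs)
    coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
    return cmath.phase(coeff)

def plv(trials_a, trials_b, freq, fs):
    # Phase locking value: |mean over trials of exp(i * phase difference)|
    diffs = [cmath.exp(1j * (dft_phase(a, freq, fs) - dft_phase(b, freq, fs)))
             for a, b in zip(trials_a, trials_b)]
    return abs(sum(diffs) / len(diffs))

# Synthetic demo: channel B lags channel A by a fixed quarter cycle at 10 Hz,
# while the absolute phase varies from trial to trial.
rng = random.Random(0)
fs, f, n = 200, 10.0, 200
trials_a, trials_b = [], []
for _ in range(30):
    p = rng.uniform(0, 2 * math.pi)
    trials_a.append([math.sin(2 * math.pi * f * t / fs + p) for t in range(n)])
    trials_b.append([math.sin(2 * math.pi * f * t / fs + p - math.pi / 2)
                     for t in range(n)])
print(round(plv(trials_a, trials_b, f, fs), 3))  # → 1.0 (fixed lag)
```

The phase lag index mentioned alongside it differs only in that it uses the sign of the imaginary part of the phasor, which discounts zero-lag coupling.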
Usubuchi, Hajime; Kawase, Tetsuaki; Kanno, Akitake; Yahata, Izumi; Miyazaki, Hiromitsu; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
The auditory steady state response (ASSR) is an oscillatory brain response, which is phase locked to the rhythm of an auditory stimulus. ASSRs have been recorded in response to a wide frequency range of modulation and/or repetition, but the physiological features of the ASSRs are somewhat different depending on the modulation frequency. Recently, the 20-Hz ASSR has been emphasized in clinical examinations, especially in the area of psychiatry. However, little is known about the physiological properties of the 20-Hz ASSR, compared to those of the 40-Hz and 80-Hz ASSRs. The effects of contralateral noise on the ASSR are known to depend on the modulation frequency to evoke ASSR. However, the effects of contralateral noise on the 20-Hz ASSR are not known. Here we assessed the effects of contralateral white noise at a level of 70 dB SPL on the 20-Hz and 40-Hz ASSRs using a helmet-shaped magnetoencephalography system in 9 healthy volunteers (8 males and 1 female, mean age 31.2 years). The ASSRs were elicited by monaural 1000-Hz 5-s tone bursts amplitude-modulated at 20 and 39 Hz and presented at 80 dB SPL. Contralateral noise caused significant suppression of both the 20-Hz and 40-Hz ASSRs, although suppression was significantly smaller for the 20-Hz ASSRs than the 40-Hz ASSRs. Moreover, the greatest suppression of both 20-Hz and 40-Hz ASSRs occurred in the right hemisphere when stimuli were presented to the right ear with contralateral noise. The present study newly showed that 20-Hz ASSRs are suppressed by contralateral noise, which may be important both for characterization of the 20-Hz ASSR and for interpretation in clinical situations. Physicians must be aware that the 20-Hz ASSR is significantly suppressed by sound (e.g. masking noise or binaural stimulation) applied to the contralateral ear.
Fujihira, H; Shiraishi, K
To investigate the relationship between speech auditory brainstem responses (speech ABRs) and word intelligibility under reverberation in elderly adults. Word intelligibility for words under four reverberation times (RTs) of 0, 0.5, 1.0 and 1.5 s, and speech ABRs to the speech syllable /da/, were obtained from 30 elderly listeners. Root mean square (RMS) amplitudes and discrete Fourier transform (DFT) amplitudes were calculated for the ADD and SUB responses in the speech ABRs. No significant correlations were found between the word intelligibility scores under reverberation and the ADD response components. However, in the SUB responses we found that the DFT amplitudes associated with H4-SUB, H5-SUB, H8-SUB, H9-SUB and H10-SUB correlated significantly with the word intelligibility scores for words under reverberation. With Bonferroni correction, the correlations between the DFT amplitudes for H5-SUB and the intelligibility scores for words with RTs of 0.5 s and 1.5 s remained significant. Word intelligibility under reverberation in elderly listeners is related to their ability to encode the temporal fine structure of speech. The results expand knowledge about subcortical responses of elderly listeners in daily-life listening situations. The SUB responses of the speech ABR could be useful as an objective indicator to predict word intelligibility under reverberation.
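The ADD and SUB responses analyzed here are conventionally formed by averaging or subtracting responses to opposite stimulus polarities: adding emphasizes the envelope-following response, while subtracting emphasizes the temporal fine structure, whose neural representation inverts with polarity. A pure-Python sketch on synthetic waveforms (the 100-Hz and 500-Hz components and their amplitudes are invented for illustration; they stand in for the F0-related and harmonic components such as H4-H10):

```python
import cmath, math

def add_sub(resp_pos, resp_neg):
    # ADD keeps components with the same sign across polarities (envelope);
    # SUB keeps components that invert with polarity (fine structure)
    add = [(p + q) / 2 for p, q in zip(resp_pos, resp_neg)]
    sub = [(p - q) / 2 for p, q in zip(resp_pos, resp_neg)]
    return add, sub

def dft_amplitude(x, freq, fs):
    # Single-bin DFT amplitude at `freq` (integer number of cycles assumed)
    n = len(x)
    k = round(freq * n / fs)
    coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
    return 2 * abs(coeff) / n

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# Synthetic demo: a 100-Hz envelope component (same sign for both polarities)
# plus a 500-Hz fine-structure component (inverts with stimulus polarity)
fs, n = 4000, 400
env = [math.sin(2 * math.pi * 100 * t / fs) for t in range(n)]
tfs = [0.5 * math.sin(2 * math.pi * 500 * t / fs) for t in range(n)]
pos = [e + f for e, f in zip(env, tfs)]
neg = [e - f for e, f in zip(env, tfs)]
add, sub = add_sub(pos, neg)
print(round(dft_amplitude(add, 100, fs), 2))  # → 1.0 (envelope survives in ADD)
print(round(dft_amplitude(sub, 500, fs), 2))  # → 0.5 (fine structure in SUB)
```

This separation is why the SUB-response harmonics, rather than the ADD components, index temporal-fine-structure coding in the study above.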
Yoshimura, Yuko; Kikuchi, Mitsuru; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Remijn, Gerard B; Oi, Manabu; Munesue, Toshio; Higashida, Haruhiro; Minabe, Yoshio
The auditory-evoked P1m, recorded by magnetoencephalography, reflects central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, building on the previous report of P1m asynchrony in children with ADHD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD, to clarify whether P1m right-left hemispheric synchronization is related to the symptom of hyperactivity. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule-Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during the 12-minute MEG recording periods. Our results also suggest that asynchrony in the bilateral auditory processing system is associated with ADHD-like symptoms in children with ASD.
Yucel, Gunes; Petty, Christopher; McCarthy, Gregory; Belger, Aysenil
New, unusual, and changing events are important environmental cues, and the ability to detect these types of stimuli in the environment constitutes a biologically significant survival skill. We used event-related potentials to examine whether sensory and cognitive neural responses to unattended novel events are modulated by the complexity of a primary visuomotor task. Event-related potentials were elicited by unattended task-irrelevant pitch-deviant tones and novel environmental sounds while study participants performed a continuous visuomotor tracking task at two levels of difficulty, achieved by manipulating the control dynamics of a joystick. The results revealed that increased task complexity modulated evoked sensory and cognitive event-related potential components, indicating that detection of change and novelty in the unattended auditory channel is resource-limited.
Rass, Olga; Forsyth, Jennifer K; Krishnan, Giri P; Hetrick, William P; Klaunig, Mallory J; Breier, Alan; O'Donnell, Brian F; Brenner, Colleen A
The power and phase synchronization of the auditory steady-state response (ASSR) at 40-Hz stimulation are usually reduced in schizophrenia (SZ). The sensitivity of the 40-Hz ASSR to schizophrenia spectrum phenotypes, such as schizotypal personality disorder (SPD), or to familial risk has been less well characterized. We compared the ASSR of patients with SZ, persons with schizotypal personality disorder, first-degree relatives of patients with SZ, and healthy control participants. ASSRs were obtained to 20, 30, 40 and 50 Hz click trains, and assessed using measures of power (mean trial power or MTP) and phase consistency (phase locking factor or PLF). The MTP to 40-Hz stimulation was reduced in relatives, and there was a trend for MTP reduction in SZ. The 40-Hz ASSR was not reduced in SPD participants. PLF did not differ among groups. These data suggest the 40-Hz ASSR is sensitive to familial risk factors associated with schizophrenia.
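The two measures used in this study, mean trial power (MTP) and phase locking factor (PLF), can both be derived from the single-trial Fourier coefficient at the stimulation frequency: MTP averages single-trial power, while PLF measures phase consistency across trials independently of amplitude. A minimal pure-Python sketch on synthetic 40-Hz trials (the trial count, noise level, and sampling rate are illustrative assumptions, not the study's EEG pipeline):

```python
import cmath, math, random

def bin_coeff(x, freq, fs):
    # Normalized DFT coefficient at `freq` (integer cycles assumed)
    n = len(x)
    k = round(freq * n / fs)
    return sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
               for t in range(n)) / n

def mean_trial_power(trials, freq, fs):
    # MTP: average single-trial power at the stimulation frequency
    return sum(abs(bin_coeff(tr, freq, fs)) ** 2 for tr in trials) / len(trials)

def phase_locking_factor(trials, freq, fs):
    # PLF (inter-trial phase coherence): magnitude of the mean unit phasor;
    # 1.0 = identical phase on every trial, near 0 = random phases
    unit = [c / abs(c) for c in (bin_coeff(tr, freq, fs) for tr in trials)]
    return abs(sum(unit) / len(unit))

# Synthetic 40-Hz "trials" with identical phase plus small noise
fs, f, n = 400, 40.0, 200
rng = random.Random(1)
locked = [[math.sin(2 * math.pi * f * t / fs) + rng.gauss(0, 0.1)
           for t in range(n)] for _ in range(20)]
print(round(mean_trial_power(locked, f, fs), 2))   # → 0.25 (0.5 amplitude squared)
print(phase_locking_factor(locked, f, fs) > 0.95)  # → True (phases locked)
```

Because PLF normalizes each trial to unit amplitude, a group can show reduced MTP (as the relatives did) while PLF remains intact.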
Morris, David Jackson; Steinmetzger, Kurt; Tøndering, John
The modulation of auditory event-related potentials (ERP) by attention generally results in larger amplitudes when stimuli are attended. We measured the P1-N1-P2 acoustic change complex elicited with synthetic overt (second formant, F2 = 1000 Hz) and subtle (F2 = 100 Hz) diphthongs, while subjects....... Multivariate analysis of ERP components from the rising F2 changes showed main effects of attention on P2 amplitude and latency, and N1-P2 amplitude. P2 amplitude decreased by 40% between the attend and ignore conditions, and by 60% between the attend and divert conditions. The effect of diphthong magnitude...... was significant for components from a broader temporal window which included P1 latency and N1 amplitude. N1 latency did not vary between attention conditions, a finding that may be related to stimulation with a continuous vowel. These data show that a discernible P1-N1-P2 response can be observed to subtle vowel...
Introduction: Noonan syndrome (NS) is a heterogeneous genetic disease that affects many parts of the body. It was named after Dr. Jacqueline Anne Noonan, a paediatric cardiologist. Case Report: We report audiological tests and auditory brainstem response (ABR) findings in a 5-year-old Malay boy with NS. Despite showing the marked signs of NS, the child could only produce a few meaningful words. Audiological tests found him to have bilateral mild conductive hearing loss at low frequencies. In ABR testing, despite good waveform morphology, the results were atypical. The absolute latency of wave V was normal but the interpeak latencies of waves I-V, I-II and II-III were prolonged. Interestingly, the interpeak latency of waves III-V was abnormally short. Conclusion: The abnormal ABR results are possibly due to an abnormal anatomical condition of the brainstem and might contribute to the speech delay.
Miezejeski, C M; Heaney, G; Belser, R; Brown, W T; Jenkins, E C; Sersen, E A
Brainstem auditory evoked response latencies were studied in 75 males (13 with fragile X syndrome, 18 with mental retardation due to other causes, and 44 with no disability). Latency values were obtained for each ear for the positive deflections of waves I (P1), III (P3), and V (P5). Some individuals with mental retardation required sedation. Contrary to a previous report, latencies obtained for individuals with fragile X did not differ from those obtained for persons without mental retardation. Persons receiving sedation, whether or not their retardation was due to fragile X, had longer latencies for wave P5 than persons who did not receive sedation. This effect of sedation may also explain the previously reported increased latencies for persons with fragile X.
Cobb, Kensi M; Stuart, Andrew
The purpose of the study was to examine the differences in auditory brainstem response (ABR) latency and amplitude indices to the CE-Chirp stimuli in neonates versus young adults as a function of stimulus level, rate, polarity, frequency and gender. Participants were 168 healthy neonates and 20 normal-hearing young adults. ABRs were obtained to air- and bone-conducted CE-Chirps and air-conducted CE-Chirp octave band stimuli. The effects of stimulus level, rate, and polarity were examined with air-conducted CE-Chirps. The effect of stimulus level was also examined with bone-conducted CE-Chirps and CE-Chirp octave band stimuli. The effect of gender was examined across all stimulus manipulations. In general, ABR wave V amplitudes were significantly larger (p CE-Chirp stimuli with all stimulus manipulations. For bone-conducted CE-Chirps, infants had significantly shorter wave V latencies than adults at 15 dB nHL and 45 dB nHL (p = 0.02). Adult wave V amplitude was significantly larger for bone-conducted CE-Chirps only at 30 dB nHL (p = 0.02). The effect of gender was not statistically significant across all measures (p > 0.05). Significant differences in ABR latencies and amplitudes exist between newborns and young adults using CE-Chirp stimuli. These differences are consistent with differences to traditional click and tone burst stimuli and reflect maturational differences as a function of age. These findings continue to emphasize the importance of interpreting ABR results using age-based normative data.
Hu, Shuowen; Olulade, Olumide; Castillo, Javier Gonzalez; Santos, Joseph; Kim, Sungeun; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M
A confound for functional magnetic resonance imaging (fMRI), especially for auditory studies, is the presence of imaging acoustic noise generated mainly as a byproduct of rapid gradient switching during volume acquisition and, to a lesser extent, of the radiofrequency transmit. This work utilized a novel pulse sequence to present actual imaging acoustic noise for characterization of the induced hemodynamic responses and assessment of linearity in the primary auditory cortex with respect to noise duration. Results show that responses to brief-duration (46 ms) imaging acoustic noise are highly nonlinear, while responses to longer-duration (>1 s) imaging acoustic noise become approximately linear, with the right primary auditory cortex exhibiting a higher degree of nonlinearity than the left for the investigated noise durations. This study also assessed the spatial extent of activation induced by imaging acoustic noise, showing that the use of modeled responses (specific to imaging acoustic noise) as the reference waveform revealed additional activations in the auditory cortex not observed with a canonical gamma-variate reference waveform, suggesting an improvement in detection sensitivity for imaging acoustic noise-induced activity. Longer-duration (1.5 s) imaging acoustic noise was observed to induce activity that expanded outwards from Heschl's gyrus to cover the superior temporal gyrus as well as parts of the middle temporal gyrus and insula, potentially affecting higher-level acoustic processing.
Straaten, H.L.M. van; Hille, E.T.M.; Kok, J.H.; Verkerk, P.H.; Baerts, W.; Bunkers, C.M.; Smink, E.W.A.; Elburg, R.M. van; Kleine, M.J.K. de; Ilsen, A.; Maingay-Visser, A.P.G.F.; Vries, L.S. de; Weisglas-Kuperus, N.
Aim: As part of a future national neonatal hearing screening programme in the Netherlands, automated auditory brainstem response (AABR) hearing screening was implemented in seven neonatal intensive care units (NICUs). The objective was to evaluate key outcomes of this programme: participation rate,
Speech delay with an unknown cause is a problem among children. This diagnosis is the last differential diagnosis after observing normal findings in routine hearing tests. The present study was undertaken to determine whether auditory brainstem responses to click stimuli differ between normally developing children and children suffering from delayed speech with unknown causes. In this cross-sectional study, we compared click auditory brainstem responses between 261 children who were clinically diagnosed with delayed speech with unknown causes based on normal routine auditory test findings and neurological examinations and had >12 months of speech delay (case group) and 261 age- and sex-matched normally developing children (control group). Our results indicated that the case group exhibited significantly higher wave amplitude responses to click stimuli (waves I, III, and V) than did the control group (P=0.001). These amplitudes were significantly reduced after 1 year (P=0.001); however, they were still significantly higher than those of the control group (P=0.001). The significant differences were seen regardless of the age and sex of the participants. There were no statistically significant differences between the 2 groups considering the latency of waves I, III, and V. In conclusion, the higher amplitudes of waves I, III, and V observed in the auditory brainstem responses to click stimuli among patients with speech delay with unknown causes might be used as a diagnostic tool to track patients’ improvement after treatment.
Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian
A computational model of cat auditory nerve fiber (ANF) responses to electrical stimulation is presented. The model assumes that (1) there exist at least two sites of spike generation along the ANF and (2) both an anodic (positive) and a cathodic (negative) charge in isolation can evoke a spike. ...
Cobb, Kensi M.; Stuart, Andrew
Purpose The purpose of this study was to compare auditory brainstem response (ABR) thresholds to air- and bone-conducted CE-Chirps in neonates and adults. Method Thirty-two neonates with no physical or neurologic challenges and 20 adults with normal hearing participated. ABRs were acquired with a starting intensity of 30 dB normal hearing level…
Stuart, Andrew; Yang, Edward Y.
Simultaneous 3-channel recorded auditory brainstem responses (ABRs) were obtained from 20 neonates with various high-pass filter settings and low intensity levels. Results support the advocacy of less restrictive high-pass filtering for neonatal and infant ABR screening to air-conducted and bone-conducted clicks.
The aims of the present study were to investigate the ability of hearing-impaired (HI) individuals with different binaural hearing conditions to discriminate spatial auditory sources at the midline and lateral positions, and to explore the possible central processing mechanisms by measuring the minimal audible angle (MAA) and mismatch negativity (MMN) response. To measure MAA at the left/right 0°, 45°, and 90° positions, 12 normal-hearing (NH) participants and 36 patients with sensorineural hearing loss were recruited; the latter comprised 12 patients with symmetrical hearing loss (SHL) and 24 patients with asymmetrical hearing loss (AHL), of whom 12 had unilateral hearing loss on the left (UHLL) and 12 on the right (UHLR). In addition, 128-electrode electroencephalography was used to record the MMN response in a separate group of 60 patients (20 UHLL, 20 UHLR, and 20 SHL patients) and 20 NH participants. The results showed MAA thresholds of the NH participants to be significantly lower than those of the HI participants. Also, a significantly smaller MAA threshold was obtained at the midline position than at the lateral position in both the NH and SHL groups. However, in the AHL group, the MAA threshold for the 90° position on the affected side was significantly smaller than the MAA thresholds obtained at other positions. Significantly reduced amplitudes and prolonged latencies of the MMN were found in the HI groups compared to the NH group. In addition, contralateral activation was found in the UHL group for sounds emanating from the 90° position on the affected side and in the NH group. These findings suggest that the abilities of spatial discrimination at the midline and lateral positions vary significantly in different hearing conditions. A reduced MMN amplitude and prolonged latency together with bilaterally symmetrical cortical activations over the auditory hemispheres indicate possible cortical compensatory changes associated with poor
Ferreira, Lucas L; Vanderlei, Luiz Carlos M; Guida, Heraldo L; de Abreu, Luiz Carlos; Garner, David M; Vanderlei, Franciele M; Ferreira, Celso; Valenti, Vitor E
The acute effects after exposure to different styles of music on cardiac autonomic modulation assessed through heart rate variability (HRV) analysis have not yet been well elucidated. We aimed to investigate the recovery response of cardiac autonomic modulation in women after exposure to musical auditory stimulation of different styles. The study was conducted on 30 healthy women aged between 18 years and 30 years. We did not include subjects having previous experience with musical instruments and those who had an affinity for music styles. The volunteers remained at rest for 10 min and were exposed to classical baroque (64-84 dB) and heavy metal (75-84 dB) music for 10 min, and their HRV was evaluated for 30 min after music cessation. We analyzed the following HRV indices: Standard deviation of normal-to-normal (SDNN) intervals, root mean square of successive differences (RMSSD), percentage of normal-to-normal 50 (pNN50), low frequency (LF), high frequency (HF), and LF/HF ratio. SDNN, LF in absolute units (ms²) and normalized (nu), and LF/HF ratio increased while HF index (nu) decreased after exposure to classical baroque music. Regarding the heavy metal music style, it was observed that there were increases in SDNN, RMSSD, pNN50, and LF (ms²) after the musical stimulation. In conclusion, the recovery response of cardiac autonomic modulation after exposure to auditory stimulation with music featured an increased global activity of both systems for the two musical styles, with a cardiac sympathetic modulation for classical baroque music and a cardiac vagal tone for the heavy metal style.
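The time-domain HRV indices named in the abstract (SDNN, RMSSD, pNN50) have standard definitions that can be sketched in a few lines. The code below is a generic illustration of those definitions, not code from the study; the function name and the sample interval series are invented.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Time-domain HRV indices from a series of normal-to-normal (NN)
    intervals in milliseconds (illustrative implementation)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                       # SDNN: SD of NN intervals
    rmssd = np.sqrt(np.mean(diff ** 2))         # RMSSD: RMS of successive differences
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50)  # pNN50: % of successive diffs > 50 ms
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

# Example: a short synthetic NN-interval series (ms)
print(hrv_time_domain([800, 810, 790, 860, 805, 795]))
```

The frequency-domain indices (LF, HF, LF/HF) additionally require resampling the NN series and estimating its power spectrum, so they are omitted here.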
Silva, Daniela da
Introduction: Literature data are not conclusive as to the influence of neonatal complications on the maturational process of the auditory system observed by auditory brainstem response (ABR) in term and preterm infants. Objectives: To determine the real influence of neonatal complications in infants through sequential auditory evaluation. Methods: Historical cohort study in a tertiary referral center. A total of 114 neonates met the inclusion criteria: treatment at the Universal Neonatal Hearing Screening Program of the local hospital; at least one risk indicator for hearing loss; presence at both evaluations (the first after hospital discharge from the neonatal unit and the second at 6 months old); and all latencies in ABR and transient otoacoustic emissions present in both ears. Results: The complications that most influenced the ABR findings were Apgar scores less than 6 at 5 minutes, gestational age, intensive care unit stay, peri-intraventricular hemorrhage, and mechanical ventilation. Conclusion: Sequential auditory evaluation is necessary in premature and term newborns with risk indicators for hearing loss to correctly identify injuries in the auditory pathway.
Jovanovic-Bateman, L; Hedreville, R
This prospective study involved 79 homozygote and heterozygote sickle cell anaemia patients (16 to 50 years old) and a control group of 40 people. All patients underwent ENT, audiological and brainstem auditory evoked response (BSER) examinations in order to evaluate the incidence of sensorineural hearing loss (SNHL), to identify changes at the level of the cochlear nerve and the central pathways, and to determine the most vulnerable group, in order to intervene with early prevention and rehabilitation for this condition. A hearing loss of greater than 20 dB at two or more frequencies was found in 36 (45.57 per cent) sickle cell patients (19 (47.22 per cent) HbSC patients and 17 (43.59 per cent) HbSS patients) and three (7.5 per cent) members of the control group. Homozygote and heterozygote patients, as well as both sexes, were equally affected. Bilateral hearing loss occurred in 19 (52.78 per cent) patients, unilateral right-sided hearing loss in five (13.89 per cent) patients and unilateral left-sided hearing loss in 12 (33.33 per cent) patients. Brainstem auditory evoked potentials demonstrated a prolonged I-V (III-V) interpeak latency in 13 (25.35 per cent) sickle cell patients (11 men (eight with HbSS) and two women). The hearing loss in HbSS patients was neural in nature and of earlier onset; the hearing loss in HbSC patients was usually cochlear in nature and of later onset. Despite high medical standards and 100 per cent social security cover, the high incidence of SNHL in our sickle cell affected patients (the majority with the Benin haplotype) was probably due to their specific haematological profile and to the original geographical distribution of the disease in the tropics. Our results highlight the necessity for early and regular hearing assessment of sickle cell patients, including BSER examination, especially in male patients with SNHL.
The common variant rs1344706 within the zinc-finger protein gene ZNF804A has been strongly implicated in schizophrenia (SZ) susceptibility by a series of recent genetic association studies. Although associated with a pattern of altered neural connectivity, evidence that increased risk is mediated by an effect on cognitive deficits associated with the disorder has been equivocal. This study investigated whether the same ZNF804A risk allele was associated with variation in the P300 auditory-evoked response, a cognitively relevant putative endophenotype for SZ. We compared P300 responses between carriers and noncarriers of the ZNF804A risk allele in Irish patients and controls (n=97). P300 response was observed to vary according to genotype in this sample, such that risk allele carriers showed relatively higher P300 responses than noncarriers. This finding accords with behavioural data reported by our group and others. It is also consistent with the idea that ZNF804A may have an impact on cortical efficiency, reflected in the higher levels of activation required to achieve comparable behavioural accuracy on the task used.
Ouchi, Yoshitaka; Meguro, Kenichi; Akanuma, Kyoko; Kato, Yuriko; Yamaguchi, Satoshi
Background. Alzheimer's disease (AD) patients have a poor response to the voices of caregivers. After administration of donepezil, caregivers often find that patients respond more frequently, whereas they had previously pretended to be “deaf.” We investigated whether auditory selective attention is associated with response to donepezil. Methods. The subjects were 40 AD patients, 20 elderly healthy controls (HCs), and 15 young HCs. Pure tone audiometry was conducted, and an original Auditory Selective Attention (ASA) test was performed with a MoCA vigilance test. Reassessment of the AD group was performed after donepezil treatment for 3 months. Results. The hearing level of the AD group was the same as that of the elderly HC group. However, ASA test scores were decreased in the AD group and were correlated with the vigilance test scores. Donepezil responders (MMSE 3+) also showed improvement on the ASA test. At baseline, the responders had higher vigilance and lower ASA test scores. Conclusion. Contrary to the common view, AD patients had a similar level of hearing ability to healthy elderly people. Auditory attention was impaired in AD patients, which suggests that unnecessary sounds should be avoided in nursing homes. Auditory selective attention is associated with response to donepezil in AD. PMID:26161001
Mills, David M; Schmiedt, Richard A
Auditory characteristics of metabolic or strial presbycusis were investigated using an animal model in which young adult Mongolian gerbils (Meriones unguiculatus) were implanted with an osmotic pump supplying furosemide continuously to the round window. This model causes chronic lowering of the endocochlear potential (EP) and results in auditory responses very similar to those seen in quiet-aged gerbils (Schmiedt et al., J. Neurosci. 22:9643-9650, 2002). Auditory function was examined up to one week post-implant by measurement of auditory brainstem responses (ABRs) and distortion product otoacoustic emissions (DPOAEs). Emission "threshold" was defined as the stimulus level required to reach a criterion emission amplitude. Comparing all responses on a "threshold-shift diagram," where emission threshold increases were plotted versus ABR threshold increases, the following results were obtained: (1) On average, the increase of the emission threshold was about 55% of the increase in ABR threshold, with comparatively little scatter. (2) The main dysfunction in metabolic presbycusis appears to be a decrease in the gain of the cochlear amplifier, combined with an additional, smaller increase in neural threshold, both effects caused by a chronically low EP. (3) For ABR threshold increases over 20 dB, the points for the chronic low-EP condition were largely separate from those previously found for permanent acoustic damage. The threshold-shift diagram therefore provides a method for noninvasive differential diagnosis of two common hearing dysfunctions.
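The abstract defines emission "threshold" as the stimulus level required to reach a criterion emission amplitude. One common way to obtain such a threshold from a measured input/output function is linear interpolation between the measured points; the sketch below illustrates that generic idea only (the function name and data are invented, not taken from the study).

```python
import numpy as np

def threshold_at_criterion(levels_db, amplitudes_db, criterion_db):
    """Stimulus level at which a monotonically growing input/output
    function reaches a criterion response amplitude, by linear
    interpolation between measured points (illustrative)."""
    levels = np.asarray(levels_db, dtype=float)
    amps = np.asarray(amplitudes_db, dtype=float)
    if criterion_db < amps[0] or criterion_db > amps[-1]:
        raise ValueError("criterion outside measured amplitude range")
    # np.interp requires increasing x; amplitudes grow with level here
    return float(np.interp(criterion_db, amps, levels))

# Invented DPOAE-style example: emission amplitude vs stimulus level (dB SPL)
levels = [30, 40, 50, 60, 70]
amps = [-10, -2, 5, 10, 13]
print(threshold_at_criterion(levels, amps, 0.0))  # level giving 0 dB SPL emission
```

A threshold shift is then simply the difference between such thresholds measured before and after the manipulation, which is what the threshold-shift diagram plots for emissions against ABRs.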
The auditory brainstem response (ABR) is a test widely used to assess the integrity of the brainstem. Although it is considered an auditory-evoked potential influenced by the physical characteristics of the stimulus, such as rate, polarity, and type of stimulus, it may also be influenced by changes in several other parameters. The use of anesthetics may adversely influence ABR wave latency values. One anesthetic used for ABR assessment, especially in animal research, is the ketamine/xylazine combination. Our objective was to determine the influence of the ketamine/xylazine anesthetic on ABR latency values in adult gerbils. The ABRs of 12 adult gerbils injected with the anesthetic were collected on three consecutive days, for a total of six collections: pre-collection and collections A, B, C, D, and E. Before each collection the gerbil was injected with a dose of ketamine (100 mg/kg)/xylazine (4 mg/kg). For the capture of the ABR, 2000 click stimuli were used with rarefaction polarity at 13 stimuli per second, at 80 dBnHL intensity with insert earphones. A statistically significant difference was observed in the latency of wave V in the ABR of gerbils in the C and D collections compared to the pre-, A, and E collections, and no difference was observed between the pre-, A, B, and E collections. We conclude that the use of ketamine/xylazine increases the latency of wave V of the ABR after several doses injected into adult gerbils; thus clinicians should consider the use of this substance in the assessment of ABR.
Hoshiyama, Minoru; Okamoto, Hidehiko; Kakigi, Ryusuke
We analysed two different neural mechanisms related to the unconscious processing of auditory stimulation, neural adaptation and mismatch negativity (MMN), using magnetoencephalography in healthy non-musicians. Four kinds of conditioning stimulus (CS): white noise, a 675-Hz pure tone, and complex tones with six (CT6) and seven components (CT7), were used for analysing neural adaptation. The seven spectral components of CT7 were spaced by 1/7 octaves between 500 and 906 Hz on the logarithmic scale. The CT6 components contained the same spectral components as CT7, except for the center frequency, 675 Hz. Subjects could not distinguish CT6 from CT7 in a discrimination test. A test stimulus (TS), a 675-Hz tone, was presented after the CS, and the effects of the presence of the same 675-Hz frequency in the CS on the magnetoencephalographic response elicited by the TS were evaluated. The P2m component following CT7 was significantly smaller in current strength than that following CT6. The equivalent current dipole for P2m was located approximately 10 mm anterior to the preceding N1m. This result indicated that neural adaptation was taking place in the anterior part of the auditory cortex, even if the sound difference was subthreshold. By contrast, the magnetic counterpart of the MMN was not recorded when CT6 and CT7 were used as standard and deviant stimuli, respectively, consistent with the discrimination test. In conclusion, neural adaptation is considered to be more sensitive than our consciousness or the MMN, or is caused by an independent mechanism.
Zorović, Maja; Hedwig, Berthold
The activity of four types of sound-sensitive descending brain neurons in the cricket Gryllus bimaculatus was recorded intracellularly while animals were standing or walking on an open-loop trackball system. In a neuron with a contralaterally descending axon, the male calling song elicited responses that copied the pulse pattern of the song during standing and walking. The accuracy of pulse copying increased during walking. Neurons with ipsilaterally descending axons responded weakly to sound only during standing. The responses were mainly to the first pulse of each chirp, whereas the complete pulse pattern of a chirp was not copied. During walking the auditory responses were suppressed in these neurons. The spiking activity of all four neuron types was significantly correlated to forward walking velocity, indicating their relevance for walking. Additionally, injection of depolarizing current elicited walking and/or steering in three of four neuron types described. In none of the neurons was the spiking activity both sufficient and necessary to elicit and maintain walking behaviour. Some neurons showed arborisations in the lateral accessory lobes, pointing to the relevance of this brain region for cricket audition and descending motor control.
Verhulst, Sarah; Shera, Christopher A.
Forward and reverse cochlear latency and its relation to the frequency tuning of the auditory filters can be assessed using tone bursts (TBs). Otoacoustic emissions (TBOAEs) estimate the cochlear roundtrip time, while auditory brainstem responses (ABRs) to the same stimuli aim at measuring the auditory filter buildup time. Latency ratios are generally close to two and controversy exists about the relationship of this ratio to cochlear mechanics. We explored why the two methods provide different estimates of filter buildup time, and ratios with large inter-subject variability, using a time-domain model for OAEs and ABRs. We compared latencies for twenty models, in which all parameters but the cochlear irregularities responsible for reflection-source OAEs were identical, and found that TBOAE latencies were much more variable than ABR latencies. Multiple reflection-sources generated within the evoking stimulus bandwidth were found to shape the TBOAE envelope and complicate the interpretation of TBOAE latency and TBOAE/ABR ratios in terms of auditory filter tuning. PMID:27175040
Horne, Colin D F; Sumner, Christian J; Seeber, Bernhard U
We present a phenomenological model of electrically stimulated auditory nerve fibers (ANFs). The model reproduces the probabilistic and temporal properties of the ANF response to both monophasic and biphasic stimuli, in isolation. The main contribution of the model lies in its ability to reproduce statistics of the ANF response (mean latency, jitter, and firing probability) under both monophasic and cathodic-anodic biphasic stimulation, without changing the model's parameters. The response statistics of the model depend on stimulus level and duration of the stimulating pulse, reproducing trends observed in the ANF. In the case of biphasic stimulation, the model reproduces the effects of pseudomonophasic pulse shapes and also the dependence on the interphase gap (IPG) of the stimulus pulse, an effect that is quantitatively reproduced. The model is fitted to ANF data using a procedure that uniquely determines each model parameter. It is thus possible to rapidly parameterize a large population of neurons to reproduce a given set of response statistic distributions. Our work extends the stochastic leaky integrate and fire (SLIF) neuron, a well-studied phenomenological model of the electrically stimulated neuron. We extend the SLIF neuron so as to produce a realistic latency distribution by delaying the moment of spiking. During this delay, spiking may be abolished by anodic current. By this means, the probability of the model neuron responding to a stimulus is reduced when a trailing phase of opposite polarity is introduced. By introducing a minimum wait period that must elapse before a spike may be emitted, the model is able to reproduce the differences in the threshold level observed in the ANF for monophasic and biphasic stimuli. Thus, the ANF response to a large variety of pulse shapes are reproduced correctly by this model.
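As a rough illustration of the stochastic leaky integrate-and-fire (SLIF) idea this model class builds on, the toy simulation below integrates a cathodic-anodic biphasic current pulse against a Gaussian-distributed threshold. It deliberately omits the paper's latency and spike-abolition mechanisms; all parameters, units, and names are invented for illustration and are not fitted to ANF data.

```python
import numpy as np

def firing_probability(amp, phase_us=50, gap_us=0, n_trials=500, seed=0):
    """Toy stochastic leaky integrate-and-fire response to a
    cathodic-anodic biphasic pulse (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    dt = 1.0                             # time step (us)
    tau = 100.0                          # membrane time constant (us)
    theta_mean, theta_sd = 1.0, 0.1      # stochastic threshold (~10% relative noise)
    n_phase, n_gap = int(phase_us / dt), int(gap_us / dt)
    # cathodic (depolarizing, +) phase, interphase gap, anodic (-) phase
    stim = np.concatenate([np.full(n_phase, amp),
                           np.zeros(n_gap),
                           np.full(n_phase, -amp)])
    fired = 0
    for _ in range(n_trials):
        theta = rng.normal(theta_mean, theta_sd)  # fresh threshold each trial
        v = 0.0
        for i in stim:
            v += dt * (-v / tau + i)              # leaky integration
            if v >= theta:
                fired += 1
                break
    return fired / n_trials

# Firing probability grows with current level (a sigmoidal rate-level curve)
for amp in (0.02, 0.025, 0.03):
    print(amp, firing_probability(amp))
```

With a fixed seed the trial-to-trial threshold draws are reproducible, so the rate-level curve is deterministic; the full model in the abstract additionally delays the spike decision so that a trailing anodic phase can abolish it.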
How natural communication sounds are spatially represented across the inferior colliculus, the main center of convergence for auditory information in the midbrain, is not known. The neural representation of the acoustic stimuli results from the interplay of locally differing input and the organization of spectral and temporal neural preferences that change gradually across the nucleus. This raises the question of how similar the neural representation of the communication sounds is across these gradients of neural preferences, and whether it also changes gradually. Analyzed neural recordings were multi-unit cluster spike trains from guinea pigs presented with a spectrotemporally rich set of eleven species-specific communication sounds. Using cross-correlation, we analyzed the response similarity of spiking activity across a broad frequency range for neurons of similar and different frequency tuning. Furthermore, we separated the contribution of the stimulus to the correlations to investigate whether similarity is only attributable to the stimulus or whether interactions exist between the multi-unit clusters that lead to neural correlations, and whether these follow the same representation as the response correlations. We found that similarity of responses depends on the neurons' spatial distance for similarly and differently frequency-tuned neurons, and that similarity decreases gradually with spatial distance. Significant neural correlations exist and contribute to the total response similarity. Our findings suggest that for multi-unit clusters in the mammalian inferior colliculus, the gradual response similarity with spatial distance to natural complex sounds is shaped by neural interactions and the gradual organization of neural preferences.
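Separating the stimulus contribution from "neural" correlation between simultaneously recorded responses is commonly done with a shuffle (shift) predictor: correlating trial-matched responses gives the total similarity, while correlating mismatched trials keeps only what the common stimulus explains. The sketch below illustrates this generic textbook approach on synthetic trial-by-bin responses; it is not the authors' analysis code, and all names and data are invented.

```python
import numpy as np

def response_and_neural_correlation(trials_a, trials_b):
    """Return (total correlation, shuffle-corrected 'neural' correlation)
    between two units' trial-by-bin response matrices (illustrative)."""
    a = np.asarray(trials_a, dtype=float)   # trials x time bins, unit A
    b = np.asarray(trials_b, dtype=float)   # trials x time bins, unit B

    def corr(x, y):
        x, y = x.ravel() - x.mean(), y.ravel() - y.mean()
        return float(x @ y / np.sqrt((x @ x) * (y @ y)))

    total = corr(a, b)                          # same-trial pairing
    shuffled = corr(a, np.roll(b, 1, axis=0))   # shift predictor: trial i vs i-1
    return total, total - shuffled              # stimulus part removed

# Synthetic example: both units share a stimulus-locked rate profile (PSTH)
# plus trial-by-trial co-variability that the shift predictor removes.
rng = np.random.default_rng(1)
psth = np.sin(np.linspace(0, 2 * np.pi, 50)) + 1.5
shared = rng.normal(0, 0.5, (20, 50))
a = psth + shared + rng.normal(0, 0.2, (20, 50))
b = psth + shared + rng.normal(0, 0.2, (20, 50))
total, neural = response_and_neural_correlation(a, b)
print(round(total, 2), round(neural, 2))
```

Because the shared noise is independent across trials, the shifted pairing retains only the stimulus-locked component, and the difference estimates correlation attributable to neural interactions.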
Griskova, Inga; Mørup, Morten; Parnas, Josef
The aim of this study was to investigate, in healthy subjects, the modulation of amplitude and phase precision of the auditory steady-state response (ASSR) to 40 Hz stimulation in two resting conditions varying in the level of arousal. Previously, ASSR measures have been shown to be affected by the level of arousal, making it pertinent to know the effects of fluctuations in arousal on the passive response to gamma-range stimulation. In nine healthy volunteers, trains of 40 Hz click stimuli were applied during two conditions: in the "high arousal" condition subjects were sitting upright silently reading a book of interest; in the "low arousal" condition
Lawrence, Carlie A; Barry, Robert J
The phasic evoked cardiac response (ECR) produced by innocuous stimuli requiring cognitive processing may be described as the sum of two independent response components: an initial heart rate (HR) deceleration (ECR1) and a slightly later HR acceleration (ECR2), hypothesised to reflect stimulus registration and cognitive processing load, respectively. This study investigated the effects of processing load on the ECR and the event-related potential (ERP), in an attempt to find similarities between measures found important in the autonomic orienting reflex context and in the ERP literature. We examined the effects of cognitive load within subjects, using a long inter-stimulus interval (ISI), ANS-style paradigm. Subjects (N=40) were presented with 30-35 80 dB, 1000 Hz tones with a variable long ISI (7-9 s), and were required to silently count, or allowed to ignore, the tones in two counterbalanced stimulus blocks. The ECR showed a significant effect of counting, allowing separation of the two ECR components by subtracting the NoCount from the Count condition. The auditory ERP showed the expected obligatory processing effects in the N1, and substantial effects of cognitive load in the late positive complex (LPC). These data offer support for ANS-CNS connections worth pursuing in future work.
Keesling, Devan A; Parker, Jordan Paige; Sanchez, Jason Tait
iChirp-evoked auditory brainstem responses (ABRs) yield a larger wave V amplitude at low intensity levels than traditional broadband click stimuli, providing a reliable estimation of hearing sensitivity. However, the advantages of iChirp stimulation at high intensity levels are unknown. We tested whether high-intensity (i.e., 85 dBnHL) iChirp stimulation results in larger and more reliable ABR waveforms than clicks. Using the commercially available Intelligent Hearing Systems SmartEP platform, we recorded ABRs from 43 normal-hearing young adults. We report that absolute peak latencies were more variable for the iChirp and were ~3 ms longer, the latter simply due to the temporal duration of the signal. Interpeak latencies were slightly shorter for the iChirp, most evidently between waves I-V. Interestingly, click responses were easier to identify, and peak-to-trough amplitudes for waves I, III, and V were significantly larger than for the iChirp. These differences were not due to residual noise levels. We speculate that high-intensity iChirp stimulation reduces neural synchrony and conclude that for retrocochlear evaluations, click stimuli should remain the standard for ABR neurodiagnostic testing.
So, Suzanne Ho-wai; Begemann, Marieke J. H.; Gong, Xianmin; Sommer, Iris E.
Neuroticism has been shown to adversely influence the development and outcome of psychosis. However, how this personality trait associates with the individual’s responses to psychotic symptoms is less well known. Auditory verbal hallucinations (AVHs) have been reported by patients with psychosis and non-clinical individuals. There is evidence that voice-hearers who are more distressed by and resistant against the voices, as well as those who appraise the voices as malevolent and powerful, have poorer outcome. This study aimed to examine the mechanistic association of neuroticism with the cognitive-affective reactions to AVH. We assessed 40 psychotic patients experiencing frequent AVHs, 135 non-clinical participants experiencing frequent AVHs, and 126 healthy individuals. In both clinical and non-clinical voice-hearers alike, a higher level of neuroticism was associated with more distress and behavioral resistance in response to AVHs, as well as a stronger tendency to perceive voices as malevolent and powerful. Neuroticism fully mediated the found associations between childhood trauma and the individuals’ cognitive-affective reactions to voices. Our results supported the role of neurotic personality in shaping maladaptive reactions to voices. Neuroticism may also serve as a putative mechanism linking childhood trauma and psychological reactions to voices. Implications for psychological models of hallucinations are discussed. PMID:27698407
Prestin is the motor protein expressed in the cochlear outer hair cells (OHCs) of the mammalian inner ear. The electromotility of OHCs driven by prestin is responsible for the cochlear amplification that is required for normal hearing in adult animals. Postnatal expression of prestin and activity of OHCs may contribute to the maturation of hearing in rodents. However, the temporal and spatial expression of prestin in the cochlea during development is not well characterized. In the present study, we examined the expression and function of prestin in the OHCs of the apical, middle, and basal turns of the cochleae of postnatal rats. Prestin first appeared at postnatal day 6 (P6) in the basal turn, P7 in the middle turn, and P9 in the apical turn of the cochlea. The expression level increased progressively over the next few days and by P14 reached the mature level for all three segments. By comparison with the time course of the development of the auditory brainstem response at different frequencies, our data reveal that prestin expression is synchronized with hearing development. The present study suggests that the onset of hearing may require the expression of prestin and is determined by the mature function of OHCs.
Sininger, Y S; Cone-Wesson, B; Folsom, R C; Gorga, M P; Vohr, B R; Widen, J E; Ekelid, M; Norton, S J
1) To describe the auditory brain stem response (ABR) measurement system and optimized methods used for the study of newborn hearing screening. 2) To determine how recording and infant factors related to the screening, using well-defined, specific ABR outcome measures. Seven thousand one hundred seventy-nine infants, 4478 from the neonatal intensive care unit (NICU) and the remainder from the well-baby nursery, were evaluated with an automated ABR protocol in each ear. Two-channel recordings were obtained (vertex to mastoid, or channel A, and vertex to nape of neck, or channel B) in response to click stimuli of 30 and 69 dB nHL in all infants, as well as 50 dB nHL in infants who did not meet criteria for response at 30 dB. Criteria for response included F(SP) ≥ 3.1 and a tester judgment of response. Criteria could be met in the first or repeat test with a maximum of 6144 accepted sweeps per test. More than 99% of infants could complete the ABR protocol. More than 90% of NICU and well-baby nursery infants "passed" given the strict criteria for response, whereas 86% of those with high risk factors met the criterion for ABR response detection. The number of infants who did not meet ABR response criteria in one or both ears was systematically related to stimulus level, with the largest group not meeting criteria at 30 dB, followed by 50 and 69 dB nHL. Meeting criteria on the ABR was positively correlated with the amplitude of wave V, low noise, and low electrode impedance. Factors that predicted how many sweeps would be needed to reach the criterion F(SP) included noise level of the test site, state of the baby (for example, quiet sleep versus crying), recording noise, electrode impedance, and response latency. Channel A (vertex to mastoid) reached criterion more often than channel B (vertex to nape of neck) due to higher noise in channel B. Average total test time for 30 dB nHL screening in both ears was under 8 minutes. Well babies with risk factors took slightly longer to
Shin, Yun-Kyoung; Proctor, Robert W
Previous studies have paired a visual-manual Task 1 with an auditory-vocal Task 2 to evaluate whether the psychological refractory period (PRP) effect is eliminated with two ideomotor-compatible tasks (for which stimuli resemble the response feedback). The present study varied the number of stimulus-response alternatives for Task 1 in three experiments to determine whether set-size and PRP effects were absent, as would be expected if the tasks bypass limited-capacity response-selection processes. In Experiments 1 and 2, the visual-manual task was used as Task 1, with lever-movement and keypress responses, respectively. In Experiment 3, the auditory-vocal task was used as Task 1 and the visual-manual task as Task 2. A significant lengthening of reaction time for 4 vs. 2 alternatives was found for the visual-manual Task 1 and the Task 2 PRP effect in Experiments 1 and 2, suggesting that the visual-manual task is not ideomotor compatible. Neither effect of set size was significant for the auditory-vocal Task 1 in Experiment 3, but there was still a Task 2 PRP effect. Our results imply that neither version of the visual-manual task is ideomotor compatible; other considerations suggest that the auditory-vocal task may also still require response selection.
Bianca C. R. de Castro
The behavior of the geometric indices of heart rate variability (HRV) during musical auditory stimulation is poorly described in the literature. The objective was to investigate the acute effects of classical musical auditory stimulation on the geometric indices of HRV in women in response to the postural change maneuver (PCM). We evaluated 11 healthy women between 18 and 25 years old. We analyzed the following indices: triangular index, triangular interpolation of RR intervals, and Poincaré plot indices (standard deviation of the instantaneous beat-to-beat variability [SD1], standard deviation of long-term continuous RR-interval variability [SD2], and the ratio between the short- and long-term variations of RR intervals [SD1/SD2]). HRV was recorded at seated rest for 10 min. The women quickly stood up from a seated position in up to 3 s and remained standing still for 15 min. HRV was recorded at the following periods: rest, 0-5 min, 5-10 min, and 10-15 min during standing. In the second protocol, the subject was exposed to musical auditory stimulation (Pachelbel's Canon in D) for 10 min in the seated position before standing. The Shapiro-Wilk test was used to verify normality of the data; ANOVA for repeated measures followed by the Bonferroni test was used for parametric variables, and Friedman's test followed by Dunn's post-test for non-parametric distributions. In the first protocol, all indices were reduced at 10-15 min after the volunteers stood up. In the musical auditory stimulation protocol, the SD1 index was reduced at 5-10 min after the volunteers stood up compared with the music period. The SD1/SD2 ratio was decreased in the control and music periods compared with 5-10 min after the volunteers stood up. Musical auditory stimulation attenuates the cardiac autonomic responses to the PCM.
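The Poincaré indices named in this abstract are computable directly from successive RR-interval pairs: SD1 is the dispersion of the differences of successive intervals and SD2 the dispersion of their sums, each scaled by √2. A minimal sketch (function name and RR values are illustrative, not the study's data):

```python
import numpy as np

def poincare_indices(rr_ms):
    """Poincare plot indices from a series of RR intervals (ms).

    SD1: dispersion perpendicular to the line of identity
         (short-term, beat-to-beat variability).
    SD2: dispersion along the line of identity (long-term variability).
    """
    rr = np.asarray(rr_ms, dtype=float)
    x, y = rr[:-1], rr[1:]                      # successive RR pairs
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)
    return sd1, sd2, sd1 / sd2

# Illustrative RR series (ms), not data from the study
rr = [812, 790, 805, 821, 799, 810, 795, 808]
sd1, sd2, ratio = poincare_indices(rr)
```

SD1 captures the parasympathetically mediated beat-to-beat variability that the study found reduced after standing, so a drop in SD1 alone lowers the SD1/SD2 ratio directly.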
Mogdans, J; Knudsen, E I
1. The optic tectum of the barn owl contains a physiological map of interaural level difference (ILD) that underlies, in part, its map of auditory space. Monaural occlusion shifts the range of ILDs experienced by an animal and alters the correspondence of ILDs with source locations. Chronic monaural occlusion during development induces an adaptive shift in the tectal ILD map that compensates for the effects of the earplug. The data presented in this study indicate that one site of plasticity underlying this adaptive adjustment is in the posterior division of the ventral nucleus of the lateral lemniscus (VLVp), the first site of ILD comparison in the auditory pathway. 2. Single and multiple unit sites were recorded in the optic tecta and VLVps of ketamine-anesthetized owls. The owls were raised from 4 wk of age with one ear occluded with an earplug. Auditory testing, using digitally synthesized dichotic stimuli, was carried out 8-16 wk later with the earplug removed. The adaptive adjustment in ILD coding in each bird was quantified as the shift from normal ILD tuning measured in the optic tectum. Evidence of adaptive adjustment in the VLVp was based on statistical differences between the VLVps ipsilateral and contralateral to the occluded ear in the sensitivity of units to excitatory-ear and inhibitory-ear stimulation. 3. The balance of excitatory to inhibitory influences on VLVp units was shifted in the adaptive direction in six out of eight owls. In three of these owls, adaptive differences in inhibition, but not in excitation, were found. For this group of owls, the patterns of response properties across the two VLVps can only be accounted for by plasticity in the VLVp. For the other three owls, the possibility that the difference between the two VLVps resulted from damage to one of the VLVps could not be eliminated, and for one of these, plasticity at a more peripheral site (in the cochlea or cochlear nucleus) could also explain the data. In the remaining two
von Hehn, Christian A A; Bhattacharjee, Arin; Kaczmarek, Leonard K
The promoter for the kv3.1 potassium channel gene is regulated by a Ca2+-cAMP responsive element, which binds the transcription factor cAMP response element-binding protein (CREB). Kv3.1 is expressed in a tonotopic gradient within the medial nucleus of the trapezoid body (MNTB) of the auditory brainstem, where Kv3.1 levels are highest at the medial end, which corresponds to high auditory frequencies. We have compared the levels of Kv3.1, CREB, and the phosphorylated form of CREB (pCREB) in a mouse strain that maintains good hearing throughout life, CBA/J (CBA), with one that suffers early cochlear hair cell loss, C57BL/6 (BL/6). A gradient of Kv3.1 immunoreactivity in the MNTB was detected in both young (6 week) and older (8 month) CBA mice. Although no gradient of CREB was detected, pCREB-immunopositive cells were grouped together in distinct clusters along the tonotopic axis. The same pattern of Kv3.1, CREB, and pCREB localization was also found in young BL/6 mice at a time (6 weeks) when hearing is normal. In contrast, at 8 months, when hearing is impaired, the gradient of Kv3.1 was abolished. Moreover, in the older BL/6 mice there was a decrease in CREB expression along the tonotopic axis, and the pattern of pCREB labeling appeared random, with no discrete clusters of pCREB-positive cells along the tonotopic axis. Our findings are consistent with the hypothesis that ongoing activity in auditory brainstem neurons is necessary for the maintenance of Kv3.1 tonotopicity through the CREB pathway.
Background: Primary auditory cortex (AI) neurons show qualitatively distinct response features to successive acoustic signals depending on the inter-stimulus interval (ISI). Such ISI-dependent AI responses are believed to underlie, at least partially, categorical perception of click trains (elemental vs. fused quality) and stop consonant-vowel syllables (e.g., the /da/-/ta/ continuum). Methods: Single-unit recordings were conducted on 116 AI neurons in awake cats. Rectangular clicks were presented either alone (single-click paradigm) or in a train with variable ISI (2-480 ms; click-train paradigm). Response features of AI neurons were quantified as a function of ISI: one measure was related to the degree of stimulus locking (temporal modulation transfer function [tMTF]) and another was based on firing rate (rate modulation transfer function [rMTF]). An additional modeling study was performed to gain insight into the neurophysiological bases of the observed responses. Results: In the click-train paradigm, the majority of the AI neurons ("synchronization type"; n = 72) showed stimulus-locking responses at long ISIs. The shorter cutoff ISI for stimulus-locking responses was on average ~30 ms and was level-tolerant, in accordance with the perceptual boundary of click trains and of consonant-vowel syllables. The shape of the tMTF of those neurons was either band-pass or low-pass. The single-click paradigm revealed, at maximum, four response periods in the following order: 1st excitation, 1st suppression, 2nd excitation, then 2nd suppression. The 1st excitation and 1st suppression were found exclusively in the synchronization type, implying that the temporal interplay between excitation and suppression underlies stimulus-locking responses. Among these neurons, those showing the 2nd suppression had band-pass tMTFs whereas those with low-pass tMTFs never showed the 2nd suppression, implying that tMTF shape is mediated through the 2nd suppression. The
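Stimulus locking of the kind a tMTF quantifies is commonly measured with vector strength: spike times are converted to phases of the click period and averaged as unit vectors. A minimal sketch (spike times and rates are invented for illustration, not the study's data):

```python
import numpy as np

def vector_strength(spike_times_s, period_s):
    """Vector strength of spikes relative to a periodic stimulus.

    1.0 means perfect phase locking; values near 0 mean no locking.
    """
    phases = 2 * np.pi * (np.asarray(spike_times_s) % period_s) / period_s
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(0)
period = 0.1                                    # 10 Hz click train
# Spikes locked to the clicks with 2 ms jitter
locked = np.arange(0.0, 2.0, period) + rng.normal(0.0, 0.002, 20)
# Spikes at random times over the same 2 s window
unlocked = rng.uniform(0.0, 2.0, 20)

vs_locked = vector_strength(locked, period)
vs_unlocked = vector_strength(unlocked, period)
```

Computing vector strength at each ISI of a click train traces out a tMTF; a band-pass or low-pass shape falls out of where locking survives.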
Möller, Malte; Mayr, Susanne; Buchner, Axel
Prior studies of spatial negative priming indicate that distractor-assigned keypress responses are inhibited as part of visual, but not auditory, processing. However, recent evidence suggests that static keypress responses are not directly activated by spatially presented sounds and, therefore, might not call for an inhibitory process. In order to investigate the role of response inhibition in auditory processing, we used spatially directed responses that have been shown to result in direct response activation to irrelevant sounds. Participants localized a target sound by performing manual joystick responses (Experiment 1) or head movements (Experiment 2B) while ignoring a concurrent distractor sound. Relations between prime distractor and probe target were systematically manipulated (repeated vs. changed) with respect to identity and location. Experiment 2A investigated the influence of distractor sounds on spatial parameters of head movements toward target locations and showed that distractor-assigned responses are immediately inhibited to prevent false responding in the ongoing trial. Interestingly, performance in Experiments 1 and 2B was not generally impaired when the probe target appeared at the location of the former prime distractor and required a previously withheld and presumably inhibited response. Instead, performance was impaired only when prime distractor and probe target mismatched in terms of location or identity, which fully conforms to the feature-mismatching hypothesis. Together, the results suggest that response inhibition operates in auditory processing when response activation is provided but is presumably too short-lived to affect responding on the subsequent trial.
Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg
Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.
Puram, Sidharth V; Barber, Samuel R; Kozin, Elliott D; Shah, Parth; Remenschneider, Aaron; Herrmann, Barbara S; Duhaime, Ann-Christine; Barker, Fred G; Lee, Daniel J
There are no approved Food and Drug Administration indications for pediatric auditory brainstem implant (ABI) surgery in the United States. Our prospective case series aims to determine the safety and feasibility of ABI surgery in pediatric patients (age, 2.52 ± 0.39 years). Four patients underwent ABI surgery (age at surgery, 19.2 ± 3.43 months), including 4 primary procedures and 1 revision for device failure. Spontaneous device failure occurred in another subject postoperatively. No major or minor complications occurred, including cerebrospinal fluid leak, facial nerve injury, hematoma, and nonauditory stimulation. All subjects detected sound with environmental awareness, and several demonstrated babbling and mimicry. Poor durability of older implants underscores the need for updated technology. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.
Ying Yang; Yue-Hui Liu; Ming-Fu Fu; Chun-Lin Li; Li-Yan Wang; Qi Wang; Xi-Bin Sun
Background: Data on early auditory and speech development during home-based early intervention in infants and toddlers with hearing loss younger than 2 years are still sparse in China. This study aimed to observe the development of auditory and speech skills in deaf infants and toddlers who were fitted with hearing aids and/or received cochlear implantation between the chronological ages of 7-24 months, and to analyze the effect of chronological age and recovery time on auditory and speech development over the course of home-based early intervention. Methods: This longitudinal study included 55 hearing-impaired children with severe and profound binaural deafness, who were divided into Group A (7-12 months), Group B (13-18 months), and Group C (19-24 months) based on chronological age. The Categories of Auditory Performance (CAP) and the Speech Intelligibility Rating (SIR) scale were used to evaluate auditory and speech development at baseline and at 3, 6, 9, 12, 18, and 24 months of habilitation. Descriptive statistics were used to describe demographic features, and data were analyzed by repeated-measures analysis of variance. Results: With 24 months of hearing intervention, 78% of the patients were able to understand common phrases and conversation without lip-reading, and 96% of the patients were intelligible to a listener. In all three groups, children showed rapid growth in each period of habilitation. CAP and SIR scores developed rapidly within 24 months after fitting of the auxiliary device in Group A, which showed much better auditory and speech abilities than Group B (P < 0.05) and Group C (P < 0.05). Group B achieved better results than Group C, although no significant differences were observed between Group B and Group C (P > 0.05). Conclusions: The data suggest that early hearing intervention and home-based habilitation benefit auditory and speech development. Chronological age and recovery time may be major factors in aural-verbal outcomes in hearing-impaired children. The development of auditory
Manju, Venugopal; Gopika, Kizhakke Kodiyath; Arivudai Nambi, Pitchai Muthu
Amplitude modulations in speech convey important acoustic information for speech perception. The auditory steady-state response (ASSR) is thought to be a physiological correlate of amplitude-modulation perception. Limited research is available exploring the association between ASSR and modulation detection ability, as well as speech perception. The correlation of modulation detection thresholds (MDTs) and speech perception in noise with ASSR was investigated in two experiments. Thirty normal-hearing individuals and 11 normal-hearing individuals within the age range of 18-24 years participated in Experiments 1 and 2, respectively. MDTs were measured using ASSR and a behavioral method at 60 Hz, 80 Hz, and 120 Hz modulation frequencies in the first experiment. The ASSR threshold was obtained by estimating the minimum modulation depth required to elicit an ASSR (ASSR-MDT). There was a positive correlation between behavioral MDT and ASSR-MDT at all modulation frequencies. In the second experiment, ASSR for amplitude modulation (AM) sweeps at four different frequency ranges (30-40 Hz, 40-50 Hz, 50-60 Hz, and 60-70 Hz) was recorded. The speech recognition threshold in noise (SRTn) was estimated using a staircase procedure. There was a positive correlation between the amplitude of the ASSR for the AM sweep in the 30-40 Hz range and SRTn. The results of the current study suggest that ASSR provides substantial information about temporal modulation and speech perception.
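The quantity being thresholded in both the behavioral MDTs and the ASSR-MDT is modulation depth. A sinusoidally amplitude-modulated tone, and the depth recovered from its envelope, can be sketched as follows (carrier, rate, and depth are illustrative choices, not the study's parameters):

```python
import numpy as np

def am_tone(fc, fm, depth, dur_s, fs):
    """Tone of carrier fc (Hz) modulated at fm (Hz) with the given depth."""
    t = np.arange(int(dur_s * fs)) / fs
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def envelope_depth(env):
    """Modulation depth of an envelope: (max - min) / (max + min)."""
    return (env.max() - env.min()) / (env.max() + env.min())

fs = 48000
stim = am_tone(1000.0, 80.0, 0.4, 0.5, fs)      # 80 Hz AM, 40% depth

# In practice the envelope is extracted from the recording (e.g., via a
# Hilbert transform); here it is known analytically from the synthesis:
t = np.arange(int(0.5 * fs)) / fs
env = 1.0 + 0.4 * np.sin(2 * np.pi * 80.0 * t)
m = envelope_depth(env)
```

A threshold procedure then shrinks `depth` until either the listener (behavioral MDT) or the ASSR (ASSR-MDT) no longer registers the modulation.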
Hearing losses during infancy and childhood have many negative future effects and impacts on a child's life and productivity. The earlier hearing losses are detected, the earlier medical intervention can occur and the greater the benefit of remediation will be. During this research a PC-based audiometer was designed and, currently, the audiometer prototype is in its final development steps. It is based on the auditory brainstem response (ABR) method. Chirp stimuli instead of traditional click stimuli will be used to evoke the ABR signal. The stimulus is designed to synchronize hair cell movement as it spreads out over the cochlea. In addition to utilizing available hardware (PC and PCI board), the efforts were confined to designing and implementing a hardware prototype and developing a software package that enables the system to behave as an ABR audiometer. By using such a method and a chirp stimulus, it is expected to be possible to detect (sensorineural) hearing impairment in the first few days of life and to conduct hearing tests at low stimulus frequencies. Currently, the intended chirp stimulus has been successfully generated and the implemented module is able to amplify a signal (on the order of the ABR signal) to a recordable level. Moreover, an NI-DAQ data acquisition board has been chosen to implement the PC-prototype interface.
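A chirp of the kind described sweeps upward in frequency so that low-frequency energy leads, roughly offsetting the longer traveling-wave delay to the cochlear apex. Cochlear chirps typically use a nonlinear sweep derived from a delay model; a linear sweep shows the principle (band, duration, and sampling rate are illustrative, not the authors' design):

```python
import numpy as np

def rising_chirp(f0, f1, dur_s, fs):
    """Linear chirp from f0 to f1 Hz: phase is the integral of the
    instantaneous frequency f(t) = f0 + (f1 - f0) * t / dur_s."""
    t = np.arange(int(dur_s * fs)) / fs
    k = (f1 - f0) / dur_s                        # sweep rate (Hz/s)
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

fs = 16000
stim = rising_chirp(200.0, 8000.0, 0.01, fs)     # 10 ms, 200 Hz -> 8 kHz
```

Because the low frequencies arrive first, displacements along the cochlear partition peak closer together in time, which is what synchronizes the hair cell response the abstract refers to.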
Yang, E Y; Rupert, A L; Moushegian, G
Two studies, vibrator placement and masking, were performed to evaluate the developmental aspects of the bone conduction auditory brain stem response (ABR) in human infants. Subject groups included newborns, 1-yr-olds, and adults. In the vibrator studies, ABRs were obtained from placements of the bone conduction vibrator on the frontal, occipital, and temporal bones. Results showed that temporal placements in neonates and 1-yr-olds produce significantly shorter wave V latencies of the ABR than frontal or occipital placements. In adults, differences in wave V latencies from the various vibrator placements were comparatively small. In the masking studies, ABRs were acquired from vibrator placements at the temporal bone in the presence of ipsilateral air-conducted masking noise in the experimental groups. Results showed that interaural attenuation of bone-conducted click stimuli is largest in neonates, somewhat smaller in 1-yr-olds, and smallest in adults. The findings of this research strongly suggest that temporal placements for bone conduction ABR should be used, in some instances, when testing infants and 1-yr-olds. The results of this study support the proposition that bone conduction ABR is a feasible and reliable diagnostic tool in testing infants.
Rogers, S H; Edwards, D A; Henderson-Smart, D J; Pettigrew, A G
Middle latency auditory evoked responses (MLAERs) were measured in 21 normal term infants, three to five days after birth and then at 6 weeks, 7 months and 1 year of age. A polyphasic waveform was elicited during natural sleep in all infants at each recording session by monaural click stimulation at a rate of 9 per second. A 70 dBHL stimulus was found to be optimal as the MLAER became less well defined when the stimulus intensity approached the threshold hearing level. The first 60 to 70 msec of the waveform was found to be most stable, with decreasing detectability of peaks at longer latencies. There was no change in wave latency or reproducibility of MLAERs recorded during different sleep states. Waves Po and Na showed a significant decrease in latency with increasing stimulus intensity at term and/or 6 weeks of age. This was not evident for the remainder of the waveform. Waves Po, Na, Pa, Nb, Pb and Nc exhibited significant decreases in latency with age, attaining values indistinguishable from adults by 7 months of age.
Farah I. Corona-Strauss
It has been shown recently that chirp-evoked auditory brainstem responses (ABRs) show better performance than click stimulation, especially at low intensity levels. In this paper we present the development, testing, and evaluation of a series of notched-noise-embedded frequency-specific chirps. ABRs were collected in healthy young control subjects using the developed stimuli. Results of the analysis of the corresponding ABRs using a time-scale phase synchronization stability (PSS) measure are also reported. The resulting wave V amplitude and latency measures showed behavior similar to values reported in the literature. The PSS of frequency-specific chirp-evoked ABRs reflected the presence of wave V for all stimulation intensities. The scales that resulted in higher PSS are in line with previous findings, where ABRs evoked by broadband chirps were analyzed, which stated that low-frequency channels are better for the recognition and analysis of chirp-evoked ABRs. We conclude that the development and testing of the series of notched-noise-embedded frequency-specific chirps allowed the assessment of frequency-specific ABRs, showing an identifiable wave V for different intensity levels. Future work may include the development of a faster automatic recognition scheme for these frequency-specific ABRs.
Jalaei, Bahram; Zakaria, Mohd Normani; Mohd Azmi, Mohd Hafiz Afifi; Nik Othman, Nik Adilah; Sidek, Dinsuhaimi
Gender disparities in speech-evoked auditory brainstem response (speech-ABR) outcomes have been reported, but the literature is limited. The present study was performed to further verify this issue and to determine the influence of head size on speech-ABR results between genders. Twenty-nine healthy Malaysian subjects (14 males and 15 females) aged 19 to 30 years participated in this study. After measuring head circumference, speech-ABR was recorded using the synthesized syllable /da/ from the right ear of each participant. Speech-ABR peak amplitudes, peak latencies, and composite onset measures were computed and analyzed. Significant gender disparities were noted in the transient component but not in the sustained component of the speech-ABR. Statistically higher V/A amplitudes and less steep V/A slopes were found in females. These gender differences were partially affected after controlling for head size. Head size is not the main contributing factor in gender disparities in speech-ABR outcomes. Gender-specific normative data can be useful when recording speech-ABR for clinical purposes.
Andrea S Lowe
Chronic tinnitus, or "ringing of the ears", affects upwards of 15% of the adult population. Identifying a cost-effective and objective measure of tinnitus is needed due to legal concerns and disability issues, as well as to facilitate the effort to assess neural biomarkers. We developed a modified gap-in-noise (GIN) paradigm to assess tinnitus in mice using the auditory brainstem response (ABR). We then compared the commonly used acoustic startle reflex gap-prepulse inhibition (gap-PPI) and the ABR GIN paradigm in young adult CBA/CaJ mice before and after administering sodium salicylate (SS), which is known to reliably induce a 16 kHz tinnitus percept in rodents. Post-SS, gap-PPI was significantly reduced at 12 and 16 kHz, consistent with previous studies demonstrating a tinnitus-induced gap-PPI reduction in this frequency range. ABR audiograms indicated thresholds were significantly elevated post-SS, also consistent with previous studies. There was a significant increase in the peak 2 (P2) to peak 1 (P1) and peak 4 (P4) to P1 amplitude ratios in the mid-frequency range, along with decreased latency of P4 at higher intensities. For the ABR GIN, peak amplitudes of the response to the second noise burst were calculated as a percentage of the first noise burst response amplitudes to quantify neural gap processing. A significant decrease in this ratio (i.e., recovery) was seen only at 16 kHz for P1, indicating the presence of tinnitus near this frequency. Thus, this study demonstrates that GIN ABRs can be used as an efficient, non-invasive, and objective method of identifying the approximate pitch and presence of tinnitus in a mouse model. This technique has the potential for application in human subjects and also indicates significant, albeit different, deficits in temporal processing in peripheral and brainstem circuits following drug-induced tinnitus.
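The recovery metric described here, the second-burst response amplitude expressed as a percentage of the first-burst amplitude, is a simple ratio. A sketch with hypothetical peak amplitudes (the values are invented for illustration, not the study's data):

```python
def gin_recovery_percent(first_peak_uv, second_peak_uv):
    """GIN ABR recovery: second noise-burst peak amplitude as a
    percentage of the first-burst peak amplitude. Reduced recovery
    at a frequency suggests a tinnitus percept near that frequency."""
    return 100.0 * second_peak_uv / first_peak_uv

# Hypothetical P1 amplitudes (uV) at 16 kHz before and after salicylate
baseline = gin_recovery_percent(0.82, 0.74)
post_ss = gin_recovery_percent(0.80, 0.48)
reduced = post_ss < baseline     # recovery deficit post-SS
```

Normalizing the second burst to the first within each recording makes the measure robust to overall amplitude shifts, such as the threshold elevation the study reports post-SS.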
Ioannaou, Christos I; Pereda, Ernesto; Lindsen, Job P.; Bhattacharya, Joydeep
The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating at a frequency equal to the frequency difference between the two tones; this is known as a binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In ...
Gallardo, Manuel; Servicio de Otorrinolaringología, Hospital Central de la Fuerza Aérea del Perú; Vera, Carlos; Servicio de Otorrinolaringología, Hospital Central de la Fuerza Aérea del Perú
Objective: To determine the functional integrity of the brainstem auditory pathway by the auditory brainstem response (ABR) in language-delayed children without pathology in the middle ear or central nervous system and with no neonatal hearing-loss risk factors. Design: Retrospective cross-sectional study. Setting: Otorhinolaryngology Services of the Naval Medical Center and the Air Force Central Hospital, Lima, Peru. Material and methods: Analysis of children's ABRs performed in the last ten years included...
Suh, Hyee; Shin, Yong-Il; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon
The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading, and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially lesions of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was not able to repeat or take dictation, but he produced fluent and comprehensible speech. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia.
Vallecalle-Sandoval, M H; Heaney, G; Sersen, E; Sturman, J A
A similar development of the brainstem auditory evoked response is present in taurine-supplemented and taurine-deficient kittens between the second postnatal week and the third month of life. Between birth and the second postnatal week kittens from mothers fed the 1% taurine diet showed earlier maturation of the brainstem auditory evoked response as indicated by lower threshold, shorter P1 latency and shorter central conduction time when compared to the kittens from mothers fed the 0.05% taurine diet. These results suggest an important role of taurine in the anatomical and functional development of the auditory system.
Fallon, James B; Shepherd, Robert K; Nayagam, David A X; Wise, Andrew K; Heffer, Leon F; Landry, Thomas G; Irvine, Dexter R F
We have previously shown that neonatal deafness of 7-13 months duration leads to loss of cochleotopy in the primary auditory cortex (AI) that can be reversed by cochlear implant use. Here we describe the effects of a similar duration of deafness and cochlear implant use on temporal processing. Specifically, we compared the temporal resolution of neurons in AI of young adult normal-hearing cats that were acutely deafened and implanted immediately prior to recording with that in three groups of neonatally deafened cats. One group of neonatally deafened cats received no chronic stimulation. The other two groups received up to 8 months of either low- or high-rate (50 or 500 pulses per second per electrode, respectively) stimulation from a clinical cochlear implant, initiated at 10 weeks of age. Deafness of 7-13 months duration had no effect on the duration of post-onset response suppression, latency, latency jitter, or the stimulus repetition rate at which units responded maximally (best repetition rate), but resulted in a statistically significant reduction in the ability of units to respond to every stimulus in a train (maximum following rate). None of the temporal response characteristics of the low-rate group differed from those in acutely deafened controls. In contrast, high-rate stimulation had diverse effects: it resulted in decreased suppression duration, longer latency and greater jitter relative to all other groups, and an increase in best repetition rate and cut-off rate relative to acutely deafened controls. The minimal effects of moderate-duration deafness on temporal processing in the present study are in contrast to its previously-reported pronounced effects on cochleotopy. Much longer periods of deafness have been reported to result in significant changes in temporal processing, in accord with the fact that duration of deafness is a major factor influencing outcome in human cochlear implantees.
Moiseff, A; Haresign, T
1. We studied the response of single units in the central nucleus of the inferior colliculus (ICc) of the barn owl (Tyto alba) to continuously varying interaural phase differences (IPDs) and static IPDs. Interaural phase was varied in two ways: continuously, by delivering tones to each ear that differed by a few hertz (binaural beat, Fig. 1), and discretely, by delaying in fixed steps the phase of sound delivered to one ear relative to the other (static phase). Static presentations were repeated at several IPDs to characterize interaural phase sensitivity. 2. Units sensitive to IPDs responded to the binaural beat stimulus over a broad range of delta f (Fig. 4). We selected a 3-Hz delta f for most of our comparative measurements on the basis of constraints imposed by our stimulus generation system and because it allowed us to reduce the influence of responses to stimulus onset and offset (Fig. 3A). 3. Characteristic interaural time or phase sensitivity obtained by use of the binaural beat stimulus was comparable with that obtained by use of the static technique (Fig. 5; r2 = 0.93, Fig. 6). 4. The binaural beat stimulus facilitated the measurement of the characteristic delay (CD) and characteristic phase (CP) of auditory units. We demonstrated that units in the owl's inferior colliculus (IC) include those that are maximally excited by specific IPDs (CP = 0 or 1.0) as well as those that are maximally suppressed by specific IPDs (CP = 0.5; Figs. 7 and 8). 5. The selectivity of units sensitive to IPD or interaural time difference (ITD) was weakly influenced by interaural intensity difference (IID). (ABSTRACT TRUNCATED AT 250 WORDS)
Aim: The objective of the present study was the assessment of otoacoustic emissions (OAEs) and auditory brainstem responses (ABRs) for hearing screening of high-risk infants. Study Design: Prospective, hospital-based. Materials and Methods: Distortion product OAE (DPOAE) and brainstem evoked response audiometry (BERA) recordings were obtained for 30 controls and 100 infants with one or more high-risk factors, in a sound-treated room, and the results were interpreted. ABR peak latencies, amplitudes, and waveform morphology in high-risk infants were compared with those in the control group. DPOAE as a screening test was evaluated in terms of various parameters with BERA/ABR taken as the gold standard. Results: Absolute latencies of Wave I and Wave V and the interpeak latency of I-V were significantly prolonged in the high-risk group compared to the control group. The causes contributing most significantly to hearing impairment were found to be hyperbilirubinemia, birth asphyxia, and meningitis/septicemia. With ABR taken as the gold standard, the sensitivity of DPOAE was 87.7% (74.5%-94.9%) and its specificity was 74.5% (60.0%-85.2%). Positive predictive value was 76.7% (63.2%-86.6%) and negative predictive value was 86% (71.9%-94.3%). Positive likelihood ratio was 0.29 (0.18-0.46) and negative likelihood ratio was 6.08 (2.82-13.09). Conclusion: ABR/BERA, though highly reliable, is a tedious and time-consuming test. DPOAE is a simple and rapid test with relatively higher acceptability but low sensitivity and specificity, which therefore limits its role as an independent screening test. A DPOAE-ABR test series is an effective way to screen all high-risk infants at the earliest opportunity.
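The screening statistics above follow the standard 2x2 confusion-matrix definitions. A short sketch with hypothetical counts (not the study's data) makes the relationships explicit, using the conventional formulas LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity:

```python
# hypothetical screening counts against the gold standard (not the study's data)
tp, fn = 50, 7    # gold-standard positives: detected / missed by the screen
fp, tn = 11, 32   # gold-standard negatives: falsely flagged / correctly cleared

sensitivity = tp / (tp + fn)              # true positive rate
specificity = tn / (tn + fp)              # true negative rate
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
lr_pos = sensitivity / (1 - specificity)  # likelihood ratio of a positive result
lr_neg = (1 - sensitivity) / specificity  # likelihood ratio of a negative result
```

Under these definitions an informative test has LR+ above 1 and LR- below 1.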
Proctor, R W; Dutta, A; Kelly, P L; Weeks, D J
Within the visual-spatial and auditory-verbal modalities, reaction times to a stimulus have been shown to be faster if salient features of the stimulus and response sets correspond than if they do not. Accounts that attribute such stimulus-response compatibility effects to general translation processes predict that similar effects should occur for cross-modal stimulus and response sets. To test this prediction, three experiments were conducted examining four-choice reactions with (1) visual spatial-location stimuli assigned to speech responses, (2) speech stimuli assigned to keypress responses, and (3) symbolic visual stimuli assigned to speech responses. In all the experiments, responses were faster when correspondence between salient features of the stimulus and response sets was maintained, demonstrating that similar principles of translation operate both within and across modalities.
Animal vocalizations in natural settings are invariably accompanied by an acoustic background with a complex statistical structure. We have previously demonstrated that neuronal responses in the primary auditory cortex of halothane-anesthetized cats depend strongly on the natural background. Here, we study in detail the neuronal responses to the background sounds and their relationships to the responses to the foreground sounds. Natural bird chirps as well as modifications of these chirps were used. The chirps were decomposed into three components: the clean chirps, their echoes, and the background noise. The last two were weaker than the clean chirp by 13 and 29 dB on average, respectively. The test stimuli consisted of the full natural stimulus, the three basic components, and their three pairwise combinations. When the level of the background components (echoes and background noise) presented alone was sufficiently loud to evoke neuronal activity, these background components had an unexpectedly strong effect on the responses of the neurons to the main bird chirp. In particular, the responses to the original chirps were more similar on average to the responses evoked by the two background components than to the responses evoked by the clean chirp, both in terms of the evoked spike count and in terms of the temporal pattern of the responses. These results suggest that some of the neurons responded specifically to the acoustic background even when it was presented together with the substantially louder main chirp, and may imply that neurons in A1 already participate in auditory source segregation.
Treille, Avril; Cordeboeuf, Camille; Vilain, Coriandre; Sato, Marc
Speech can be perceived not only by the ear and by the eye but also by the hand, with speech gestures felt from manual tactile contact with the speaker's face. In the present electro-encephalographic study, early cross-modal interactions were investigated by comparing auditory evoked potentials during auditory, audio-visual and audio-haptic speech perception in dyadic interactions between a listener and a speaker. In line with previous studies, early auditory evoked responses were attenuated and speeded up during audio-visual compared to auditory speech perception. Crucially, shortened latencies of early auditory evoked potentials were also observed during audio-haptic speech perception. Altogether, these results suggest early bimodal interactions during live face-to-face and hand-to-face speech perception in dyadic interactions.
Jin, Chun Yu; Ozaki, Isamu; Suzuki, Yasumi; Baba, Masayuki; Hashimoto, Isao
We recorded auditory evoked magnetic fields (AEFs) to monaural 400 Hz tone bursts and investigated spatio-temporal features of the N100m current sources in both hemispheres, from before the N100m reaches its peak strength until 5 ms after the peak. Hemispheric asymmetry was evaluated as an asymmetry index based on the ratio of N100m peak dipole strength between the right and left hemispheres for stimulation of either ear. The asymmetry indices showed right-hemispheric dominance for left ear stimulation but no hemispheric dominance for right ear stimulation. The N100m current sources in both hemispheres in response to monaural 400 Hz stimulation moved in an anterolateral direction along the long axis of the Heschl gyri before reaching peak strength; the ipsilateral N100m sources were located slightly posterior to the contralateral ones. The onset and peak latencies of the right-hemispheric N100m in response to right ear stimulation were shorter than those of the left-hemispheric N100m to left ear stimulation. The traveling distance of the right-hemispheric N100m sources following right ear stimulation was longer than that of the left-hemispheric sources following left ear stimulation. These results suggest a right-dominant hemispheric asymmetry in pure tone processing.
Murata, Atsuo; Kuroda, Takashi; Karwowski, Waldemar
A warning signal presented via a visual or an auditory cue might interfere with auditory or visual information inside and outside a vehicle. Such interference would certainly be reduced if a tactile cue were used; tactile cues are therefore expected to be promising warning signals, especially in noisy environments. In order to determine the most suitable cue (warning) modality for a visual hazard in noisy environments, auditory and tactile cues were examined in this study. The stimulus onset asynchrony (SOA) was set to 0 ms, 500 ms, and 1000 ms. Two types of noise were used: white noise and noise outside a vehicle recorded in a real-world driving environment. The noise level LAeq (equivalent continuous A-weighted sound pressure level) inside the experimental chamber was adjusted to approximately 60 dB (A), 70 dB (A), and 80 dB (A) for each type of noise. The results verified that the tactile warning was more effective than the auditory warning. When the recorded outside-vehicle noise was used inside the experimental chamber, the reaction time to the auditory warning was not affected by the noise level.
Nishimura, Akio; Yokosawa, Kazuhiko
In the present article, we investigated the effects of the pitch height and presented ear (laterality) of an auditory stimulus, irrelevant to the ongoing visual task, on horizontal response selection. Performance was better when the response and the stimulated ear spatially corresponded (the Simon effect), and when the spatial-musical association of response codes (SMARC) correspondence was maintained, that is, a right (left) response with a high-pitched (low-pitched) tone. These findings reveal an automatic activation of spatially and musically associated responses by task-irrelevant auditory accessory stimuli. Pitch height was strong enough to influence horizontal responses despite the modality difference from the task target.
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR.
Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar
Given the relevance of possible hearing losses due to sound overloads and the short list of references on objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue through the cochlear microphonic response to sound pressure overload stimuli, and to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure-tone acoustic stimulus, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB at the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. Complex random tone maskers and white noise caused no fatigue to the sensory receptors, even at levels of 100 dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue results found here are not in line with those obtained subjectively in clinical and psychoacoustic trials.
Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk
Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electrophysiologic response to auditory stimulation that is amplitude-modulated at a specific frequency. By leveraging the phenomenon whereby the ASSR is modulated by mental concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method to minimize auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42 Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was utilized to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores while maintaining a high average classification accuracy.
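The feature-extraction step described above can be sketched on synthetic data. The epoch length, sampling rate, and amplitudes below are invented stand-ins, and for brevity the sketch compares the two band powers directly rather than feeding them to the linear discriminant analysis the study used:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256                       # sampling rate in Hz (illustrative)
t = np.arange(fs * 4) / fs     # one 4 s EEG epoch

# synthetic epoch: attention to the 38 Hz stream boosts its ASSR over 42 Hz
eeg = (1.0 * np.sin(2 * np.pi * 38 * t)
       + 0.3 * np.sin(2 * np.pi * 42 * t)
       + 0.5 * rng.standard_normal(t.size))   # background EEG noise

def bandpower(x, fs, f0):
    """Power at the FFT bin closest to frequency f0."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - f0))]

p38, p42 = bandpower(eeg, fs, 38.0), bandpower(eeg, fs, 42.0)
ratio = p38 / p42   # the power ratio used as a classification feature
```

With a 4 s epoch at 256 Hz the FFT bin spacing is 0.25 Hz, so both modulation frequencies fall exactly on bins and the two powers separate cleanly.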
Background/Aim. A more recent method, the auditory steady-state response (ASSR), has become an increasingly important test method due to the differences found in previous investigations between hearing thresholds determined by the ASSR and by pure-tone audiometry (PTA). The aim of this study was to evaluate the reliability of the ASSR in determining frequency-specific hearing thresholds by establishing a correlation with the thresholds determined by PTA, as well as to evaluate the reliability of the ASSR in determining the hearing threshold with respect to the level of hearing loss and the configuration of the PTA findings. Methods. The prospective study included 46 subjects (92 ears) who were assigned to groups based on their level of hearing loss and audiometric configuration. All the subjects underwent determination of hearing thresholds by PTA and ASSR without insight into their previously obtained PTA results. Results. The overall sample differences between the ASSR and PTA thresholds were 4.1, 2.5, 4.4, and 4.2 dB at 0.5, 1, 2, and 4 kHz, respectively. A high level of correlation was achieved in groups with different configurations of PTA findings. The correlation coefficients between the hearing thresholds determined by ASSR and PTA were significant in subjects with all levels of hearing loss. The differences between hearing thresholds determined by ASSR and PTA were less than 10 dB in 85% of subjects (ranging from 4 dB for moderately severe hearing loss to 7.2 dB for normal hearing). Conclusion. The ASSR is an excellent complementary method for the determination of hearing thresholds at the 4 carrier frequencies, as well as for determination of the level of hearing loss and the audiometric configuration.
Tucker, S M; Bhattacharya, J
The Auditory Response Cradle (ARC) is a fully automated, microprocessor-controlled machine that was designed for the hearing screening of full-term neonates. In order to evaluate the ARC, 6000 babies were screened at a district maternity hospital over a period of three years. Every infant subsequently entered a three-year follow-up programme. One hundred and two babies (1.7%) failed the ARC screen (that is, they failed two ARC tests) and 20 of these were found to have some hearing impairment: in 10 it was severe (80-90 dBHL), in seven moderate (45-60 dBHL), and in three it was mild to moderate (less than 45 dBHL). In addition, of the 20 babies who failed a first test and were discharged before a second could be performed, two were confirmed to have a severe hearing loss; 79 infants failing the screen were cleared on further testing, giving the ARC a false positive rate of 1.3%. On following up all 6000 infants for three years, seven children who passed the neonatal screen were subsequently found to have a hearing loss. For two babies the aetiology was unknown, but for five the hearing impairment was due either to a hereditary progressive loss or to definite postnatal factors. Progressive and acquired hearing losses cannot be detected at a neonatal screen, and this emphasises the need for follow-up screens at other stages in the child's life. In this long-term study the ARC has been found to have a high detection rate for severe hearing loss and confirms the practical possibility of using a behavioural technique for the universal screening of hearing in neonates.
In recent years, fast-spreading worms have become one of the major threats to the security of the Internet, and the trend is intensifying. To address the shortcoming that Kalman-filter-based worm detection algorithms are sensitive to the sampling interval, this article presents a new data collection plan and an improved early worm detection method that uses different sampling intervals derived from the epidemic worm propagation model, and then proposes a worm response mechanism to effectively slow wide and fast worm propagation. Simulation results show that our methods are able to detect worms accurately and early.
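The epidemic propagation model referred to above is, in the classic random-scanning-worm literature, the logistic equation dI/dt = beta * I * (N - I). A small simulation (population size, infection rate, detection threshold, and step size are all illustrative, not the article's parameters) shows why an alarm raised while the infected count is still small leaves time to respond before saturation:

```python
# classic epidemic model for a random-scanning worm: dI/dt = beta * I * (N - I)
N = 360_000          # vulnerable hosts (illustrative)
beta = 1.8 / N       # per-pair infection rate (illustrative)
dt = 0.01            # Euler integration step
I = 1.0              # one initially infected host
trace = [I]
detect_step = None
for step in range(int(20 / dt)):          # simulate 20 time units
    I += beta * I * (N - I) * dt
    trace.append(I)
    if detect_step is None and I >= 0.01 * N:
        detect_step = step                # earliest step a monitor could alarm

saturated = trace[-1] / N                 # fraction infected at the end
```

With these parameters the 1% alarm fires during the early exponential phase, long before the population is fully infected.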
Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina
Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general.
Clemo, H Ruth; Lomber, Stephen G; Meredith, M Alex
In the cat, the auditory field of the anterior ectosylvian sulcus (FAES) is sensitive to auditory cues and its deactivation leads to orienting deficits toward acoustic, but not visual, stimuli. However, in early deaf cats, FAES activity shifts to the visual modality and its deactivation blocks orienting toward visual stimuli. Thus, as in other auditory cortices, hearing loss leads to cross-modal plasticity in the FAES. However, the synaptic basis for cross-modal plasticity is unknown. Therefore, the present study examined the effect of early deafness on the density, distribution, and size of dendritic spines in the FAES. Young cats were ototoxically deafened and raised until adulthood when they (and hearing controls) were euthanized, the cortex stained using Golgi-Cox, and FAES neurons examined using light microscopy. FAES dendritic spine density averaged 0.85 spines/μm in hearing animals, but was significantly higher (0.95 spines/μm) in the early deaf. Size distributions and increased spine density were evident specifically on apical dendrites of supragranular neurons. In separate tracer experiments, cross-modal cortical projections were shown to terminate predominantly within the supragranular layers of the FAES. This distributional correspondence between projection terminals and dendritic spine changes indicates that cross-modal plasticity is synaptically based within the supragranular layers of the early deaf FAES.
Marschik, Peter B; Einspieler, Christa; Sigafoos, Jeff
To assess whether there are qualitatively deviant characteristics in the early vocalizations of children with Rett syndrome, we had 400 native Austrian-German speakers listen to audio recordings of vocalizations from typically developing girls and girls with Rett syndrome. The audio recordings were rated as (a) inconspicuous, (b) conspicuous, or (c) unable to decide between (a) and (b). The results showed that participants were accurate in differentiating the vocalizations of typically developing children from those of children with Rett syndrome. However, the accuracy of rating verbal behaviors depended on the type of vocalization, with greater accuracy for canonical babbling than for cooing vocalizations. The results suggest a potential role for rating child vocalizations in the early detection of Rett syndrome. This is important because clinical criteria related to speech and language development remain important for early identification of Rett syndrome.
Ellenbroek, B.A.; Bruin, N.M.W.J. de; Kroonenberg, P.T.J.M. van den; Luijtelaar, E.L.J.M. van; Cools, A.R.
BACKGROUND: There is now ample evidence that schizophrenia is due to an interaction between genetic and (early) environmental factors which disturbs normal development of the central nervous system and ultimately leads to the development of clinical symptoms. Recently, we showed that a single 24-hou
In this research, as a first step, we concentrated on collecting non-intracortical EEG data of brainstem speech-evoked potentials from human subjects in an audiology lab at the University of Ottawa. We considered two central problems in auditory neural signal processing: the first is Voice Activity Detection (VAD) in speech Auditory Brainstem Responses (ABR); the second is to identify the best De-...
Bathelt, Joe; Dale, Naomi; de Haan, Michelle
Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with visual impairment (VI) due to congenital disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control groups, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autism-related behaviours, pragmatic language deficits, and peer-relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten; Martens, Sander
Frequency tagging has been often used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were
Rick L Jenison
Spectro-temporal receptive fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLMs). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time, and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduce the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl's gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl's gyrus recordings elicited by click-train stimuli.
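A minimal sketch of the sparsity-penalized regression idea (plain lasso via iterative soft-thresholding on simulated data, not the paper's group-sparse estimator or its actual stimulus set) illustrates how penalizing non-zero coefficients suppresses background noise in a recovered filter:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy design matrix: 200 stimulus frames x 12 frequency-lag features
X = rng.standard_normal((200, 12))
true_strf = np.zeros(12)
true_strf[2], true_strf[5] = 1.5, -1.0        # sparse ground-truth filter
y = X @ true_strf + 0.1 * rng.standard_normal(200)

def lasso_ista(X, y, lam, n_iter=500):
    """L1-penalized least squares via iterative soft-thresholding (ISTA)."""
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz constant
    for _ in range(n_iter):
        z = w - step * (X.T @ (X @ w - y))    # gradient step on squared error
        w = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # shrink
    return w

w_sparse = lasso_ista(X, y, lam=8.0)   # true weights survive; noise is zeroed
```

The soft-thresholding step drives small, noise-driven coefficients exactly to zero while leaving the two genuine filter weights nearly unchanged.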
Fontaine, Bertrand; Köppl, Christine; Peña, Jose L
While the barn owl has been extensively used as a model for sound localization and temporal coding, less is known about the mechanisms at its sensory organ, the basilar papilla (homologous to the mammalian cochlea). In this paper, we characterize, for the first time in the avian system, the auditory nerve fiber responses to broadband noise using reverse correlation. We use the derived impulse responses to study the processing of sounds in the cochlea of the barn owl. We characterize the frequency tuning, phase, instantaneous frequency, and relationship to input level of impulse responses. We show that even features as complex as the phase dependence on input level can still be consistent with simple linear filtering. Where possible, we compare our results with mammalian data. We identify salient differences between the barn owl and mammals, e.g., a much smaller frequency glide slope and a bimodal impulse response for the barn owl, and discuss what they might indicate about cochlear mechanics. While important for research on the avian auditory system, the results from this paper also allow us to examine hypotheses put forward for the mammalian cochlea.
Hertrich, Ingo; Kirsten, Mareike; Tiemann, Sonja; Beck, Sigrid; Wühle, Anja; Ackermann, Hermann; Rolke, Bettina
Discourse structure enables us to generate expectations based upon linguistic material that has already been introduced. The present magnetoencephalography (MEG) study addresses auditory perception of test sentences in which discourse coherence was manipulated by using presuppositions (PSP) that either correspond or fail to correspond to items in preceding context sentences with respect to uniqueness and existence. Context violations yielded delayed auditory M50 and enhanced auditory M200 cross-correlation responses to syllable onsets within an analysis window of 1.5 s following the PSP trigger words. Furthermore, discourse incoherence yielded suppression of spectral power within an expanded alpha band ranging from 6 to 16 Hz. This effect showed a bimodal temporal distribution, being significant in an early time window of 0.0-0.5 s following the PSP trigger and a late interval of 2.0-2.5 s. These findings indicate anticipatory top-down mechanisms interacting with various aspects of bottom-up processing during speech perception.
Temchin, AN; Recio-Spinoso, A; van Dijk, P; Ruggero, MA
Responses to tones, clicks, and noise were recorded from chinchilla auditory-nerve fibers (ANFs). The responses to noise were analyzed by computing the zeroth-, first-, and second-order Wiener kernels (h(0), h(1), and h(2)). The h(1)s correctly predicted the frequency tuning and phases of responses
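For Gaussian white-noise stimulation, the first-order Wiener kernel h(1) reduces (up to a scale factor) to reverse correlation, i.e. the spike-triggered average of the stimulus. A toy numpy sketch of that estimator follows; the simulated neuron and all names are illustrative, not the chinchilla data or the authors' analysis code.

```python
import numpy as np

def first_order_kernel(stimulus, spikes, n_lags):
    """Estimate h(1) by reverse correlation: average the stimulus
    segment preceding each spike (spike-triggered average).
    Index 0 of the result is lag 0 (the sample at the spike time)."""
    h1 = np.zeros(n_lags)
    spike_times = np.flatnonzero(spikes)
    spike_times = spike_times[spike_times >= n_lags - 1]  # full window only
    for t in spike_times:
        h1 += stimulus[t - n_lags + 1 : t + 1][::-1]  # most recent first
    return h1 / max(len(spike_times), 1)

# toy cell that spikes whenever the stimulus one sample earlier was large
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = (np.roll(x, 1) > 1.0).astype(int)   # spiking driven by lag-1 stimulus
h1 = first_order_kernel(x, y, n_lags=5)
# h1 peaks at lag index 1, recovering the cell's latency
```

Real ANF analyses additionally correct for stimulus autocorrelation and estimate higher-order kernels, but the lag-1 peak illustrates how the kernel exposes a fiber's impulse response.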
Lee, Who-Seung; Metcalfe, Neil B; Réale, Denis; Peres-Neto, Pedro R
The trajectory of an animal's growth in early development has been shown to have long-term effects on a range of life-history traits. Although it is known that individual differences in behaviour may also be related to certain life-history traits, the linkage between early growth or development and individual variation in behaviour has received little attention. We used brief temperature manipulations, independent of food availability, to stimulate compensatory growth in juvenile three-spined sticklebacks Gasterosteus aculeatus. Here, we examine how these manipulated growth trajectories affected the sexual responsiveness of the male fish at the time of sexual maturation, explore associations between reproductive behaviour and investment and lifespan and test whether the perceived time stress (until the onset of the breeding season) influenced such trade-offs. We found a negative impact of growth rate on sexual responsiveness: fish induced (by temperature manipulation) to grow slowest prior to the breeding season were consistently quickest to respond to the presence of a gravid female. This speed of sexual responsiveness was also positively correlated with the rate of development of sexual ornaments and time taken to build a nest. However, after controlling for effects of growth rate, those males that had the greatest sexual responsiveness to females had the shortest lifespan. Moreover, the time available to compensate in size before the onset of the breeding season (time stress) affected the magnitude of these effects. Our results demonstrate that developmental perturbations in early life can influence mating behaviour, with long-term effects on longevity.
McGurgan, I J
The past decade has seen the widespread introduction of universal neonatal hearing screening (UNHS) programmes worldwide. Regrettably, such a programme is only now in the process of nationwide implementation in the Republic of Ireland and has been largely restricted to one screening modality for initial testing; namely transient evoked otoacoustic emissions (TEOAE). The aim of this study is to analyse the effects of employing a different screening protocol which utilises an alternative initial test, automated auditory brainstem response (AABR), on referral rates to specialist audiology services.
Vohs, Jenifer L; Chambers, R Andrew; Krishnan, Giri P; O'Donnell, Brian F; Berg, Sarah; Morzorati, Sandra L
Auditory steady-state responses (ASSRs), in which the evoked potential entrains to stimulus frequency and phase, are reduced in magnitude in patients with schizophrenia, particularly at 40 Hz. While the neural mechanisms responsible for ASSR generation and its perturbation in schizophrenia are unknown, it has been hypothesized that the GABAA receptor subtype may have an important role. Using an established rat model of schizophrenia, the neonatal ventral hippocampal lesion (NVHL) model, 40-Hz ASSRs were elicited from NVHL and sham rats to determine if NVHL rats show deficits comparable to schizophrenia, and to examine the role of GABAA receptors in ASSR generation. ASSR parameters were found to be stable across time in both NVHL and sham rats. Manipulation of the GABAA receptor by muscimol, a GABAA agonist, yielded a strong lesion x drug interaction, with ASSR magnitude and synchronization decreased in NVHL and increased in sham rats. The lesion x muscimol interaction was blocked by a GABAA receptor antagonist when given prior to muscimol administration, confirming that the observed interaction was GABAA mediated. Together, these data suggest an alteration involving GABAA receptor function, and hence inhibitory transmission, in the neuronal networks responsible for ASSR generation in NVHL rats. These findings are consistent with prior evidence for alterations in GABA neurotransmitter systems in the NVHL model and suggest the utility of this animal modelling approach for exploring neurobiological mechanisms that generate or modulate ASSRs.
Khattar, Deepti; Khaliq, Farah; Vaney, Neelam; Madhu, Sri Venkata
The present study aims to evaluate the functional integrity of the auditory pathway in patients with diabetes taking metformin. A further aim is to assess its association with vitamin B12 deficiency induced by metformin. Thirty diabetics taking metformin and 30 age-matched non-diabetic controls were enrolled. Stimulus-related potentials and vitamin B12 levels were evaluated in all the subjects. The diabetics showed deficient vitamin B12 levels and delayed wave III latency and III–V interpeak latency in the right ear and delayed Na and Pa wave latencies in the left ear compared with the controls. The dose and duration of metformin showed no association with the stimulus-related potentials. Therefore, although vitamin B12 levels were deficient and auditory conduction impairment was present in the diabetics on metformin, this impairment cannot be attributed to the vitamin B12 deficiency. PMID:27358222
Impey, Danielle; de la Salle, Sara; Baddeley, Ashley; Knott, Verner
Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a weak constant current to alter cortical excitability and activity temporarily. tDCS-induced increases in neuronal excitability and performance improvements have been observed following anodal stimulation of brain regions associated with visual and motor functions, but relatively little research has been conducted with respect to auditory processing. Recently, pilot study results indicated that anodal tDCS can increase auditory deviance detection, whereas cathodal tDCS decreases auditory processing, as measured by a brain-based event-related potential (ERP), the mismatch negativity (MMN). As evidence has shown that tDCS's lasting effects may be dependent on N-methyl-D-aspartate (NMDA) receptor activity, the current study investigated the use of dextromethorphan (DMO), an NMDA antagonist, to assess possible modulation of tDCS's effects on both MMN and working memory performance. The study, conducted in 12 healthy volunteers, involved four laboratory test sessions within a randomised, placebo- and sham-controlled crossover design that compared pre- and post-anodal tDCS over the auditory cortex (2 mA for 20 minutes to excite cortical activity temporarily and locally) and sham stimulation (i.e. device is turned off) during both DMO (50 mL) and placebo administration. Anodal tDCS increased MMN amplitudes with placebo administration. Significant increases were not seen with sham stimulation or with anodal stimulation during DMO administration. With sham stimulation (i.e. no stimulation), DMO decreased MMN amplitudes. Findings from this study contribute to the understanding of the underlying neurobiological mechanisms mediating tDCS sensory and memory improvements.
Matthews, Brandy R.; Chang, Chiung-Chih; De May, Mary; Engstrom, John; Miller, Bruce L
Recent functional neuroimaging studies implicate the network of mesolimbic structures known to be active in reward processing as the neural substrate of pleasure associated with listening to music. Psychoacoustic and lesion studies suggest that there is a widely distributed cortical network involved in processing discreet musical variables. Here we present the case of a young man with auditory agnosia as the consequence of cortical neurodegeneration who continues to experience pleasure when e...
Sharma, Mridula; Purdy, Suzanne C; Munro, Kevin J; Sawaya, Kathleen; Peter, Varghese
Young adults with no history of hearing concerns were tested to investigate their /da/-evoked cortical auditory evoked potentials (P1-N1-P2) recorded from 32 scalp electrodes in the presence and absence of noise at three different loudness levels (soft, comfortable, and loud), at a fixed signal-to-noise ratio (+3 dB). P1 peak latency significantly increased at soft and loud levels, and N1 and P2 latencies increased at all three levels in the presence of noise, compared with the quiet condition. P1 amplitude was significantly larger in quiet than in noise conditions at the loudest level. N1 amplitude was larger in quiet than in noise for the soft level only. P2 amplitude was reduced in the presence of noise to a similar degree at all loudness levels. The differential effects of noise on P1, N1, and P2 suggest differences in auditory processes underlying these peaks. The combination of level and signal-to-noise ratio should be considered when using cortical auditory evoked potentials as an electrophysiological indicator of degraded speech processing.
Spitzer, M W; Semple, M N
Transformation of binaural response properties in the ascending auditory pathway: influence of time-varying interaural phase disparity. J. Neurophysiol. 80: 3062-3076, 1998. Previous studies demonstrated that tuning of inferior colliculus (IC) neurons to interaural phase disparity (IPD) is often profoundly influenced by temporal variation of IPD, which simulates the binaural cue produced by a moving sound source. To determine whether sensitivity to simulated motion arises in IC or at an earlier stage of binaural processing, we compared responses in IC with those of two major IPD-sensitive neuronal classes in the superior olivary complex (SOC): neurons whose discharges were phase locked (PL) to tonal stimuli and those that were non-phase locked (NPL). Time-varying IPD stimuli consisted of binaural beats, generated by presenting tones of slightly different frequencies to the two ears, and interaural phase modulation (IPM), generated by presenting a pure tone to one ear and a phase-modulated tone to the other. IC neurons and NPL-SOC neurons were more sharply tuned to time-varying than to static IPD, whereas PL-SOC neurons were essentially uninfluenced by the mode of stimulus presentation. Preferred IPD was generally similar in responses to static and time-varying IPD for all unit populations. A few IC neurons were highly influenced by the direction and rate of simulated motion, but the major effect for most IC neurons and all SOC neurons was a linear shift of preferred IPD at high rates, attributable to response latency. Most IC and NPL-SOC neurons were strongly influenced by IPM stimuli simulating motion through restricted ranges of azimuth; simulated motion through partially overlapping azimuthal ranges elicited discharge profiles that were highly discontiguous, indicating that the response associated with a particular IPD is dependent on preceding portions of the stimulus. In contrast, PL-SOC responses tracked instantaneous IPD throughout the trajectory of simulated
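The binaural-beat stimulus described above, two tones of slightly different frequencies at the two ears, can be sketched in a few lines of numpy: the interaural phase disparity then sweeps through a full cycle at the difference frequency. The function name, parameters, and sample rate below are illustrative, not the authors' stimulus-generation code.

```python
import numpy as np

def binaural_beat(f_left, f_right, dur, fs=44100):
    """Binaural-beat stimulus: the IPD between the two channels
    cycles at the difference frequency (f_right - f_left)."""
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2 * np.pi * f_left * t)
    right = np.sin(2 * np.pi * f_right * t)
    return left, right

# 500 vs 501 Hz: IPD sweeps through 360 degrees once per second
left, right = binaural_beat(500.0, 501.0, dur=2.0)
```

Presented dichotically, such a pair lets an experimenter probe IPD tuning continuously rather than at a set of static phase offsets.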
5p- syndrome (cri-du-chat syndrome) is a well-defined clinical entity presenting with phenotypic and cytogenetic variability. Despite recognition that abnormalities in audition are common, limited reports on auditory functioning in affected individuals are available. The current study presents a case illustrating auditory functioning in a 22-month-old patient diagnosed with 5p- syndrome, karyotype 46,XX,del(5)(p13). Auditory neuropathy was diagnosed based on abnormal auditory evoked potentials with neural components suggesting severe to profound hearing loss in the presence of cochlear microphonic responses and behavioral reactions to sound at mild to moderate hearing levels. The current case and a review of available reports indicate that auditory neuropathy or neural dys-synchrony may be another phenotype of the condition, possibly related to abnormal expression of the protein beta-catenin mapped to 5p. Implications are for routine and diagnostic-specific assessments of auditory functioning and for employment of non-verbal communication methods in early intervention.
Introduction: This observational study was carried out to determine the sensitivity and specificity of the MB11 BERAphone®, when used for neonatal hearing screening in a postnatal ward setting, in comparison against the gold standard, the auditory brainstem response (ABR). Materials and Methods: Thirty-seven consecutive newborns (74 ears) who either unilaterally or bilaterally failed hearing screening with the MB11 BERAphone in the postnatal ward were recruited, and a second screening with the BERAphone was performed after 1 week along with confirmatory testing using ABR. Results: The MB11 BERAphone showed a sensitivity of 92.9%, specificity of 50%, positive predictive value of 30.23%, and negative predictive value of 96.77% for the diagnosis of hearing loss. The prevalence of confirmed hearing impairment was 18.9%. The rate of unilateral impairment was 10.8%, and the rate of bilateral impairment was 13.5%. The average ambient noise level in the postnatal ward setting was 62.1 dB. Conclusion: Although the sensitivity of the MB11 BERAphone is good, the specificity is significantly lower when the test is performed in the postnatal ward setting with high ambient noise. Neonates who fail the two-step screening should undergo auditory brainstem response testing to confirm the diagnosis of hearing loss.
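The reported figures follow directly from a standard 2x2 screening table. As a sanity check, here is a minimal Python sketch; the counts used (13 true positives, 30 false positives, 1 false negative, 30 true negatives over 74 ears) are back-calculated to be consistent with the reported rates, not taken from the paper itself.

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening-test metrics from cell counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among impaired
        "specificity": tn / (tn + fp),   # true negatives among unimpaired
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# counts reconstructed to match the abstract's rates (illustrative)
m = screening_metrics(tp=13, fp=30, fn=1, tn=30)
# sensitivity ~0.929, specificity 0.50, PPV ~0.3023, NPV ~0.9677
```

The low PPV despite high sensitivity illustrates the abstract's conclusion: with many false positives from ambient noise, most ward-setting refers are not truly impaired.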
Zahra Ghasem Ahmad
Background and Aim: Tinnitus is a common symptom in many people, but little is known about its origins. This study aimed to compare auditory steady-state response (ASSR) thresholds in normal cases and patients with subjective idiopathic tinnitus (SIT) in order to identify its real origins. Materials and Methods: This case-control study was conducted on 19 patients with tinnitus and 24 normal cases aged 18-40 yr. The patients underwent broad medical tests to rule out any background reason for their tinnitus. ASSR thresholds were estimated in both groups at 20 and 40 Hz amplitude modulation. The patients were selected from tinnitus patients at the research center of Hazrat Rasoul Hospital, Tehran, Iran. Results: The mean ASSR thresholds at 40 Hz modulation were worse in tinnitus patients compared with normal cases (p<0.05), but no statistically significant difference was detected at 20 Hz. These results held both when the thresholds of the two ears were averaged and when the thresholds of each ear were estimated separately. Conclusion: It seems that the generators of the 40 Hz response (primary auditory cortex, midbrain regions, and subcortical areas) are involved in these patients, or that the origin of their tinnitus is related to some kind of problem in these areas, although more investigation is needed regarding the 20 Hz response.
Background: Diabetes mellitus (DM) is a common metabolic disorder of carbohydrate metabolism in which blood glucose levels are abnormally high due to relative or absolute insulin deficiency. In addition, it is characterized by abnormal metabolism of fat and protein resulting from a deficit in insulin secretion or insulin action, or both. Two broad categories of DM are designated type 1 and type 2. Type 2 diabetes is due predominantly to insulin resistance with relative insulin deficiency (non-insulin-dependent DM) and is much more common than insulin-dependent DM. Objectives: The aim of this study was to assess whether there is any abnormality in neural conduction in the auditory brainstem pathway in type 2 DM patients having normal hearing sensitivity, compared with age-matched healthy populations. Materials and Methods: This study included 25 middle-aged subjects having normal hearing with type 2 diabetes mellitus. All were submitted to full audiological history taking, otological examination, basic audiological evaluation, and auditory brainstem response audiometry, which was recorded in both ears, followed by calculation of the absolute latencies of waves I, III, and V as well as interpeak latencies I-III, III-V, and I-V. Results: Type 2 DM patients showed significantly prolonged absolute latencies of waves I and III (P = 0.001) and interpeak latencies I-III, III-V, and I-V in the left ear (P = 0.001); in the right ear, absolute latencies of waves I and V (P = 0.001) and the interpeak latency III-V were statistically significant. Conclusions: The prolonged absolute and interpeak latencies suggest abnormal neural firing synchronization or transmission in the auditory pathways of normal-hearing type 2 diabetes mellitus patients.
Impaired self-monitoring and abnormalities of cognitive bias have been implicated as cognitive mechanisms of hallucination; regions fundamental to these processes, including the inferior frontal gyrus (IFG) and superior temporal gyrus (STG), are abnormally activated in individuals who hallucinate. A recent study showed IFG-STG activation to be modulated by auditory attractiveness, but no study has investigated whether these IFG-STG activations are impaired in schizophrenia. We aimed to clarify the cerebral function underlying the perception of auditory attractiveness in schizophrenia patients. Cerebral activation was examined in 18 schizophrenia patients and 18 controls performing a Favourability Judgment Task (FJT) and a Gender Differentiation Task (GDT) for pairs of greetings, using event-related functional MRI. A full-factorial analysis revealed that the main effect of task was associated with activation of the left IFG and STG. The main effect of group revealed less activation of the left STG in schizophrenia compared with controls, whereas significantly greater activation in schizophrenia than in controls was revealed at the left middle frontal gyrus (MFG), right temporo-parietal junction (TPJ), right occipital lobe, and right amygdala (p<0.05, FDR-corrected). A significant positive correlation was observed at the right TPJ and right MFG between cerebral activation under the FJT-minus-GDT contrast and the score for hallucinatory behaviour on the Positive and Negative Syndrome Scale. Hypo-activation in the left STG could indicate brain dysfunction in accessing vocal attractiveness in schizophrenia, whereas hyper-activation in the right TPJ and MFG may reflect the process of mentalizing another person's behaviour during auditory hallucination through an abnormality of cognitive bias.
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition.
Blackwood, D H; Muir, W J; Roxborough, H M; Walker, M R; Townshend, R; Glabus, M F; Wolff, S
The auditory P300 response and smooth pursuit eye tracking were recorded from a group of 23 male adult subjects who had been diagnosed in childhood as having schizoid personality. No differences were found in these physiological measures between the study group, their matched controls of other child psychiatric patients, and a group of population controls. The essentially negative findings are discussed in the light of abnormalities of these psychophysiological responses previously found in schizophrenic patients, in some of their biological relatives, and in other groups of psychiatric patients, including autistic children and adults with a diagnosis of borderline and schizotypal personality disorder. Results suggest that "schizoid" children, despite their high scores on a measure of schizotypy, do not have schizophrenia spectrum disorder or that schizotypy is a heterogeneous condition.
Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R
The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started at 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control.
De Pascalis, Vilfredo; Scacchia, Paolo
We evaluated the influence of hypnotizability, pain expectation, and placebo analgesia in waking and hypnosis on tonic pain relief. We also investigated how placebo analgesia affects somatic responses (eye blink) and N100 and P200 waves of event-related potentials (ERPs) elicited by auditory startle probes. Although expectation plays an important role in placebo and hypnotic analgesia, the neural mechanisms underlying these treatments are still poorly understood. We used the cold cup test (CCT) to induce tonic pain in 53 healthy women. Placebo analgesia was initially produced by manipulation, in which the intensity of pain induced by the CCT was surreptitiously reduced after the administration of a sham analgesic cream. Participants were then tested in waking and hypnosis under three treatments: (1) resting (Baseline); (2) CCT-alone (Pain); and (3) CCT plus placebo cream for pain relief (Placebo). For each painful treatment, we assessed pain and distress ratings, eye blink responses, and N100 and P200 amplitudes. We used LORETA analysis of N100 and P200 waves, as elicited by auditory startle, to identify cortical regions sensitive to pain reduction through placebo and hypnotic analgesia. Higher pain expectation was associated with higher pain reductions. In highly hypnotizable participants placebo treatment produced significant reductions of pain and distress perception in both waking and hypnosis conditions. The P200 wave, during placebo analgesia, was larger in the frontal left hemisphere, while placebo analgesia during hypnosis involved the activity of the left hemisphere including the occipital region. These findings demonstrate that hypnosis and placebo analgesia are different processes of top-down regulation. Pain reduction was associated with larger EMG startle amplitudes, N100 and P200 responses, and enhanced activity within the frontal, parietal, and anterior and posterior cingulate gyri. LORETA results showed that placebo analgesia modulated pain-responsive areas
Hahnloser, Richard H R; Wang, Claude Z-H; Nager, Aymeric; Naie, Katja
In mammals, the thalamus plays important roles for cortical processing, such as relay of sensory information and induction of rhythmical firing during sleep. In neurons of the avian cerebrum, in analogy with cortical up and down states, complex patterns of regular-spiking and dense-bursting modes are frequently observed during sleep. However, the roles of thalamic inputs for shaping these firing modes are largely unknown. A suspected key player is the avian thalamic nucleus uvaeformis (Uva). Uva is innervated by polysensory input, receives indirect cerebral feedback via the midbrain, and projects to the cerebrum via two distinct pathways. Using pharmacological manipulation, electrical stimulation, and extracellular recordings of Uva projection neurons, we study the involvement of Uva in zebra finches for the generation of spontaneous activity and auditory responses in premotor area HVC (used as a proper name) and the downstream robust nucleus of the arcopallium (RA). In awake and sleeping birds, we find that single Uva spikes suppress and spike bursts enhance spontaneous and auditory-evoked bursts in HVC and RA neurons. Strong burst suppression is mediated mainly via tonically firing HVC-projecting Uva neurons, whereas a fast burst drive is mediated indirectly via Uva neurons projecting to the nucleus interface of the nidopallium. Our results reveal that cerebral sleep-burst epochs and arousal-related burst suppression are both shaped by sophisticated polysynaptic thalamic mechanisms.
Background and Aim: While most people with tinnitus have some degree of hearing impairment, a small percentage of patients admitted to ear, nose, and throat clinics or hearing evaluation centers complain of tinnitus despite having normal hearing thresholds. This study was performed to better understand the probable causes of tinnitus and to investigate possible changes in auditory brainstem function in normal-hearing patients with chronic tinnitus. Methods: In this comparative cross-sectional, descriptive, and analytic study, 52 ears (26 with and 26 without tinnitus) were examined. Components of the auditory brainstem response (ABR), including wave latencies and amplitudes, were determined in the two groups and analyzed using appropriate statistical methods. Results: The mean differences between the absolute latencies of waves I, III, and V were less than 0.1 ms between the two groups, which was not statistically significant. The interpeak latencies of waves I-III, III-V, and I-V likewise showed no significant difference between groups. Only the V/I amplitude ratio was significantly higher in the tinnitus group (p=0.04). Conclusion: The observed changes in wave amplitudes, especially the later ones, can be considered an indication of plastic changes in neuronal activity and their possible role in the generation of tinnitus in normal-hearing patients.
Alexander, G M; Swerdloff, R S; Wang, C; Davidson, T; McDonald, V; Steiner, B; Hines, M
Mood and response to auditory sexual stimuli were assessed in 33 hypogonadal men receiving testosterone (T) replacement therapy, 10 eugonadal men receiving T in a male contraceptive clinical trial, and 19 eugonadal men not administered T. Prior to and after 6 weeks of hormone administration, men completed a mood questionnaire, rated sexual arousal to and sexual enjoyment of auditory sexual stimuli, and performed a dichotic listening task measuring selective attention for sexual stimuli. Mood questionnaire results suggest that T has positive effects on mood in hypogonadal men when hormone levels are well below the normal male range of values, but has no effect on mood when hormone levels are within or above the normal range. However, increased sexual arousal and sexual enjoyment were associated with T administration regardless of gonadal status. Eugonadal men administered T also showed an increased bias to attend to sexual stimuli. In contrast, the comparison group of eugonadal men not administered T showed no mood or sexual behavior changes across the two test sessions. These data support a positive relationship between T and sexual interest, sexual arousal, and sexual enjoyment in men.
Martiniano, Eli Carlos; Monteiro, Larissa Raylane Lucas; Valenti, Vitor E.; Sorpreso, Isabel Cristina Esposito; de Abreu, Luiz Carlos
We aimed to evaluate the acute effect of musical auditory stimulation on heart rate autonomic regulation during endodontic treatment. The study included 50 subjects of either gender between 18 and 40 years old, diagnosed with irreversible pulpitis or pulp necrosis of the upper front teeth and with an indication for endodontic treatment. HRV was recorded 10 minutes before (T1), during (T2), and immediately after (T3 and T4) endodontic treatment. The volunteers were randomly divided into two equal groups: exposed to music (during T2, T3, and T4) or not. We found no difference regarding salivary cortisol and anxiety score. In the group with musical stimulation, heart rate decreased in T3 compared to T1 and the mean RR interval increased in T2 and T3 compared to T1. The SDNN and TINN indices decreased in T3 compared to T4, the RMSSD and SD1 increased in T4 compared to T1, the SD2 increased compared to T3, and LF (low frequency band) increased in T4 compared to T1 and T3. In the control group, only RMSSD and SD1 increased in T3 compared to T1. Musical auditory stimulation enhanced heart rate autonomic modulation during endodontic treatment. PMID:28182118
Milana Drumond Ramos Santana
We aimed to evaluate the acute effect of musical auditory stimulation on heart rate autonomic regulation during endodontic treatment. The study included 50 subjects of either gender between 18 and 40 years old, diagnosed with irreversible pulpitis or pulp necrosis of the upper front teeth and with an indication for endodontic treatment. HRV was recorded 10 minutes before (T1), during (T2), and immediately after (T3 and T4) endodontic treatment. The volunteers were randomly divided into two equal groups: exposed to music (during T2, T3, and T4) or not. We found no difference regarding salivary cortisol and anxiety score. In the group with musical stimulation, heart rate decreased in T3 compared to T1 and the mean RR interval increased in T2 and T3 compared to T1. The SDNN and TINN indices decreased in T3 compared to T4, the RMSSD and SD1 increased in T4 compared to T1, the SD2 increased compared to T3, and LF (low frequency band) increased in T4 compared to T1 and T3. In the control group, only RMSSD and SD1 increased in T3 compared to T1. Musical auditory stimulation enhanced heart rate autonomic modulation during endodontic treatment.
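The time-domain HRV indices named in the abstract above (SDNN, RMSSD, SD1) have standard textbook definitions. The sketch below computes them from a hypothetical series of RR intervals; the numbers are illustrative only and are not data from the study.

```python
# Conventional time-domain HRV indices, computed from RR intervals (ms).
# SDNN: standard deviation of all RR intervals (overall variability).
# RMSSD: root mean square of successive differences (short-term, vagal).
# SD1: Poincare-plot short-axis dispersion; algebraically RMSSD / sqrt(2).
import math

def sdnn(rr):
    """Sample standard deviation of the RR series."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / (len(rr) - 1))

def rmssd(rr):
    """Root mean square of successive RR differences."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sd1(rr):
    """Poincare SD1, equal to RMSSD divided by sqrt(2)."""
    return rmssd(rr) / math.sqrt(2)

rr = [812, 790, 825, 805, 840, 818, 795, 830]  # hypothetical RR series, ms
print(round(sdnn(rr), 1), round(rmssd(rr), 1), round(sd1(rr), 1))
```

An increase in RMSSD/SD1, as reported for the music group at T4, is conventionally read as increased parasympathetic (vagal) modulation.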
Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). In the first part of the paper, we illustrate how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features, and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.
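The core idea in the abstract above, a binary classifier assigning single trials to a standard or deviant class, can be sketched with synthetic data. The toy below uses a nearest-centroid rule on one simulated feature (a post-stimulus mean amplitude); all values are invented for illustration, and a real pipeline would add preprocessing, richer features, and cross-validation rather than evaluating on the training trials.

```python
# Toy single-trial binary classification in an MMN-like setting.
# Each "trial" is a 1-D feature vector; deviants carry a larger
# (MMN-like) response. A nearest-centroid rule does the assignment.
import random

random.seed(0)

def make_trials(mean, n):
    """Synthetic single-trial amplitudes drawn around a class mean."""
    return [[random.gauss(mean, 1.0)] for _ in range(n)]

standards = make_trials(0.0, 100)  # frequent stimulus, small response
deviants = make_trials(2.5, 100)   # rare stimulus, larger response

def centroid(trials):
    return [sum(t[i] for t in trials) / len(trials)
            for i in range(len(trials[0]))]

c_std, c_dev = centroid(standards), centroid(deviants)

def classify(trial):
    """Return 0 (standard) or 1 (deviant) by nearer class centroid."""
    d_std = sum((a - b) ** 2 for a, b in zip(trial, c_std))
    d_dev = sum((a - b) ** 2 for a, b in zip(trial, c_dev))
    return int(d_dev < d_std)

correct = sum(classify(t) == 0 for t in standards) + \
          sum(classify(t) == 1 for t in deviants)
accuracy = correct / 200
print(accuracy)
```

In an online neurofeedback setting, the (possibly graded) classifier output for each incoming trial would serve as the continuous index of perceptual discrimination the authors describe.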
Corina, David P; Blau, Shane; LaMarr, Todd; Lawyer, Laurel A; Coffey-Corina, Sharon
Deaf children who receive a cochlear implant early in life and engage in intensive oral/aural therapy often make great strides in spoken language acquisition. However, despite clinicians' best efforts, there is a great deal of variability in language outcomes. One concern is that cortical regions which normally support auditory processing may become reorganized for visual function, leaving fewer available resources for auditory language acquisition. The conditions under which these changes occur are not well understood, but we may begin investigating this phenomenon by looking for interactions between auditory and visual evoked cortical potentials in deaf children. If children with abnormal auditory responses show increased sensitivity to visual stimuli, this may indicate the presence of maladaptive cortical plasticity. We recorded evoked potentials, using both auditory and visual paradigms, from 25 typically hearing children and 26 deaf children (ages 2-8 years) with cochlear implants. An auditory oddball paradigm was used (85% /ba/ syllables vs. 15% frequency modulated tone sweeps) to elicit an auditory P1 component. Visual evoked potentials (VEPs) were recorded during presentation of an intermittent peripheral radial checkerboard while children watched a silent cartoon, eliciting a P1-N1 response. We observed reduced auditory P1 amplitudes and a lack of the latency shift associated with normative aging in our deaf sample. We also observed shorter latencies in N1 VEPs to visual stimulus offset in deaf participants. While these data demonstrate cortical changes associated with auditory deprivation, we did not find evidence for a relationship between cortical auditory evoked potentials and the VEPs. This is consistent with descriptions of intra-modal plasticity within visual systems of deaf children, but does not provide evidence for cross-modal plasticity. In addition, we note that sign language experience had no effect on deaf children's early auditory and visual ERP responses.
Scheifele, Peter Martin
Noise pollution has only recently become recognized as a potential danger to marine mammals in general, and to the Beluga Whale (Delphinapterus leucas) in particular. These small gregarious Odontocetes make extensive use of sound for social communication and pod cohesion. The St. Lawrence River Estuary is habitat to a small, critically endangered population of about 700 Beluga whales that congregate in four different sites in its upper estuary. The population is believed to be threatened by the stress of high-intensity, low-frequency noise. One way to determine whether noise is having an effect on an animal's auditory ability is to observe a natural and repeatable response of the auditory and vocal systems to varying noise levels. This can be accomplished by observing changes in animal vocalizations in response to auditory feedback. One such response, observed in humans and some animals, is known as the Lombard Vocal Response, a reaction of the auditory system directly manifested as changes in vocalization level. In this research, this population of Beluga Whales was tested to determine whether a vocalization-as-a-function-of-noise phenomenon exists, using Hidden Markov "classified" vocalizations as targets for acoustical analyses. Correlation and regression analyses indicated that the phenomenon does exist, and results of a human subjects experiment, along with results from other animal species known to exhibit the response, strongly implicate the Lombard Vocal Response in the Beluga.
Pederzoli, Aurora; Mola, Lucrezia
During the life cycle of fish, the larval stages are the most interesting and variable. Teleost larvae undergo a daily increase in adaptability, and many organs differentiate and become active. These processes are concerted and require early neuro-immune-endocrine integration. In larvae, communication among the nervous, endocrine and immune systems uses several known families of signal molecules, which may differ from those of the adult fish. The immune-neuroendocrine system has been studied in several fish species, in particular the sea bass (Dicentrarchus labrax), a species of great commercial interest that is very important in aquaculture and thus intensively studied; indeed, the immune system of this species is the best known among marine teleosts. In this review, data on the main stress signal molecules in fish larvae are considered and discussed. For sea bass, active roles in early immunological responses have been proposed for some well-known stress molecules, such as ACTH, nitric oxide, CRF, HSP-70 and cortisol. These molecules and/or their receptors are biologically active mainly in the gut before complete differentiation of the gut-associated lymphoid tissue (GALT), probably acting in an autocrine/paracrine way. An intriguing idea emerges from these studies: the molecules involved in stress responses, expressed in the adult in cells of the hypothalamic-pituitary axis, are present at several other sites during larval life, where they probably perform the same role. It may be hypothesized that the functions performed by the hypothalamic-pituitary system are particularly important for the survival of the larva and therefore involve several other sites in the body. Indeed, the larval stages of fish are crucial phases that include many physiological changes and several possible stressors, both internal and environmental.
Paris, Tim; Kim, Jeesun; Davis, Chris
An important property of visual speech (movements of the lips and mouth) is that it generally begins before auditory speech. Research using brain-based paradigms has demonstrated that seeing visual speech speeds up the activation of the listener's auditory cortex but it is not clear whether these observed neural processes link to behaviour. It was hypothesized that the very early portion of visual speech (occurring before auditory speech) will allow listeners to predict the following auditory event and so facilitate the speed of speech perception. This was tested in the current behavioural experiments. Further, we tested whether the salience of the visual speech played a role in this speech facilitation effect (Experiment 1). We also determined the relative contributions that visual form (what) and temporal (when) cues made (Experiment 2). The results showed that visual speech cues facilitated response times and that this was based on form rather than temporal cues. Copyright © 2013 Elsevier Inc. All rights reserved.
Background The cortical activity underlying the perception of vowel identity has typically been addressed by manipulating the first and second formant frequencies (F1 and F2) of the speech stimuli. These two values, originating from articulation, are already sufficient for the phonetic characterization of vowel category. In the present study, we investigated how the spectral cues caused by articulation are reflected in cortical speech processing when combined with phonation, the other major part of speech production, manifested as the fundamental frequency (F0) and its harmonic integer multiples. To study the combined effects of articulation and phonation, we presented vowels with either high (/a/) or low (/u/) formant frequencies driven by three different types of excitation: a natural periodic pulseform reflecting the vibration of the vocal folds, an aperiodic noise excitation, or a tonal waveform. The auditory N1m response was recorded with whole-head magnetoencephalography (MEG) from ten human subjects in order to resolve whether brain events reflecting articulation and phonation are specific to the left or right hemisphere of the human brain. Results The N1m responses for the six stimulus types displayed a considerable latency range of 115-135 ms, and were elicited faster (by ~10 ms) by the high-formant /a/ than by the low-formant /u/, indicating an effect of articulation. While excitation type had no effect on the latency of the right-hemispheric N1m, the left-hemispheric N1m elicited by the tonally excited /a/ was some 10 ms earlier than that elicited by the periodic and the aperiodic excitation. The amplitude of the N1m in both hemispheres was systematically stronger to stimulation with natural periodic excitation. Also, stimulus type had a marked (up to 7 mm) effect on the source location of the N1m, with periodic excitation resulting in more anterior sources than aperiodic and tonal excitation. Conclusion The auditory brain areas
Robinson, Christopher W.; Sloutsky, Vladimir M.
Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how…
The P13 potential is the rodent equivalent of the P50 potential, an evoked response recorded at the vertex (Vx) 50 msec following an auditory stimulus in humans. Both the P13 and P50 potentials are present only during waking and rapid eye movement (REM) sleep, and are considered measures of level of arousal. The source of the P13 and P50 potentials appears to be the pedunculopontine nucleus (PPN), a brainstem nucleus with indirect ascending projections to the cortex through the intralaminar thalamus (ILT), mediating arousal, and descending inhibitory projections to the caudal pontine reticular formation (CPRF), which mediates the auditory startle response (SR). We tested the hypothesis that intracranial microinjection (ICM) of glutamate (GLU) or GLU receptor agonists would increase the activity of PPN neurons in freely moving animals, resulting in an increased P13 potential response and a decreased SR due to the inhibitory projections from the PPN to the CPRF. Cannulae were inserted into the PPN to inject neuroactive agents, screws were inserted into the Vx to record the P13 potential, and electrodes were inserted into the dorsal nuchal muscle to record electromyograms (EMGs) and SR amplitude. Our results showed that ICM of GLU into the PPN dose-dependently increased the amplitude of the P13 potential and decreased the amplitude of the SR. Similarly, ICM of NMDA or KA into the PPN increased the amplitude of the P13 potential. These findings indicate that glutamatergic input to the PPN plays a role in arousal control in vivo, and that changes in glutamatergic input, or in the excitability of PPN neurons, could be implicated in a number of neuropsychiatric disorders with the common symptoms of hyperarousal and REM sleep dysregulation.
Kamitani, Tatsuo; Haruki, Kazuhito; Matsuda, Minoru
This paper presents an experimental evaluation of the effects of auditory cognition on visual cognition of video. The influences of seven auditory stimuli on visual recognition are investigated based on experimental data from key-down operations. The key-down operations for locating a moving target by visual and auditory images were monitored with an experimental system built from devices including a VTR, CRT, and data recorder. Regression analysis and the EM algorithm were applied to the experimental data of 350 key-down operations, obtained from 50 people and 7 auditory stimulus types. The following characteristic results about the influence of auditory stimuli on visual recognition were derived. First, seven people responded too early in every experiment; the mean and standard deviation of their response times were 439 ms and 231 ms, respectively. Second, the other forty-three people responded about 10 ms late in cases in which auditory images were presented 30 ms or 60 ms before visual images, and about 10 ms early in the other cases. Third, as the visual image was the dominant information used for the key-down decision, no apparent effects of auditory images on the key-down operation were measured. The means and standard deviations of the distributions estimated by the EM algorithm for the 7 auditory stimulus types are considered and verified against Card's MHP model of human response.
Navia, Benjamin; Stout, John; Atkins, Gordon
The L3 auditory interneuron in female Acheta domesticus produces two different responses to the male calling song: an immediate response and a prolonged response. The prolonged response exhibited spiking activity and a correlated prolonged depolarization, both of which are clearly seen in intracellular recordings. The morphology revealed by intracellular staining was clearly that of the L3 neuron. The amplitude of the prolonged depolarization associated with the prolonged response increased with sound intensity, resulting in increased spiking rates. Both depolarization and sound presentation increased the spiking rate and the slope of pre-potentials (thus reaching spiking threshold more quickly); injecting hyperpolarizing current had the expected opposite effect. The effects of positive current injection and sound presentation were additive, resulting in spiking rates approximately double those in response to sound alone. Short postsynaptic potentials (PSPs) with durations of 15-60 ms, which may lead to action potentials, were also observed in all recordings; they summated with the prolonged depolarization, increasing the probability of spiking.
Collignon, O; Davare, M; Olivier, E; De Volder, A G
It is well known that, following an early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. In particular, several studies have demonstrated an extensive activation of occipital brain areas, usually regarded as essentially "visual", when early blind subjects (EB) performed a task that requires spatial processing of sounds. However, little is known about the possible consequences of the activation of occipitals area on the function of the large cortical network known, in sighted subjects, to be involved in the processing of auditory spatial information. To address this issue, we used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right intra-parietal sulcus (rIPS) or the right dorsal extrastriate occipital cortex (rOC) at different delays in EB subjects performing a sound lateralization task. Surprisingly, TMS applied over rIPS, a region critically involved in the spatial processing of sound in sighted subjects, had no influence on the task performance in EB. In contrast, TMS applied over rOC 50 ms after sound onset, disrupted the spatial processing of sounds originating from the contralateral hemifield. The present study shed new lights on the reorganisation of the cortical network dedicated to the spatial processing of sounds in EB by showing an early contribution of rOC and a lesser involvement of rIPS.
SUN Qing; SUN Jian-he; SHAN Xi-zheng; LI Xing-qi
Objective To investigate changes in evoked potentials and structure of the guinea pig cochlea during whole-cochlea perfusion with glutamate. Methods CM, CAP, DPOAE, and ABR were recorded as indicators of cochlear function during whole-cochlea perfusion. The morphology of the cochlea was studied via transmission electron microscopy. Results There were no significant changes in DPOAE amplitude before and after glutamate perfusion. The CM I/O function remained nonlinear during perfusion. ABR latencies were delayed following glutamate perfusion, and the average CAP threshold was elevated by 35 dB SPL. The OHCs appeared normal, but the IHCs and afferent dendrites showed cytoplasmic blebs after glutamate perfusion. Conclusions While glutamate is a primary amino acid neurotransmitter at the synapses between hair cells and spiral ganglion neurons, excessive glutamate is neurotoxic and can destroy IHCs and spiral ganglion neurons. The technique used in this study can also be used to build an animal model of auditory neuropathy.
Matthews, Brandy R; Chang, Chiung-Chih; De May, Mary; Engstrom, John; Miller, Bruce L
Recent functional neuroimaging studies implicate the network of mesolimbic structures known to be active in reward processing as the neural substrate of pleasure associated with listening to music. Psychoacoustic and lesion studies suggest that there is a widely distributed cortical network involved in processing discreet musical variables. Here we present the case of a young man with auditory agnosia as the consequence of cortical neurodegeneration who continues to experience pleasure when exposed to music. In a series of musical tasks, the subject was unable to accurately identify any of the perceptual components of music beyond simple pitch discrimination, including musical variables known to impact the perception of affect. The subject subsequently misidentified the musical character of personally familiar tunes presented experimentally, but continued to report that the activity of 'listening' to specific musical genres was an emotionally rewarding experience. The implications of this case for the evolving understanding of music perception, music misperception, music memory, and music-associated emotion are discussed.
Background Tinnitus is an auditory sensation that frequently follows hearing loss. After cochlear injury, deafferented neurons become sensitive to neighbouring intact edge-frequencies, driving an enhanced central representation of these frequencies. As psychoacoustical data indicate enhanced frequency discrimination ability for edge-frequencies that may be related to reorganization within the auditory cortex, the aim of the present study was twofold: (1) to search for abnormal auditory mismatch responses in tinnitus sufferers and (2) to relate these to subjective indicators of tinnitus. Results Using the EEG mismatch negativity, we demonstrate abnormalities in tinnitus sufferers (N = 15) that are specific to frequencies located at the audiometrically normal lesion edge, as compared to normal-hearing controls (N = 15). The groups also differed with respect to the cortical locations of mismatch responsiveness: sources in the 90-135 ms latency window were generated in more anterior brain regions in the tinnitus group. Both measures of abnormality correlated with the emotional-cognitive distress related to tinnitus (r ~ .76). While these two physiological variables were uncorrelated in the control group, they were correlated in the tinnitus group (r = .72). Concerning relationships with parameters of hearing loss (depth and slope), slope turned out to be an important variable: generally, the steeper the hearing loss, the less distress related to tinnitus was reported. The associations between slope and the relevant neurophysiological variables are in agreement with this finding. Conclusions The present study is the first to show near-to-complete separation of tinnitus sufferers from a normal-hearing control group based on neurophysiological variables. The finding of lesion-edge-specific effects and associations with the slope of hearing loss corroborates the assumption that hearing loss is the basis for tinnitus development. It is likely that some central
Jørgensen, M B; Christensen-Dalsgaard, J
We studied the directionality of spike timing in the responses of single auditory nerve fibers of the grass frog, Rana temporaria, to tone burst stimulation. Both the latency of the first spike after stimulus onset and the preferred firing phase during the stimulus were studied. In addition, the ...
Leo L Lui
Interaural level differences (ILDs) are the dominant cue for localizing the sources of high-frequency sounds that differ in azimuth. Neurons in the primary auditory cortex (A1) respond differentially to ILDs of simple stimuli such as tones and noise bands, but the extent to which this applies to complex natural sounds, such as vocalizations, is not known. In sufentanil/N2O-anaesthetized marmosets, we compared the responses of 76 A1 neurons to three vocalizations (Ock, Tsik and Twitter) and to pure tones at each cell's characteristic frequency. Each stimulus was presented with ILDs ranging from 20 dB favouring the contralateral ear to 20 dB favouring the ipsilateral ear, to cover most of the frontal azimuthal space. The response to each stimulus was tested at three average binaural levels (ABLs). Most neurons were sensitive to the ILDs of vocalizations and pure tones. For all stimuli, the majority of cells had monotonic ILD sensitivity functions favouring the contralateral ear, but we also observed ILD sensitivity functions that peaked near the midline and functions favouring the ipsilateral ear. Representation of ILD in A1 was better for pure tones and the Ock vocalization than for the Tsik and Twitter calls; this was reflected in higher discrimination indices and greater modulation ranges. ILD sensitivity was heavily dependent on ABL: changes in ABL of ±20 dB SPL from the optimal level for ILD sensitivity led to significant decreases in ILD sensitivity for all stimuli, although ILD sensitivity to pure tones and Ock calls was most robust to such ABL changes. Our results demonstrate differences in ILD coding for pure tones and vocalizations, showing that ILD sensitivity in A1 to complex sounds cannot simply be extrapolated from that to pure tones. They also show that A1 neurons do not exhibit a level-invariant representation of ILD, suggesting that such a representation of auditory space is likely to require population coding, and further processing at subsequent
Bendor, Daniel; WANG, Xiaoqin
The core region of primate auditory cortex contains a primary and two primary-like fields (AI, primary auditory cortex; R, rostral field; RT, rostrotemporal field). Although it is reasonable to assume that multiple core fields provide an advantage for auditory processing over a single primary field, the differential roles these fields play and whether they form a functional pathway collectively such as for the processing of spectral or temporal information are unknown. In this report we compa...
Pickles, James O
This chapter outlines the anatomy and physiology of the auditory pathways. After a brief treatment of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.
Itatani, Naoya; Klump, Georg M
Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.
Mason, Christine R; Idrobo, Fabio; Early, Susan J; Abibi, Ayome; Zheng, Ling; Harrison, J Michael; Carney, Laurel H
Experimental studies were performed using a Pavlovian-conditioned eyeblink response to measure detection of a variable-sound-level tone (T) in a fixed-sound-level masking noise (N) in rabbits. Results showed an increase in the asymptotic probability of conditioned responses (CRs) to the reinforced TN trials and a decrease in the asymptotic rate of eyeblink responses to the non-reinforced N presentations as a function of the sound level of the T. These observations are consistent with expected behaviour in an auditory masked detection task, but they are not consistent with predictions from a traditional application of the Rescorla-Wagner or Pearce models of associative learning. To implement these models, one typically considers only the actual stimuli and reinforcement on each trial. We found that by considering perceptual interactions and concepts from signal detection theory, these models could predict the CR dependence on the sound level of the T. In these alternative implementations, the animals' response probabilities were used as a guide in making assumptions about the "effective stimuli".
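The modified-model idea above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual implementation: the mapping from tone level to the probability that the tone is "effectively present" on a trial is a hypothetical stand-in for the signal-detection step, and the trial schedule and learning rate are invented for the demo.

```python
# Rescorla-Wagner learning with a signal-detection "effective stimulus":
# on reinforced tone-in-noise (TN) trials the tone element T only enters the
# stimulus compound with some probability p (hypothetically tied to tone
# level), so near-threshold tones accrue less associative strength.
import numpy as np

def train(p_tone_effective, n_trials=2000, alpha=0.2, lam=1.0, seed=1):
    rng = np.random.default_rng(seed)
    V = {"T": 0.0, "N": 0.0}                     # associative strengths
    for i in range(n_trials):
        reinforced = i % 2 == 0                  # alternate TN+ and N- trials
        present = {"N"}
        if reinforced and rng.random() < p_tone_effective:
            present.add("T")                     # tone detected -> joins compound
        total = sum(V[s] for s in present)
        target = lam if reinforced else 0.0
        for s in present:
            V[s] += alpha * (target - total)     # shared prediction error
    return V

V_loud = train(0.95)   # high-level tone: almost always effectively present
V_soft = train(0.30)   # near-threshold tone: rarely enters the compound
# V_loud["T"] ends up larger than V_soft["T"], predicting more CRs to
# louder tones, as observed behaviorally.
```

Under the plain model (tone always present when delivered), V("T") would be independent of tone level; the level dependence emerges only through the detection-probability assumption.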
ZHANG Yun-ting; GENG Zuo-jun; ZHANG Quan; LI Wei; ZHANG Jing
In both the hearing loss and the healthy subjects, the most evident auditory evoked fields activated by pure tone were the N100m, which was located precisely on Heschl's gyrus. Compared with the hearing loss subjects, the N100m of the healthy subjects was stronger and had longer latencies in the right hemisphere. Conclusions: Under proper pure tone stimulus the activation of auditory cortex can be elicited both in the healthy and the sensorineural hearing loss subjects. Either at objectively equivalent stimuli or at subjectively perceived equivalent stimuli, the auditory responses were more intensive in healthy subjects than in hearing loss subjects. The tone stimuli were processed in a network in the human brain and there was an intrinsic relation between the auditory and visual cortex. Blood oxygen level dependent fMRI and magnetoencephalography could reinforce each other.
Background. Alzheimer's disease (AD) patients have a poor response to the voices of caregivers. After administration of donepezil, caregivers often find that patients respond more frequently, whereas they had previously pretended to be "deaf." We investigated whether auditory selective attention is associated with response to donepezil. Methods. The subjects were 40 AD patients, 20 elderly healthy controls (HCs), and 15 young HCs. Pure tone audiometry was conducted and an original Auditory Selective Attention (ASA) test was performed with a MoCA vigilance test. Reassessment of the AD group was performed after donepezil treatment for 3 months. Results. Hearing level of the AD group was the same as that of the elderly HC group. However, ASA test scores decreased in the AD group and were correlated with the vigilance test scores. Donepezil responders (MMSE 3+) also showed improvement on the ASA test. At baseline, the responders had higher vigilance and lower ASA test scores. Conclusion. Contrary to the common view, AD patients had a similar level of hearing ability to the healthy elderly. Auditory attention was impaired in AD patients, which suggests that unnecessary sounds should be avoided in nursing homes. Auditory selective attention is associated with response to donepezil in AD.
BACKGROUND: Sudden sensorineural hearing loss (SSNHL) is a perplexing condition for patients and there are many controversies about its etiology, audiologic characteristics, prognostic factors, and treatment. METHODS: In this prospective study, we performed several audiologic tests, including PTA, IA, ABR, and OAE (TEOAE), before beginning treatment of 53 patients with SSNHL. We assigned the patients randomly to two treatment groups: oral steroids + acyclovir vs. intravenous urographin. Twenty-eight patients underwent magnetic resonance imaging (MRI) of the brain. RESULTS: Of 53 patients (22 female and 31 male), 22 (41.5%) had a negative or no signal-to-noise ratio and overall correlation in TEOAE. Twenty-six patients (49%) had positive overall correlations less than 50%, and 5 patients (4.4%) had overall correlations >50%. Fifteen patients (28.3%) responded completely or well, 20 (37.7%) responded partially, and 18 (33.9%) had poor or no response to the treatment. The mean values for overall correlation in the 3 subgroups of patients (no response, partial response, and complete response) were −3.5% (±1.16%), +11% (±1.99%), and +36.6% (±3.07%), respectively (P = 0.01). Twenty out of 52 patients had no reproducible wave in ABR (38.5%), and waves I, III, and V were absent in 40 (77%), 31 (59.6%), and 21 (40%) patients, respectively. There were some limitations (false positive and false negative results) in ABR use in our cases, but it may be useful in detecting the site of lesion in SSNHL. Overall, according to the results of OAE, ABR, and brain MRI of these patients, 3 were affected by acoustic neurinomas, at least 1 had auditory neuropathy, and the site of lesion was cochlear in 6, and cochlear + retrocochlear in 13 patients. CONCLUSIONS: ABR has limitations for use in SSNHL and seems not to obviate the need for brain MRI, but may help in determining the site of lesions such as ischemia or neuropathy. Overall correlation (and S/N ratio) in TEOAE is a valuable
Yang, L M; Vicario, D S
Perceptual filters formed early in development provide an initial means of parsing the incoming auditory stream. However, these filters may not remain fixed, and may be updated by subsequent auditory input, such that, even in an adult organism, the auditory system undergoes plastic changes to achieve a more efficient representation of the recent auditory environment. Songbirds are an excellent model system for experimental studies of auditory phenomena due to many parallels between song learning in birds and language acquisition in humans. In the present study, we explored the effects of passive immersion in a novel heterospecific auditory environment on neural responses in caudo-medial neostriatum (NCM), a songbird auditory area similar to the secondary auditory cortex in mammals. In zebra finches, a well-studied species of songbirds, NCM responds selectively to conspecific songs and contains a neuronal memory for tutor and other familiar conspecific songs. Adult male zebra finches were randomly assigned to either a conspecific or heterospecific auditory environment. After 2, 4 or 9 days of exposure, subjects were presented with heterospecific and conspecific songs during awake electrophysiological recording. The neural response strength and rate of adaptation to the testing stimuli were recorded bilaterally. Controls exposed to conspecific environment sounds exhibited the normal pattern of hemispheric lateralization with higher absolute response strength and faster adaptation in the right hemisphere. The pattern of lateralization was fully reversed in birds exposed to heterospecific environment for 4 or 9 days and partially reversed in birds exposed to heterospecific environment for 2 days. Our results show that brief passive exposure to a novel category of sounds was sufficient to induce a gradual reorganization of the left and right secondary auditory cortices. These changes may reflect modification of perceptual filters to form a more efficient representation
Ramsier, Marissa A; Dominy, Nathaniel J
Primates depend on acoustic signals and cues to avoid predators, locate food, and share information. Accordingly, the structure and function of acoustic stimuli have long been emphasized in studies of primate behavioral and cognitive ecology. Yet, few studies have addressed how well primates hear such stimuli; indeed, the auditory thresholds of most primate species are unknown. This empirical void is due in part to the logistic and economic challenges attendant on traditional behavioral testing methods. Technological advances have produced a safe and cost-effective alternative: the auditory brainstem response (ABR) method, which can be utilized in field conditions, on virtually any animal species, and without subject training. Here we used the ABR and four methods of threshold determination to construct audiograms for two strepsirrhine primates: the ring-tailed lemur (Lemur catta) and slow loris (Nycticebus coucang). Next, to verify the general efficacy of the ABR method, we compared our results to published behaviorally-derived audiograms. We found that the four ABR threshold detection methods produced similar results, including relatively elevated thresholds but similarly shaped audiograms compared to those derived behaviorally. The ABR and behavioral absolute thresholds were significantly correlated, and the frequencies of best sensitivity and high-frequency limits were comparable. However, at lower frequencies in Lemur, the ABR 10-dB range starting points were more than 2 octaves higher than the behavioral points. Finally, a comparison of ABR- and behaviorally-derived audiograms from various animal taxa demonstrates the widespread efficacy of the ABR for estimating frequency of best sensitivity, but otherwise suggests caution; factors such as stimulus properties and threshold definition affect results. We conclude that the ABR method is a promising technique for estimating primate hearing sensitivity, but that additional data are required to explore its efficacy for
Koerner, Tess K; Zhang, Yang
This study investigated the effects of a speech-babble background noise on inter-trial phase coherence (ITPC, also referred to as phase locking value (PLV)) and auditory event-related responses (AERP) to speech sounds. Specifically, we analyzed EEG data from 11 normal hearing subjects to examine whether ITPC can predict noise-induced variations in the obligatory N1-P2 complex response. N1-P2 amplitude and latency data were obtained for the /bu/ syllable in quiet and noise listening conditions. ITPC data in delta, theta, and alpha frequency bands were calculated for the N1-P2 responses in the two passive listening conditions. Consistent with previous studies, background noise produced significant amplitude reduction and latency increase in N1 and P2, which were accompanied by significant ITPC decreases in all three frequency bands. Correlation analyses further revealed that variations in ITPC were able to predict the amplitude and latency variations in N1-P2. The results suggest that trial-by-trial analysis of cortical neural synchrony is a valuable tool in understanding the modulatory effects of background noise on AERP measures.
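The ITPC measure used above has a compact definition: for each frequency, take the phase of each trial's spectrum, average the unit phasors across trials, and take the magnitude. A minimal numpy sketch (synthetic data, not the study's EEG; epoch length, sampling rate, and the 4 Hz component are illustrative choices):

```python
# Inter-trial phase coherence (phase-locking value) per FFT frequency bin:
# ITPC(f) = | mean over trials of exp(i * phase_k(f)) |, in [0, 1].
import numpy as np

def itpc(trials):
    """trials: array (n_trials, n_samples), one epoch per row."""
    spec = np.fft.rfft(trials, axis=1)       # complex spectrum per trial
    phasors = spec / np.abs(spec)            # keep phase only (unit phasors)
    return np.abs(phasors.mean(axis=0))      # coherence of phase across trials

# Synthetic demo: a phase-locked 4 Hz component buried in trial-by-trial noise.
rng = np.random.default_rng(0)
fs, n_trials, n_samp = 256, 50, 256          # 1-s epochs, 1 Hz bin spacing
t = np.arange(n_samp) / fs
trials = np.sin(2 * np.pi * 4 * t) + rng.normal(0, 1.0, (n_trials, n_samp))
coh = itpc(trials)
freqs = np.fft.rfftfreq(n_samp, 1 / fs)
# coh is high at the phase-locked 4 Hz bin and low at unrelated bins;
# adding noise (jittering phase across trials) pulls ITPC toward 0.
```

Degrading the stimulus-locked component, as background babble does in the study, randomizes single-trial phase and drives ITPC down, which is exactly the quantity the N1-P2 correlations are built on.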
Nielsen, Andreas Højlund; Gebauer, Line; Mcgregor, William
Objective: An early component of the auditory event-related potential (ERP), the mismatch negativity (MMN/MMNm), has been shown to be sensitive to native versus non-native language sounds (Brandmeyer et al., 2012; Kazanina et al., 2006; Näätänen et al., 1997), i.e. sensitive to phonemic versus allophonic sound contrasts. So far this has only been attested between languages. In the present study we wished to investigate this effect within the same language: does the same sound contrast that is phonemic in one environment, but allophonic in another, elicit different MMNm responses in native […] thus as deviants, respectively. Data were preprocessed using Elekta's MaxFilter software and SPM8; all statistical analyses were conducted in sensor-space using SPM8. Results: Focusing on the 150-300 ms time period after stimulus onset (typical MMNm time range for language sound contrasts), only […]
Dettman, Shani; Wall, Elizabeth; Constantinescu, Gabriella; Dowell, Richard
The relative impact of early intervention approach on speech perception and language skills was examined in 3 well-matched groups of children using cochlear implants. Eight children from an auditory verbal intervention program were identified. From a pediatric database, researchers, blind to the outcome data, identified 23 children from auditory oral programs and 8 children from bilingual-bicultural programs with the same inclusion criteria and equivalent demographic factors. All child participants were male, had congenital profound hearing loss (pure tone average >80 dBHL), no additional disabilities, were within the normal IQ range, were monolingual English speakers, had no unusual findings on computed tomography/magnetic resonance imaging, and received hearing aids and cochlear implants at a similar age and before 4 years of age. Open-set speech perception (consonant-nucleus-consonant [CNC] words and Bamford-Kowal-Bench [BKB] sentences) and the Peabody Picture Vocabulary Test (PPVT) were administered. The mean age at cochlear implant was 1.7 years (range, 0.8-3.9; SD, 0.7), mean test age was 5.4 years (range, 2.5-10.1; SD, 1.7), and mean device experience was 3.7 years (range, 0.7-7.9; SD, 1.8). Results indicate mean CNC scores of 60%, 43%, and 24% and BKB scores of 77%, 77%, and 56% for the auditory-verbal (AV), aural-oral (AO), and bilingual-bicultural (BB) groups, respectively. The mean PPVT delay was 13, 19, and 26 months for AV, AO, and BB groups, respectively. Despite equivalent child demographic characteristics at the outset of this study, by 3 years postimplant, there were significant differences in AV, AO, and BB groups. Results support consistent emphasis on oral/aural input to achieve optimum spoken communication outcomes for children using cochlear implants.
Bailey, Jennifer Anne; Zatorre, Robert J; Penhune, Virginia B
Evidence in animals and humans indicates that there are sensitive periods during development, times when experience or stimulation has a greater influence on behavior and brain structure. Sensitive periods are the result of an interaction between maturational processes and experience-dependent plasticity mechanisms. Previous work from our laboratory has shown that adult musicians who begin training before the age of 7 show enhancements in behavior and white matter structure compared with those who begin later. Plastic changes in white matter and gray matter are hypothesized to co-occur; therefore, the current study investigated possible differences in gray matter structure between early-trained (ET; began before age 7) and late-trained (LT; began at age 7 or later) musicians, matched for years of experience. Gray matter structure was assessed using voxel-wise analysis techniques (optimized voxel-based morphometry, traditional voxel-based morphometry, and deformation-based morphometry) and surface-based measures (cortical thickness, surface area and mean curvature). Deformation-based morphometry analyses identified group differences between ET and LT musicians in right ventral premotor cortex (vPMC), which correlated with performance on an auditory motor synchronization task and with age of onset of musical training. In addition, cortical surface area in vPMC was greater for ET musicians. These results are consistent with evidence that premotor cortex shows greatest maturational change between the ages of 6-9 years and that this region is important for integrating auditory and motor information. We propose that the auditory and motor interactions required by musical practice drive plasticity in vPMC and that this plasticity is greatest when maturation is near its peak.
Junius, D.; Dau, Torsten
The present study investigates the relationship between evoked responses to transient broadband chirps and responses to the same chirps when embedded in longer-duration stimuli. It examines to what extent the responses to the composite stimuli can be explained by a linear superposition of the res...
Simon, Jonathan Z
Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects. Copyright © 2014 Elsevier B.V. All rights reserved.
Prolonged response to calling songs by the L3 auditory interneuron in female crickets (Acheta domesticus): possible roles in regulating phonotactic threshold and selectiveness for call carrier frequency.
Bronsert, Michael; Bingol, Hilary; Atkins, Gordon; Stout, John
L3, an auditory interneuron in the prothoracic ganglion of female crickets (Acheta domesticus), exhibited two kinds of responses to models of the male's calling song (CS): a previously described, phasically encoded immediate response, and a more tonically encoded prolonged response. The onset of the prolonged response required 3-8 sec of stimulation to reach its maximum spiking rate and 6-20 sec to decay once the calling song ceased. It did not encode the syllables of the chirp. The prolonged response was sharply selective for the 4-5 kHz carrier frequency of the male's calling songs and its threshold tuning matched the threshold tuning of phonotaxis, while the immediate response of the same neuron was broadly tuned to a wide range of carrier frequencies. The thresholds for the prolonged response covaried with the changing phonotactic thresholds of 2- and 5-day-old females. Treatment of females with juvenile hormone reduced the thresholds for both phonotaxis and the prolonged response by equivalent amounts. Of the 3 types of responses to CSs provided by the ascending L1 and L3 auditory interneurons, the threshold for L3's prolonged response, on average, best matched the same female's phonotactic threshold. The prolonged response was stimulated by inputs from both ears while L3's immediate response was driven only from its axon-ipsilateral ear. The prolonged response was not selective for either the CS's syllable period or chirp rate.
Blom, Jan Dirk
Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.
Juan Carlos eAlvarado
An appropriate conditioning noise exposure may reduce a subsequent noise-induced threshold shift. Although this toughening effect helps to protect the auditory system from a subsequent traumatic noise exposure, the mechanisms that regulate this protective process are not fully understood yet. Accordingly, the goal of the present study was to characterize physiological processes associated with 'toughening' and to determine their relationship to metabolic changes in the cochlea and cochlear nucleus (CN). Auditory brainstem responses (ABR) were evaluated in Wistar rats before and after exposures to a sound conditioning protocol consisting of a broad-band white noise of 118 dB SPL for 1 h every 72 h, 4 times. After the last ABR evaluation, animals were perfused and their cochleae and brains removed and processed for the activity markers calretinin (CR) and neuronal nitric oxide synthase (nNOS). Toughening was demonstrated by a progressively faster recovery of the threshold shift, as well as wave amplitudes and latencies over time. Immunostaining revealed an increase in CR and nNOS levels in the spiral ganglion, spiral ligament and CN in noise-conditioned rats. Overall, these results suggest that the protective mechanisms of the auditory toughening effect initiate in the cochlea and extend to the central auditory system. Such phenomenon might be in part related to an interplay between CR and nitric oxide signalling pathways, and involve an increased cytosolic calcium buffering capacity induced by the noise conditioning protocol.
An objective, fast, and reasonably accurate assessment test that allows for easy interpretation of the responses of the hearing thresholds at all frequencies of a conventional audiogram is needed to resolve the medicolegal aspects of an occupational hearing injury. This study evaluated the use of dichotic multiple-frequency auditory steady-state responses (Mf-ASSR) to predict the hearing thresholds in workers exposed to high levels of noise. The study sample included 34 workers with noise-induced hearing impairment. Thresholds of pure-tone audiometry (PTA) and Mf-ASSRs at four frequencies were assessed. The differences and correlations between the thresholds of Mf-ASSRs and PTA were determined. The results showed that, on average, Mf-ASSR curves corresponded well with the thresholds of the PTA contours averaged across subjects. The Mf-ASSRs were 20±8 dB, 16±9 dB, 12±9 dB, and 11±12 dB above the thresholds of the PTA for 500 Hz, 1,000 Hz, 2,000 Hz, and 4,000 Hz, respectively. The thresholds of the PTA and the Mf-ASSRs were significantly correlated (r = 0.77–0.89). We found that the measurement of Mf-ASSRs is easy and potentially time saving, provides a response at all dichotic multiple frequencies of the conventional audiogram, reduces variability in the interpretation of the responses, and correlates well with the behavioral hearing thresholds in subjects with occupational noise-induced hearing impairment. Mf-ASSR can be a valuable aid in the adjustment of compensation cases.
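The practical use of these results is a per-frequency correction: subtract the mean ASSR-PTA offset from a measured ASSR threshold to estimate the behavioral threshold. The offsets below are the group means reported in the abstract; the simulated per-subject data (distribution, noise level) are invented purely to illustrate the correction and correlation steps:

```python
# Per-frequency correction of Mf-ASSR thresholds toward PTA thresholds,
# using the mean ASSR-minus-PTA offsets reported above (20/16/12/11 dB).
import numpy as np

OFFSETS_DB = {500: 20, 1000: 16, 2000: 12, 4000: 11}   # ASSR minus PTA

def predict_pta(assr_db, freq_hz):
    """Estimate a behavioral (PTA) threshold from an ASSR threshold."""
    return assr_db - OFFSETS_DB[freq_hz]

# Simulated cohort at 4 kHz (hypothetical subject-level data):
rng = np.random.default_rng(3)
pta = rng.uniform(25, 70, 34)                           # 34 workers, dB HL
assr = pta + OFFSETS_DB[4000] + rng.normal(0, 6, 34)    # ASSR sits above PTA
r = np.corrcoef(assr, pta)[0, 1]                        # Pearson correlation
# The mean ASSR-PTA difference recovers the ~11 dB offset, and r is strong,
# mirroring the kind of agreement the study reports.
```

Note that the ±8 to ±12 dB spreads quoted in the abstract mean any single-subject prediction still carries uncertainty of roughly one audiometric step or more.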
In both humans and rodents, decline in cognitive function is a hallmark of the aging process, yet the basis for this decrease has yet to be fully characterized. However, using aged rodent models, deficits in auditory processing have been associated with significant decreases in inhibitory signaling attributed to a loss of GABAergic interneurons. Not only are these interneurons crucial for pattern detection and other large-scale population dynamics, but they have also been linked to mechanisms mediating plasticity and learning, making them a prime candidate for study and modelling of modifications to cortical communication pathways in neurodegenerative diseases. Using the rat primary auditory cortex (A1) as a model, we probed the known markers of GABAergic interneurons with immunohistological methods, using antibodies against gamma aminobutyric acid (GABA), parvalbumin (PV), somatostatin (SOM), calretinin (CR), vasoactive intestinal peptide (VIP), choline acetyltransferase (ChAT), neuropeptide Y (NPY) and cholecystokinin (CCK) to document the changes observed in interneuron populations across the rat's lifespan. This analysis provided strong evidence that several but not all GABAergic neurons were affected by the aging process, with the most dramatic changes in parvalbumin (PV) and somatostatin (SOM) expression. With this evidence, we show how understanding these trajectories of cell counts may be factored into a simple model to quantify changes in inhibitory signalling across the course of life, which may be applied as a framework for creating more advanced simulations of interneuronal implication in normal cerebral processing, normal aging, or pathological processes.
We investigated variability of responses to emotionally important auditory stimulation in different groups of TBI (traumatic brain injury) patients in the acute state or in recovery. The patient sample consisted of three groups: patients in coma or a vegetative state, and patients with severe or moderate TBI in the recovery period. Subjects were stimulated with auditory stimuli containing important physiological sounds (coughing, vomiting), emotional sounds (laughing, crying), nature sounds (bird song, barking), unpleasant household sounds (nails scratching glass), natural sounds (sea, rain, fire) and neutral sounds (white noise). The background encephalographic activity was registered for at least 7 minutes. EEG was recorded using the portable device "Entsefalan". Significant differences in the power of rhythmic activity registered during the presentation of the different types of stimuli were analyzed using Matlab and Statistica 6.0. Results showed that the EEG response to emotional stimuli differed depending on consciousness level, stimulus type, and severity of TBI. The most pronounced changes in EEG spectral power in patients with TBI were found for unpleasant auditory stimulation. Responsiveness to pleasant stimulation emerged at later stages of emergence from coma than responsiveness to unpleasant stimulation. Alpha reactivity is reduced in patients with TBI: alpha rhythm depression was most evident in the control group, less so in the group after moderate TBI, and even less in the group after severe TBI. Patients in coma or a vegetative state did not show any response in rhythmic power at the alpha rhythm frequency.
Abnormal auditory responses are one of the common features in autism. The characteristics and pathogenesis of the abnormality are not yet clear. The results of electrophysiological hearing evaluations (i.e., brainstem auditory evoked potentials, otoacoustic emissions) in autistic children are inconsistent. The abnormal auditory responses may contribute to the poor social interaction and communication in autism.
Cho, Sung-Woo; Han, Kyu-Hee; Jang, Hyun-Kyung; Chang, Sun O; Jung, Hyunseo; Lee, Jun Ho
Evaluation of the characteristic differences between click- and CE-Chirp-evoked auditory brainstem responses (ABRs) in normal hearing and sensorineural hearing loss. A prospective study. Ears with normal hearing and with sensorineural hearing loss were evaluated. Pure-tone audiometry and click- and CE-Chirp-evoked ABR exams were conducted for all ears. Visual detection levels, wave-V amplitudes, and latencies of the ABRs were assessed. Twenty-two ears with normal hearing and 22 ears with sloping type sensorineural hearing loss were examined. In normal-hearing ears, mean amplitudes were larger for CE-Chirps than for clicks at all intensities until 80 dB nHL, at which the amplitudes dropped off, presumably due to upward spread of excitation. In ears with sensorineural hearing loss, however, the drop-off was less significant at 80 dB nHL. Comparisons with pure-tone audiometry findings revealed ABRs to CE-Chirps to correlate at 0.5, 1, 2, and 3 kHz, and to clicks at 1, 2, 3, and 4 kHz. The CE-Chirp has advantages over clicks for examining normal ears. However, under high-level stimulation, these advantages are no longer present. In ears with sensorineural hearing loss, the upward spread of excitation is less prominent. The CE-Chirp results correlate significantly to low frequency audiometric findings at 0.5 kHz, while clicks do not.
Cui, Jianguo; Zhu, Bicheng; Fang, Guangzhan; Smith, Ed; Brauth, Steven E.; Tang, Yezhong
Anesthesia is known to affect the auditory brainstem response (ABR) in mice, rats, birds and lizards. The present study investigated how the level of anesthesia affects ABR recordings in an amphibian species, Babina daunchina. To do this, we compared ABRs evoked by tone pip stimuli recorded from 35 frogs when Tricaine methane sulphonate (MS-222) anesthetic immersion times varied from 0, 5 and 10 minutes after anesthesia induction at sound frequencies between 0.5 and 6 kHz. ABR thresholds increased significantly with immersion time across the 0.5 kHz to 2.5 kHz frequency range, which is the most sensitive frequency range for hearing and the main frequency range of male calls. There were no significant differences for anesthetic levels across the 3 kHz to 6 kHz range. ABR latency was significantly longer in the 10 min group than in the 0 and 5 min groups at frequencies of 0.5, 1.0, 1.5, 2.5 kHz, while ABR latency did not differ across the 3 kHz to 4 kHz range and at 2.0 kHz. Taken together, these results show that the level of anesthesia affects the amplitude, threshold and latency of ABRs in frogs. PMID:28056042
Aihara, Noritaka; Murakami, Shingo; Takahashi, Mariko; Yamada, Kazuo
We classified the results of preoperative auditory brainstem response (ABR) in 121 patients with useful hearing and considered the utility of preoperative ABR as a preliminary assessment for intraoperative monitoring. Wave V was confirmed in 113 patients and was not confirmed in 8 patients. Intraoperative ABR could not detect wave V in these 8 patients. The 8 patients without wave V were classified into two groups (flat and wave I only), and the reason why wave V could not be detected may have differed between the groups. Because high-frequency hearing was impaired in flat patients, an alternative to click stimulation may be more effective. Monitoring cochlear nerve action potential (CNAP) may be useful because CNAP could be detected in 4 of 5 wave I only patients. Useful hearing was preserved after surgery in 1 patient in the flat group and 2 patients in wave I only group. Among patients with wave V, the mean interaural latency difference of wave V was 0.88 ms in Class A (n = 57) and 1.26 ms in Class B (n = 56). Because the latency of wave V is already prolonged before surgery, to estimate delay in wave V latency during surgery probably underestimates cochlear nerve damage. Recording intraoperative ABR is indispensable to avoid cochlear nerve damage and to provide information for surgical decisions. Confirming the condition of ABR before surgery helps to solve certain problems, such as choosing to monitor the interaural latency difference of wave V, CNAP, or alternative sound-evoked ABR.
Money, M K; Pippin, G W; Weaver, K E; Kirsch, J P; Webster, D B
Exogenous administration of GM1 ganglioside to CBA/J mice with a neonatal conductive hearing loss ameliorates the atrophy of spiral ganglion neurons, ventral cochlear nucleus neurons, and ventral cochlear nucleus volume. The present investigation demonstrates the extent of a conductive loss caused by atresia and tests the hypothesis that GM1 ganglioside treatment will ameliorate the conductive hearing loss. Auditory brainstem responses were recorded from four groups of seven mice each: two groups received daily subcutaneous injections of saline (one group had normal hearing; the other had a conductive hearing loss); the other two groups received daily subcutaneous injections of GM1 ganglioside (one group had normal hearing; the other had a conductive hearing loss). In mice with a conductive loss, decreases in hearing sensitivity were greatest at high frequencies. The decreases were determined by comparing mean ABR thresholds of the conductive loss mice with those of normal hearing mice. The conductive hearing loss induced in the mice in this study was similar to that seen in humans with congenital aural atresias. GM1 ganglioside treatment had no significant effect on ABR wave I thresholds or latencies in either group.
Magdesian, K Gary; Williams, D Colette; Aleman, Monica; Lecouteur, Richard A; Madigan, John E
To evaluate deafness in American Paint Horses by phenotype, clinical findings, brainstem auditory-evoked responses (BAERs), and endothelin B receptor (EDNRB) genotype. Case series and case-control studies. 14 deaf American Paint Horses, 20 suspected-deaf American Paint Horses, and 13 nondeaf American Paint Horses and Pintos. Horses were categorized on the basis of coat color pattern and eye color. Testing for the EDNRB gene mutation (associated with overo lethal white foal syndrome) and BAER testing were performed. Additional clinical findings were obtained from medical records. All 14 deaf horses had loss of all BAER waveforms consistent with complete deafness. Most horses had the splashed white or splashed white-frame blend coat pattern. Other patterns included frame overo and tovero. All of the deaf horses had extensive head and limb white markings, although the amount of white on the neck and trunk varied widely. All horses had at least 1 partially heterochromic iris, and most had 2 blue eyes. Ninety-one percent (31/34) of deaf and suspected-deaf horses had the EDNRB gene mutation. Deaf and suspected-deaf horses were used successfully for various performance events. All nondeaf horses had unremarkable BAER results. Veterinarians should be aware of deafness among American Paint Horses, particularly those with a splashed white or frame overo coat color pattern, blend of these patterns, or tovero pattern. Horses with extensive head and limb markings and those with blue eyes appeared to be at particular risk.
Chintanpalli, Ananthakrishna; Jennings, Skyler G; Heinz, Michael G; Strickland, Elizabeth A
The medial olivocochlear reflex (MOCR) has been hypothesized to provide benefit for listening in noise. Strong physiological support for an anti-masking role for the MOCR has come from the observation that auditory nerve (AN) fibers exhibit reduced firing to sustained noise and increased sensitivity to tones when the MOCR is elicited. The present study extended a well-established computational model for normal-hearing and hearing-impaired AN responses to demonstrate that these anti-masking effects can be accounted for by reducing outer hair cell (OHC) gain, which is a primary effect of the MOCR. Tone responses in noise were examined systematically as a function of tone level, noise level, and OHC gain. Signal detection theory was used to predict detection and discrimination for different spontaneous rate fiber groups. Decreasing OHC gain decreased the sustained noise response and increased maximum discharge rate to the tone, thus modeling the ability of the MOCR to decompress AN fiber rate-level functions. Comparing the present modeling results with previous data from AN fibers in decerebrate cats suggests that the ipsilateral masking noise used in the physiological study may have elicited up to 20 dB of OHC gain reduction in addition to that inferred from the contralateral noise effects. Reducing OHC gain in the model also extended the dynamic range for discrimination over a wide range of background noise levels. For each masker level, an optimal OHC gain reduction was predicted (i.e., where maximum discrimination was achieved without increased detection threshold). These optimal gain reductions increased with masker level and were physiologically realistic. Thus, reducing OHC gain can improve tone-in-noise discrimination even though it may produce a “hearing loss” in quiet. Combining MOCR effects with the sensorineural hearing loss effects already captured by this computational AN model will be beneficial for exploring the implications of their interaction
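The signal detection theory step described above can be sketched in a few lines. This is a generic rate-based d' computation under a Poisson spike-count assumption, not the study's exact decision model; the rates, duration, and fiber count below are illustrative values only:

```python
import math

def dprime(rate_signal, rate_noise, dur=0.05, n_fibers=10):
    """Rate-based d' for tone-in-noise detection, assuming approximately
    Poisson spike counts pooled over n_fibers fibers. A textbook SDT
    sketch; parameters are illustrative, not the study's values."""
    mu_s = rate_signal * dur * n_fibers   # mean count with tone present
    mu_n = rate_noise * dur * n_fibers    # mean count with noise alone
    var = 0.5 * (mu_s + mu_n)             # Poisson: variance equals mean
    return (mu_s - mu_n) / math.sqrt(var)

# Reducing OHC gain lowers the sustained noise-driven rate (here from
# 80 to 60 spikes/s) while the tone-driven rate stays at 120 spikes/s,
# so d' for detecting the tone in noise increases.
d_full_gain = dprime(120, 80)
d_reduced = dprime(120, 60)
```

The same comparison, swept over noise level, is the kind of computation behind the "optimal OHC gain reduction per masker level" result summarized above.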
Durand-Rivera, A; Gonzalez-Pina, R; Hernandez-Godinez, B; Ibanez-Contreras, A; Bueno-Nava, A; Alfaro-Rodriguez, A
We describe two clinical cases and examine the effects of piracetam on the brainstem auditory response in infant female rhesus monkeys (Macaca mulatta). The interwave intervals showed a greater reduction in a 3-year-old rhesus monkey than in a 1-year-old rhesus monkey. In this report, we discuss the significance of these observations.
Costa, Margarida; Lepore, Franco; Prévost, François; Guillemot, Jean-Paul
Hearing loss is a hallmark sign in the elderly population. Decline in auditory perception provokes deficits in the ability to localize sound sources and reduces speech perception, particularly in noise. In addition to a loss of peripheral hearing sensitivity, changes in more complex central structures have also been demonstrated. Related to these, this study examines the auditory directional maps in the deep layers of the superior colliculus of the rat. Hence, anesthetized Sprague-Dawley adult (10 months) and aged (22 months) rats underwent distortion product of otoacoustic emissions (DPOAEs) to assess cochlear function. Then, auditory brainstem responses (ABRs) were assessed, followed by extracellular single-unit recordings to determine age-related effects on central auditory functions. DPOAE amplitude levels were decreased in aged rats although they were still present between 3.0 and 24.0 kHz. ABR level thresholds in aged rats were significantly elevated at an early (cochlear nucleus - wave II) stage in the auditory brainstem. In the superior colliculus, thresholds were increased and the tuning widths of the directional receptive fields were significantly wider. Moreover, no systematic directional spatial arrangement was present among the neurons of the aged rats, implying that the topographical organization of the auditory directional map was abolished. These results suggest that the deterioration of the auditory directional spatial map can, to some extent, be attributable to age-related dysfunction at more central, perceptual stages of auditory processing.
Slugocki, Christopher; Bosnyak, Daniel; Trainor, Laurel J
Recent electrophysiological work has evinced a capacity for plasticity in subcortical auditory nuclei in human listeners. Similar plastic effects have been measured in cortically-generated auditory potentials but it is unclear how the two interact. Here we present Simultaneously-Evoked Auditory Potentials (SEAP), a method designed to concurrently elicit electrophysiological brain potentials from inferior colliculus, thalamus, and primary and secondary auditory cortices. Twenty-six normal-hearing adult subjects (mean 19.26 years, 9 male) were exposed to 2400 monaural (right-ear) presentations of a specially-designed stimulus which consisted of a pure-tone carrier (500 or 600 Hz) that had been amplitude-modulated at the sum of 37 and 81 Hz (depth 100%). Presentation followed an oddball paradigm wherein the pure-tone carrier was set to 500 Hz for 85% of presentations and pseudo-randomly changed to 600 Hz for the remaining 15% of presentations. Single-channel electroencephalographic data were recorded from each subject using a vertical montage referenced to the right earlobe. We show that SEAP elicits a 500 Hz frequency-following response (FFR; generated in inferior colliculus), 80 (subcortical) and 40 (primary auditory cortex) Hz auditory steady-state responses (ASSRs), mismatch negativity (MMN) and P3a (when there is an occasional change in carrier frequency; secondary auditory cortex) in addition to the obligatory N1-P2 complex (secondary auditory cortex). Analyses showed that subcortical and cortical processes are linked as (i) the latency of the FFR predicts the phase delay of the 40 Hz steady-state response, (ii) the phase delays of the 40 and 80 Hz steady-state responses are correlated, and (iii) the fidelity of the FFR predicts the latency of the N1 component. The SEAP method offers a new approach for measuring the dynamic encoding of acoustic features at multiple levels of the auditory pathway. As such, SEAP is a promising tool with which to study how
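The composite SEAP stimulus described above can be sketched as follows. The sampling rate and the exact modulator construction (two equal-amplitude sinusoidal components summed to 100% total depth) are assumptions for illustration, not the authors' published signal definition:

```python
import numpy as np

fs = 32000                         # sampling rate in Hz (an assumption)
t = np.arange(fs) / fs             # 1 s of signal

carrier_hz = 500                   # standard carrier; 600 Hz on ~15% of trials
# Modulator combining the 37 Hz and 81 Hz components (equal split assumed):
mod = 0.5 * (np.sin(2 * np.pi * 37 * t) + np.sin(2 * np.pi * 81 * t))
# Amplitude-modulated tone: carrier energy plus sidebands at
# carrier +/- 37 Hz and carrier +/- 81 Hz.
stim = (1.0 + mod) * np.sin(2 * np.pi * carrier_hz * t)
```

The spectrum of such a stimulus contains the carrier (driving the FFR) and sideband pairs at the two modulation rates (driving the 37/81 Hz steady-state responses), which is what lets one recording yield responses from several stations at once.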
鄢慧琴; 王海涛; 周枫; 黄利芬
Objective: To compare auditory brainstem response (ABR) and auditory steady-state response (ASSR) thresholds in normal guinea pigs, providing a theoretical basis for audiological research in this species. Methods: Twelve guinea pigs (24 ears) with normal hearing, sedated with pentobarbital sodium, underwent ABR and ASSR testing. Click-ABR stimuli were presented at a rate of 11.1/s, and the wave II threshold was recorded as the ABR response threshold. The ASSR carrier frequencies (CF) were 0.5, 1, 2, 3, 4, and 6 kHz with a modulation frequency (MF) of 154 Hz, and the threshold at each carrier frequency was recorded. Results: In normal guinea pigs, ASSR thresholds were higher than ABR thresholds at CFs of 0.5-4 kHz (P < 0.01), but no significant difference was found at the 6 kHz CF (P > 0.05). Conclusion: ABR and ASSR thresholds differ significantly in normal guinea pigs at most carrier frequencies, but not at 6 kHz. Differences arising from the ASSR carrier frequency must therefore be considered when assessing the hearing of guinea pigs.
Johan Källstrand,1 Tommy Lewander,2 Eva Baghdassarian,2,3 Sören Nielzén4 1SensoDetect AB, Lund, 2Department of Neuroscience, Medical Faculty, Uppsala University, 3Department of Psychiatry, Uppsala University Hospital, Uppsala, 4Department of Psychiatry, Medical Faculty, University of Lund, Lund, Sweden Abstract: The auditory brain-stem response (ABR) waveform comprises a set of waves (labeled I–VII) recorded with scalp electrodes over 10 ms after an auditory stimulation with a brief click sound. Quite often, the waves are fused (confluent) and the baseline irregular and sloped, making wave latencies and wave amplitudes difficult to establish. In the present paper, we describe a method, labeled moving-minimum subtraction, based on digitization of the analog ABR waveform (154 data points/ms), to achieve alignment of the ABR response to a straight baseline, often with clear baseline separation of waves and resolution of fused waves. Application of the new method to groups of patients showed marked differences in ABR waveforms between patients with schizophrenia versus patients with adult attention deficit/hyperactivity disorder versus healthy controls. The findings show promise for identifying ABR markers to be used as biomarkers supporting clinical diagnoses of these and other neuropsychiatric disorders. Keywords: auditory brain-stem response, digitization, moving-minimum subtraction method, baseline alignment, schizophrenia, ADHD
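A minimal sketch of the baseline-alignment idea behind moving-minimum subtraction is shown below. The window length, centering, and edge handling are assumptions for illustration, not the authors' published parameters:

```python
import numpy as np

def moving_minimum_subtract(x, win):
    """Align a waveform to a straight baseline by subtracting, at each
    sample, the minimum over a sliding window centered on that sample.
    Window length and edge handling are illustrative assumptions."""
    n = len(x)
    half = win // 2
    baseline = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        baseline[i] = x[lo:hi].min()
    return x - baseline

# A sloped ramp with two superimposed bumps: subtracting the moving
# minimum removes the slope, leaving the bumps standing clear of a
# nearly flat baseline.
t = np.arange(200)
sloped = 0.05 * t
sloped[50] += 1.0
sloped[120] += 1.0
flat = moving_minimum_subtract(sloped, win=21)
```

The same operation applied to a fused, sloping ABR trace would pull the waveform down to a common baseline so that individual wave peaks separate.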
Fobel, Oliver; Dau, Torsten
functions fitted to tone-burst-evoked ABR wave-V data over a wide range of stimulus levels and frequencies [Neely et al., J. Acoust. Soc. Am. 83(2), 652–656 (1988)]. In this case, a set of level-dependent chirps was generated. The chirp-evoked responses, particularly wave-V amplitude and latency, were… compared to click responses and to responses obtained with the original chirp as defined in Dau et al. [J. Acoust. Soc. Am. 107(3), 1530–1540 (2000)], referred to here as the M-chirp since it is based on a (linear) cochlea model. The main hypothesis was that, at low and medium stimulation levels, the O…- and A-chirps might produce a larger response than the original M-chirp whose parameters were essentially derived from high-level BM data. The main results of the present study are as follows: (i) All chirps evoked a larger wave-V amplitude than the click stimulus indicating that for the chirps a broader…
Poulsen, Catherine; Picton, Terence W.; Paus, Tomas
Maturational changes in the capacity to process quickly the temporal envelope of sound have been linked to language abilities in typically developing individuals. As part of a longitudinal study of brain maturation and cognitive development during adolescence, we employed dense-array EEG and spatiotemporal source analysis to characterize…
Ozdek, Ali; Karacay, Mahmut; Saylam, Guleser; Tatar, Emel; Aygener, Nurdan; Korkmaz, Mehmet Hakan
The objective of this study is to compare pure tone audiometry and auditory steady-state response (ASSR) thresholds in normal hearing (NH) subjects and subjects with hearing loss. This study involved 23 NH adults and 38 adults with hearing loss (HI). After determination of behavioral thresholds (BHT) with pure tone audiometry, each subject was tested for ASSR responses on the same day. Only one ear was tested for each subject. The mean pure tone average was 9 ± 4 dB for the NH group and 57 ± 14 dB for the HI group. There was a very strong correlation between BHT and ASSR measurements in the HI group; however, the correlation was weaker in the NH group. The mean differences between the pure tone average of four frequencies (0.5, 1, 2, and 4 kHz) and the ASSR threshold average of the same frequencies were 13 ± 6 dB in the NH group and 7 ± 5 dB in the HI group, and the difference was significant (P = 0.01). It was found that 86% of threshold difference values were less than 20 dB in the NH group and 92% were less than 20 dB in the HI group. In conclusion, ASSR thresholds can be used to predict the configuration of pure tone audiometry; results are more accurate in the HI group than in the NH group. Although ASSR can be used in the cochlear implant decision-making process, the findings do not permit use of the test for medico-legal purposes.
Ramamurthy, Deepa L; Recanzone, Gregg H
The mammalian auditory cortex is necessary for spectral and spatial processing of acoustic stimuli. Most physiological studies of single neurons in the auditory cortex have focused on the onset and sustained portions of evoked responses, but there have been far fewer studies on the relationship between onset and offset responses. In the current study, we compared spectral and spatial tuning of onset and offset responses of neurons in primary auditory cortex (A1) and the caudolateral (CL) belt area of awake macaque monkeys. Several different metrics were used to determine the relationship between onset and offset response profiles in both frequency and space domains. In the frequency domain, a substantial proportion of neurons in A1 and CL displayed highly dissimilar best stimuli for onset- and offset-evoked responses, though even for these neurons, there was usually a large overlap in the range of frequencies that elicited onset and offset responses and distributions of tuning overlap metrics were mostly unimodal. In the spatial domain, the vast majority of neurons displayed very similar best locations for onset- and offset-evoked responses, along with unimodal distributions of all tuning overlap metrics considered. Finally, for both spectral and spatial tuning, a slightly larger fraction of neurons in A1 displayed non-overlapping onset and offset response profiles, relative to CL, which supports hierarchical differences in the processing of sounds in the two areas. However, these differences are small compared to differences in proportions of simple cells (low overlap) and complex cells (high overlap) in primary and secondary visual areas.
Sersen, E A; Majkowski, J; Clausen, J; Heaney, G M
BAERs from 16 subjects during 3 sessions varied in the latency or amplitude of some components depending upon level of arousal as indicated by EEG patterns. There was a general tendency for activation to produce the fastest responses with the largest amplitudes and for drowsiness to produce the slowest responses with the smallest amplitudes. The latency of P2 was significantly prolonged during drowsiness, relative to those during relaxation or activation. For right-ear stimulation, P5 latency was longest during drowsiness, and shortest during activation while for left-ear stimulation the shortest latency occurred during relaxation. The amplitudes of Wave II and Wave VII were significantly smaller during drowsiness than during activation. Although the differences were below the level of clinical significance, the data indicate a modification in the characteristics of brainstem transmission as a function of concurrent activity in other brain areas.
Comparative Study between the use of Melatonin and A Solution with Melatonin, Tryptophan, and Vitamin B6 as an Inducer of Spontaneous Sleep in Children During an Auditory Response Test: An Alternative to Commonly Used Sedative Drugs.
Volpe, Antonio Della; Lucia, Antonietta De; Pirozzi, Clementina; Pastore, Vincenzo
Early diagnosis of deafness in children under the age of 4-5 years is performed electively using auditory brainstem responses (ABRs), a class of auditory evoked potentials. In pediatric patients, the major difficulty is that the examination must be performed during spontaneous sleep, which is complicated to obtain, especially in the age range of 12 to 72 months. Recently, melatonin has been used as a "sleep inducer" in diagnostic tests with positive results. Our aim was to evaluate the use of melatonin, and of a solution containing melatonin, tryptophan, and vitamin B6, as an inducer of spontaneous sleep for repeated ABR analyses, and to evaluate the reduction in analyses requiring sedative drugs in uncooperative patients. In total, 748 children aged between 12 and 48 months were included in the study and divided into three groups: A: placebo (n=235), B: melatonin (n=246), and C: melatonin, tryptophan, and vitamin B6 (n=267). In groups B and C, in addition to physiological awakening, we observed a significant reduction in the number of repeated analyses as well as in sedative drug usage. This study confirms the strategic role of melatonin as an inducer of spontaneous sleep; above all, it suggests that the administration of a solution containing melatonin, tryptophan, and vitamin B6 significantly reduces the number of repeated ABR examinations, as well as the percentage of repeated analyses performed using sedative drugs, compared to both the control group and the melatonin-only group.
Background and Aim: Among auditory assessment tools, the auditory steady-state response (ASSR) is a modern test. The modulation frequency for this test is usually 80 Hz. The purpose of this study was to examine adult subjects with 40 Hz and 80 Hz ASSR and compare the results. Materials and Methods: Thirty adults (60 ears) were evaluated with ASSR and pure tone audiometry (PTA). Results were divided into three groups: normal hearing, mild, and moderate sensorineural hearing loss. Results: In all groups, 40 Hz ASSR thresholds were closer to behavioral thresholds than 80 Hz ASSR thresholds (p<0.05); moreover, the more severe the hearing loss, the smaller the difference between the two thresholds. Correlation coefficients were also higher for 40 Hz ASSR (p<0.05). Conclusion: Thresholds obtained with a 40 Hz modulation frequency are more likely to be close to behavioral thresholds and give better results than those obtained at 80 Hz.
The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on a previous report of an asynchrony of P1m in children with ADHD, to clarify whether P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during the 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral brain auditory processing system is associated with ADHD-like symptoms in children with ASD.
BACKGROUND: In auditory fear conditioning, repeated presentation of the tone in the absence of shock leads to extinction of the acquired fear responses. The glutamate N-methyl-D-aspartate receptor (NMDAR) is thought to be involved in the extinction of conditioned fear responses, but its detailed role in initiating and consolidating or maintaining the fear extinction memory is unclear. Here we investigated this issue using an NMDAR antagonist, MK-801. METHODS/MAIN FINDINGS: The effects of immediate (beginning 10 min after conditioning) and delayed (beginning 24 h after conditioning) extinction were first compared, with the finding that delayed extinction caused a stronger and longer-lasting (still significant on the 20th day after extinction) depression of the conditioned fear responses. In a second experiment, MK-801 was injected intraperitoneally (i.p.) 40 min before, 4 h after, or 12 h after the delayed extinction, corresponding to critical time points for initiating, consolidating, or maintaining the fear extinction memory. Injection of MK-801 either 40 min before or 4 h after delayed extinction impaired the initiation and consolidation of fear extinction memory, causing a long-lasting increase in freezing score that was still significant on the 7th day after extinction, compared with the extinction group. However, MK-801 administered 12 h after the delayed extinction, when robust consolidation had occurred and stabilized, did not affect the established extinction memory. Furthermore, the changed freezing behaviors were not due to an alteration in general anxiety levels, since MK-801 treatment had no effect on the percentage of open-arm time or open-arm entries in an Elevated Plus Maze (EPM) task. CONCLUSIONS/SIGNIFICANCE: Our data suggest that activation of NMDARs plays an important role in the initiation and consolidation, but not the maintenance, of fear extinction memory. Together with the fact that the NMDA receptor is
Keller, C H; Takahashi, T T
Summing localization describes the perceptions of human listeners to two identical sounds from different locations presented with delays of 0-1 msec. Usually a single source is perceived to be located between the two actual source locations, biased toward the earlier source. We studied neuronal responses within the space map of the barn owl to sounds presented with this same paradigm. The owl's primary cue for localization along the azimuth, interaural time difference (ITD), is based on a cross-correlation-like treatment of the signals arriving at each ear. The output of this cross-correlation is displayed as neural activity across the auditory space map in the external nucleus of the owl's inferior colliculus. Because the ear input signals reflect the physical summing of the signals generated by each speaker, we first recorded the sounds at each ear and computed their cross-correlations at various interstimulus delays. The resulting binaural cross-correlation surface strongly resembles the pattern of activity across the space map inferred from recordings of single space-specific neurons. Four peaks are observed in the cross-correlation surface for any nonzero delay. One peak occurs at the correlation delay equal to the ITD of each speaker. Two additional peaks reflect "phantom sources" occurring at correlation delays that match the signal of the left speaker in one ear with the signal of the right speaker in the other ear. At zero delay, the two phantom peaks coincide. The surface features are complicated further by the interactions of the various correlation peaks.
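The four-peak structure of the binaural cross-correlation surface can be reproduced with a toy simulation. The sample-domain delays, per-speaker ITDs, and noise token below are illustrative assumptions, not the study's stimulus parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(2048)        # noise token shared by both speakers

def place(x, start, total):
    """Zero-padded copy of x beginning at sample `start`."""
    out = np.zeros(total)
    out[start:start + len(x)] = x
    return out

total = 4096
d = 20                               # interstimulus (lead/lag) delay, samples
itd1, itd2 = 5, -5                   # per-speaker ITDs, samples (illustrative)

# Each ear physically sums both speakers; speaker 2 lags speaker 1 by d.
left = place(s, 100, total) + place(s, 100 + d, total)
right = place(s, 100 + itd1, total) + place(s, 100 + d + itd2, total)

# Interaural cross-correlation: two "true" peaks at the speaker ITDs
# and two "phantom" peaks pairing one speaker's signal in one ear with
# the other speaker's signal in the other ear (offset by the delay d).
xcorr = np.correlate(left, right, mode='full')
lags = np.arange(-(total - 1), total)
peak_lags = sorted(lags[np.argsort(xcorr)[-4:]])
```

With these numbers the four largest peaks fall at lags -15, -5, 5, and 15 samples: the two speaker ITDs plus the two phantom-source lags, mirroring the pattern of activity across the owl's space map.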
Burkard, R.; Jones, S.; Jones, T.
Rate-dependent changes in the chick brain-stem auditory evoked response (BAER) using conventional averaging and a cross-correlation technique were investigated. Five 15- to 19-day-old white leghorn chicks were anesthetized with Chloropent. In each chick, the left ear was acoustically stimulated. Electrical pulses of 0.1-ms duration were shaped, attenuated, and passed through a current driver to an Etymotic ER-2 which was sealed in the ear canal. Electrical activity from stainless-steel electrodes was amplified, filtered (300-3000 Hz) and digitized at 20 kHz. Click levels included 70 and 90 dB peSPL. In each animal, conventional BAERs were obtained at rates ranging from 5 to 90 Hz. BAERs were also obtained using a cross-correlation technique involving pseudorandom pulse sequences called maximum length sequences (MLSs). The minimum time between pulses, called the minimum pulse interval (MPI), ranged from 0.5 to 6 ms. Two BAERs were obtained for each condition. Dependent variables included the latency and amplitude of the cochlear microphonic (CM), wave 2 and wave 3. BAERs were observed in all chicks, for all level by rate combinations for both conventional and MLS BAERs. There was no effect of click level or rate on the latency of the CM. The latency of waves 2 and 3 increased with decreasing click level and increasing rate. CM amplitude decreased with decreasing click level, but was not influenced by click rate for the 70 dB peSPL condition. For the 90 dB peSPL click, CM amplitude was uninfluenced by click rate for conventional averaging. For MLS BAERs, CM amplitude was similar to conventional averaging for longer MPIs.(ABSTRACT TRUNCATED AT 250 WORDS).
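The MLS cross-correlation technique relies on the near-impulsive circular autocorrelation of a maximum length sequence. A minimal sketch follows, using a 5-bit register (period 31) with taps chosen for illustration rather than the study's actual sequence:

```python
import numpy as np

def mls(n_bits, taps):
    """Binary maximum length sequence from a Fibonacci LFSR.
    `taps` are 1-indexed register positions XORed into the feedback."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])           # output the oldest bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]       # shift in the feedback bit
    return np.array(seq)

# Taps (5, 3) give a primitive feedback polynomial, hence the maximal
# period of 2**5 - 1 = 31.
m = mls(5, (5, 3))
s = 2 * m - 1                           # map {0, 1} -> {-1, +1}

# Circular autocorrelation: N at lag 0 and -1 at every other lag.
# This near-delta property is what lets cross-correlating the recorded
# EEG with the pulse sequence unwrap the overlapping evoked responses.
N = len(s)
acorr = np.array([np.dot(s, np.roll(s, k)) for k in range(N)])
```

In practice a much longer sequence would drive the click train, and the same cross-correlation recovers a BAER even when the minimum pulse interval is far shorter than the response itself.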
Shiomi, M.; Ookuni, H.; Sugita, T.
A family in which 5 males in successive generations were clinically suspected to be affected with the classical X-linked recessive form of Pelizaeus-Merzbacher disease (PMD) is presented. Two brothers and their maternal uncle were examined by one of the authors (MS). In two brothers, aged 3 years and 2 years, the disease became obvious within a month after birth with nystagmus and head tremor. Head control and sitting were achieved at the age of 18 months at which time they began to speak. They could not stand nor walk without support. They had dysmetria, weakness and hyper-reflexia of lower extremities, and mild mental retardation. Their maternal uncle, aged 37 years, showed psychomotor retardation from birth and subsequently developed spastic paraplegia. He had been able to walk with crutches until adolescence. He had dysmetria, scanning speech, athetoid posture of fingers and significant intellectual deficits. Auditory brainstem response in both brothers revealed well defined waves I and II, low amplitude wave III and an absence of all subsequent components. CT demonstrated mild cerebral atrophy in the elder brother and was normal in the younger brother, but in their uncle, CT showed atrophy of the brainstem, cerebellum and cerebrum, and low density of the white matter of the centrum semiovale. MRI was performed in both brothers. Although the brainstem, the internal capsule and the thalamus were myelinated, the myelination in the subcortical white matter was restricted to periventricular regions on IR sequence scans. On SE sequence, the subcortical white matter was imaged as a brighter area than the cerebral cortex. These results demonstrate that the degree of myelination in these patients was roughly equal to that of 3-to 6-month old infants.
Bram Van Dun
Background: Cortical auditory evoked potentials (CAEPs) are an emerging tool for hearing aid fitting evaluation in young children who cannot provide reliable behavioral feedback. It is therefore useful to determine the relationship between the sensation level of speech sounds and the detection sensitivity of CAEPs.
Design and methods: Twenty-five sensorineurally hearing impaired infants with an age range of 8 to 30 months were tested once, 18 aided and 7 unaided. First, behavioral thresholds of speech stimuli /m/, /g/, and /t/ were determined using visual reinforcement orientation audiometry (VROA. Afterwards, the same speech stimuli were presented at 55, 65, and 75 dB SPL, and CAEP recordings were made. An automatic statistical detection paradigm was used for CAEP detection.
Results: For sensation levels above 0, 10, and 20 dB respectively, detection sensitivities were equal to 72 ± 10, 75 ± 10, and 78 ± 12%. In 79% of the cases, automatic detection p-values became smaller when the sensation level was increased by 10 dB.
Conclusions: The results of this study suggest that the presence or absence of CAEPs can provide some indication of the audibility of a speech sound for infants with sensorineural hearing loss. The detection of a CAEP provides confidence, to a degree commensurate with the detection probability, that the infant is detecting that sound at the level presented. When testing infants where the audibility of speech sounds has not been established behaviorally, the lack of a cortical response indicates the possibility, but by no means a certainty, that the sensation level is 10 dB or less.
Mehraei, Golbarg; Paredes Gallardo, Andreu; Shinn-Cunningham, Barbara G.
In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels…-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low… behaviorally. Further, the amount of wave-V latency change with masker-to-probe interval was positively correlated with the rate of change in forward masking detection thresholds. Although we cannot rule out central contributions, these findings are consistent with the hypothesis that auditory nerve fiber
Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram
Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, that is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations of auditory responses falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, i.e. from simple detection to discrimination, and categorization. This hierarchy of classifications determine the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of learning task and lead to a recruitment of
Recently, a growing number of studies have investigated the cues used by children to selectively accept testimony. In parallel, several studies with adults have shown that the fluency with which information is provided influences message evaluation: adults evaluate fluent information as more credible than dysfluent information. It is therefore plausible that the fluency of a message could also influence children's endorsement of statements. Two experiments were designed to test this hypothesis with 3- to 5-year-olds, in which the auditory fluency of a message was manipulated by adding different levels of noise to recorded statements. The results show that 4- and 5-year-old children, but not 3-year-olds, are more likely to endorse a fluent statement than a dysfluent one. The present study constitutes a first attempt to show that fluency, i.e., ease of processing, is recruited as a cue to guide epistemic decisions in children. An interpretation of the age difference based on the way cues are processed by younger children is suggested.
Pidgeon, Nick; Corner, Adam; Parkhill, Karen; Spence, Alexa; Butler, Catherine; Poortinga, Wouter
Proposals for geoengineering the Earth's climate are prime examples of emerging or 'upstream' technologies, because many aspects of their effectiveness, cost and risks are yet to be researched, and in many cases are highly uncertain. This paper contributes to the emerging debate about the social acceptability of geoengineering technologies by presenting preliminary evidence on public responses to geoengineering from two of the very first UK studies of public perceptions and responses. The discussion draws upon two datasets: qualitative data (from an interview study conducted in 42 households in 2009), and quantitative data (from a subsequent nationwide survey (n=1822) of British public opinion). Unsurprisingly, baseline awareness of geoengineering was extremely low in both cases. The data from the survey indicate that, when briefly explained to people, carbon dioxide removal approaches were preferred to solar radiation management, while significant positive correlations were also found between concern about climate change and support for different geoengineering approaches. We discuss some of the wider considerations that are likely to shape public perceptions of geoengineering as it enters the media and public sphere, and conclude that, aside from technical considerations, public perceptions are likely to prove a key element influencing the debate over questions of the acceptability of geoengineering proposals.
Parmiani, Giorgio; Maccalli, Cristina
Early events responsible for tumor growth in patients with a normal immune system are poorly understood. Here, we discuss, in the context of human melanoma, the Prehn hypothesis, according to which a weak antitumor immune response may be required for tumor growth before weakly or non-immunogenic tumor cell subpopulations are selected by the immune system.
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Möttönen, Riikka; van de Ven, Gido M; Watkins, Kate E
The earliest stages of cortical processing of speech sounds take place in the auditory cortex. Transcranial magnetic stimulation (TMS) studies have provided evidence that the human articulatory motor cortex contributes also to speech processing. For example, stimulation of the motor lip representation influences specifically discrimination of lip-articulated speech sounds. However, the timing of the neural mechanisms underlying these articulator-specific motor contributions to speech processing is unknown. Furthermore, it is unclear whether they depend on attention. Here, we used magnetoencephalography and TMS to investigate the effect of attention on specificity and timing of interactions between the auditory and motor cortex during processing of speech sounds. We found that TMS-induced disruption of the motor lip representation modulated specifically the early auditory-cortex responses to lip-articulated speech sounds when they were attended. These articulator-specific modulations were left-lateralized and remarkably early, occurring 60-100 ms after sound onset. When speech sounds were ignored, the effect of this motor disruption on auditory-cortex responses was nonspecific and bilateral, and it started later, 170 ms after sound onset. The findings indicate that articulatory motor cortex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks and when the sounds are not in the focus of attention. Importantly, the findings also show that attention can selectively facilitate the interaction of the auditory cortex with specific articulator representations during speech processing.
The role of early auditory processing may be to extract elementary features from an acoustic mixture in order to organize the auditory scene. To accomplish this task, the central auditory system may rely on the fact that sensory objects are often composed of spectral edges, i.e., regions where the stimulus energy changes abruptly over frequency. The processing of acoustic stimuli may benefit from a mechanism enhancing the internal representation of spectral edges. While the visual system is thought to rely heavily on a related mechanism (enhancing spatial edges), it is still unclear whether such a process plays a significant role in audition. We investigated the cortical representation of spectral edges, using acoustic stimuli composed of multi-tone pips whose time-averaged spectral envelope contained suppressed or enhanced regions. Importantly, the stimuli were designed such that neural response properties could be assessed as a function of stimulus frequency during stimulus presentation. Our results suggest that the representation of acoustic spectral edges is enhanced in the auditory cortex, and that this enhancement is sensitive to the characteristics of the spectral contrast profile, such as depth, sharpness, and width. Spectral edges are maximally enhanced for sharp contrast and large depth. Cortical activity was also suppressed at frequencies within the suppressed region. Of note, the suppression of firing was larger at frequencies near the lower edge of the suppressed region than at the upper edge. Overall, the present study gives critical insights into the processing of spectral contrasts in the auditory system.
Huckins, Sean C.; Turner, Christopher W.; Doherty, Karen A.; Fonte, Michael M.; Szeverenyi, Nikolaus M.
This study examined the feasibility of using functional magnetic resonance imaging (fMRI) in auditory research by testing the reliability of scanning parameters using high resolution and high signal-to-noise ratios. Findings indicated reproducibility within and across listeners for consonant-vowel speech stimuli and reproducible results within and…
Verhaegen, V.J.O.; Mulder, J.J.S.; Noten, J.F.P.; Luijten, B.M.A.; Cremers, C.W.R.J.; Snik, A.F.M.
OBJECTIVE: To optimize intraoperatively the coupling of the floating mass transducer (FMT) of the Vibrant Soundbridge middle ear implant to the round or oval cochlear window in patients with mixed hearing loss. STUDY DESIGN: Intraoperative measurement of objective hearing thresholds using auditory s
Larsen, Kit Melissa; Pellegrino, Giovanni; Birknow, Michelle Rosgaard
carriers (ρ = -0.487, P = .041). Nonpsychotic 22q11.2 deletion carriers lack efficient phase locking of evoked gamma activity to regular 40 Hz auditory stimulation. This abnormality indicates a dysfunction of fast intracortical oscillatory processing in the gamma-band. Since ASSR was attenuated...
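The 40 Hz phase-locking deficit described in this fragment is conventionally quantified as inter-trial phase coherence (ITC) of the auditory steady-state response. Below is a minimal, hypothetical sketch of that computation on simulated single-channel trials; the sampling rate, trial construction, and function names are illustrative assumptions, not details from the study.

```python
import cmath
import math

def itc(trials, freq, fs):
    """Inter-trial phase coherence at one frequency.

    For each trial, a single-bin DFT extracts the phase at `freq`;
    ITC is the magnitude of the mean unit phasor across trials
    (1 = perfect phase locking, ~0 = random phases).
    """
    phasors = []
    for x in trials:
        dft_bin = sum(x[n] * cmath.exp(-2j * math.pi * freq * n / fs)
                      for n in range(len(x)))
        phasors.append(dft_bin / abs(dft_bin))
    return abs(sum(phasors) / len(phasors))

fs, freq, n = 1000, 40.0, 1000  # 1 s of data at 1 kHz (assumed values)
# Perfectly locked trials: identical 40 Hz sinusoids
locked = [[math.sin(2 * math.pi * freq * t / fs) for t in range(n)]
          for _ in range(4)]
# Unlocked trials: phases spread evenly around the circle
unlocked = [[math.sin(2 * math.pi * freq * t / fs + p) for t in range(n)]
            for p in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
```

With perfectly phase-locked trials the ITC approaches 1; with phases spread evenly around the circle it approaches 0, which is the direction of the attenuation described for the deletion carriers.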
Abstract Background While dengue-elicited early and transient host responses preceding defervescence could shape the disease outcome and reveal mechanisms of the disease pathogenesis, assessment of these responses is difficult, as patients rarely seek healthcare during the first days of benign fever and thus data are lacking. Methods In this study, focusing on early recruitment, we performed whole-blood transcriptional profiling on dengue virus PCR-positive patients sampled within 72 h of self-reported fever presentation (average 43 h, SD 18.6 h) and compared the signatures with autologous samples drawn at defervescence and convalescence and with control patients with fever of other etiology. Results In the early dengue fever phase, strong activation of innate-immune-response-related genes was seen that was absent at defervescence (4-7 days after fever debut), while at this second sampling genes related to biosynthesis and metabolism dominated. Transcripts relating to the adaptive immune response were over-expressed at the second sampling point, with sustained activation at the third sampling. On an individual gene level, transcripts significantly enriched early in dengue disease included the chemokines CCL2 (MCP-1), CCL8 (MCP-2), CXCL10 (IP-10), and CCL3 (MIP-1α), the antimicrobial peptide β-defensin 1 (DEFB1), the desmosome/intermediate junction component plakoglobin (JUP), and a microRNA which may negatively regulate pro-inflammatory cytokines in dengue-infected peripheral blood cells, miR-147 (NMES1). Conclusions These data show that the early response in patients mimics that previously described in vitro, where early assessment of transcriptional responses has been easily obtained. Several of the early transcripts identified may be affected by or mediate the pathogenesis and deserve further assessment at this timepoint in correlation with severe disease.
Perälä, Mia-Maria; Kajantie, Eero; Valsta, Liisa M
Strong epidemiological evidence suggests that slow prenatal or postnatal growth is associated with an increased risk of CVD and other metabolic diseases. However, little is known about whether early growth affects postprandial metabolism and, especially, the appetite regulatory hormone system. Therefore, we investigated the impact of early growth on postprandial appetite regulatory hormone responses to two high-protein and two high-fat content meals. Healthy, 65-75-year-old volunteers from the Helsinki Birth Cohort Study were recruited; twelve with a slow increase in BMI during the first year of life […] early growth may have a role in programming appetite regulatory hormone secretion in later life. Slow early growth is also associated with higher postprandial insulin and TAG responses but not with incretin levels.
Gonzalez-Heydrich, Joseph; Enlow, Michelle Bosquet; D’Angelo, Eugene; Seidman B, Larry J.; Gumlak, Sarah; Kim, April; Woodberry, Kristen A.; Rober, Ashley; Tembulkar, Sahil; Graber, Kelsey; O’Donnell, Kyle; Hamoda, Hesham M.; Kimball, Kara; Rotenberg, Alexander; Oberman, Lindsay M.; Pascual-Leone, Alvaro; Keshavan, Matcheri S.; Duffy, Frank H.
Background The N100 is a negative deflection in the surface EEG approximately 100 ms after an auditory signal. It has been shown to be reduced in individuals with schizophrenia and those at clinical high risk (CHR). N100 blunting may index neural network dysfunction underlying psychotic symptoms. This phenomenon has received little attention in pediatric populations. Method This cross-sectional study compared the N100 response, measured as the average EEG response at the left medial frontal position FC1 to 150 sinusoidal tones, in participants ages 5 to 17 years with a CHR syndrome (n = 29), a psychotic disorder (n = 22), or healthy controls (n = 17). Results Linear regression analyses that considered potential covariates (age, gender, handedness, family mental health history, medication usage) revealed decreasing N100 amplitude with increasing severity of psychotic symptomatology from healthy to CHR to psychotic level. Conclusions Longitudinal assessment of the N100 in CHR children who do and do not develop psychosis will inform whether it predicts transition to psychosis and whether its response to treatment predicts symptom change. PMID:26549629
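The N100 measure described above amounts to averaging stimulus-locked EEG epochs into an event-related potential (ERP) and taking the most negative deflection near 100 ms. A minimal sketch on simulated data follows; the epoch length, search window, and component shape are illustrative assumptions rather than the study's exact parameters.

```python
import math

def n100(epochs, fs):
    """Average epochs into an ERP and return (amplitude, latency_ms) of the
    most negative deflection in an 80-120 ms post-stimulus window."""
    n = len(epochs[0])
    erp = [sum(trial[i] for trial in epochs) / len(epochs) for i in range(n)]
    lo, hi = int(0.080 * fs), int(0.120 * fs)
    window = erp[lo:hi]
    amp = min(window)  # the N100 is a negative peak
    return amp, (lo + window.index(amp)) * 1000.0 / fs

fs = 1000  # assumed 1 kHz sampling

def trial(noise_sign):
    """One simulated 300 ms epoch: a trial-varying offset that cancels in
    the average, plus an N100-like negative Gaussian peaking at 100 ms."""
    return [0.5 * noise_sign
            - math.exp(-((i / fs - 0.100) ** 2) / (2 * 0.010 ** 2))
            for i in range(int(0.300 * fs))]

amp, lat = n100([trial(+1), trial(-1)], fs)
```

Averaging across trials cancels activity that is not time-locked to the tone, leaving the stereotyped negative deflection whose amplitude the study compares across groups.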
Okada, Kayoko; Venezia, Jonathan H; Matchin, William; Saberi, Kourosh; Hickok, Gregory
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex, and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.
Happel, Max F. K.; Ohl, Frank W.
Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062
Lenita da Silva Quevedo
The ototoxicity of organic solvents can affect the auditory system at both cochlear and retrocochlear levels. OBJECTIVE: To evaluate the neurophysiological integrity of the auditory system up to the brainstem, using ABR, in subjects exposed to fuels. METHOD: Prospective study of attendants from three gas stations in the city of Santa Maria/RS. The sample comprised 21 subjects, who were evaluated by auditory brainstem response. RESULTS: We found alterations in the absolute latencies of waves I and III and in all interpeak latencies in the right ear. In the left ear there were alterations in the absolute latencies of all waves and in all interpeak intervals. An alteration in the interaural difference of wave V was found in 19% of the subjects. In the group exposed for more than five years, the number of subjects with alterations was statistically significant: in the I-V interpeak interval of the right ear, and in the absolute latency of wave I and the III-V interpeak interval of the left ear. CONCLUSION: Exposure to fuels can cause alterations in the central auditory system.
Cochlear implants (CIs) are neural prostheses that have been used routinely in the clinic over the past 25 years. They allow children who were born profoundly deaf, as well as adults affected by hearing loss for whom conventional hearing aids are insufficient, to attain a functional level of hearing. The "modern" CI (i.e., a multi-electrode implant using sequential coding strategies) has yielded good speech comprehension outcomes (recognition level for monosyllabic words about 50% to 60%, and sentence comprehension close to 90%). These good average results, however, hide a very important interindividual variability, as scores in a given patient population often vary from 5% to 95% in comparable testing conditions. Our aim was to develop a prognostic model for patients with unilateral CI. A novel method of objectively measuring electrical and neuronal interactions using electrical auditory brainstem responses (eABRs) is proposed. The method consists of two measurements: (1) eABR measurements with stimulation by a single electrode at 70% of the dynamic range (four electrodes distributed within the cochlea were tested), followed by a summation of these four eABRs; (2) measurement of a single eABR with stimulation from all four electrodes at 70% of the dynamic range. A comparison of the eABRs obtained by these two measurements, defined as the monaural interaction component (MIC), indicated electrical and neural interactions between the stimulation channels. Speech recognition performance without lip reading was measured for each patient using a logatome test (64 "vowel-consonant-vowel" [VCV] items; forced choice of 1 out of 16). eABRs were measured in 16 CI patients (CIs with 20 electrodes, Digisonic SP; Oticon Medical ®, Vallauris, France). Significant correlations were found between speech recognition performance and the ratio of the amplitude of the V wave of the eABRs obtained with the two measurements (Pearson's linear regression model, parametric correlation: r
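The interaction measure described here rests on a simple comparison: sum the four single-electrode eABRs sample by sample, then take the ratio of wave-V amplitudes between the simultaneous-stimulation response and that sum. A hypothetical sketch on synthetic responses follows; the wave-V latency window, sampling rate, and all names are illustrative assumptions, not the study's parameters.

```python
def wave_v_amplitude(response, fs, window_ms=(5.5, 7.5)):
    """Peak amplitude inside an assumed wave-V latency window."""
    lo = int(window_ms[0] * fs / 1000)
    hi = int(window_ms[1] * fs / 1000)
    return max(response[lo:hi])

def interaction_ratio(single_eabrs, simultaneous_eabr, fs):
    """Ratio of the simultaneous-stimulation wave V to the wave V of the
    sample-by-sample sum of single-electrode eABRs; a ratio below 1
    suggests interaction between stimulation channels."""
    summed = [sum(samples) for samples in zip(*single_eabrs)]
    return (wave_v_amplitude(simultaneous_eabr, fs)
            / wave_v_amplitude(summed, fs))

fs = 10_000  # assumed 10 kHz sampling, 10 ms epochs

def synthetic_eabr(peak):
    resp = [0.0] * (10 * fs // 1000)
    resp[6 * fs // 1000] = peak  # wave V placed at 6 ms
    return resp

singles = [synthetic_eabr(1.0) for _ in range(4)]     # four electrodes
simultaneous = synthetic_eabr(3.0)                    # sub-linear summation
ratio = interaction_ratio(singles, simultaneous, fs)
```

In this synthetic case the simultaneous response (peak 3.0) falls short of the linear sum of its parts (peak 4.0), giving a ratio of 0.75, i.e., measurable channel interaction.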
Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M
Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
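The discrimination index d' reported above is the standard signal-detection measure: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch follows; the log-linear correction for extreme rates is a common convention and an assumption here, not necessarily the study's choice.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' for an old/new recognition task.

    A log-linear correction (add 0.5 to each cell) guards against
    infinite z-scores when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: 40/50 old sounds correctly called "old",
# 10/50 new sounds falsely called "old".
d = d_prime(40, 10, 10, 40)
```

Higher d' means better separation of repeated from new sounds; chance performance (hit rate equal to false-alarm rate) yields d' = 0.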
Botte, M C; Chocholle, R
The auditory-evoked responses were recorded in 5 subjects with vertex, right temporal, and left temporal electrodes simultaneously. Clicks at 30 dB sensation level were used as stimuli; one click was presented only to the right ear, or only to the left ear, or one click to the right ear and another to the left ear with a variable interaural time difference in this latter case (0-150 ms). The N-P amplitude variations and the N and P latency variations were studied and compared with those observed in the perceived lateralizations of the sound source.
Julia A Mossbridge
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
Franken, M.K.M.; Hagoort, P.; Acheson, D.J.
Models of speech production explain event-related suppression of the auditory cortical response as reflecting a comparison between auditory predictions and feedback. The present MEG study was designed to test two predictions from this framework: (1) whether the reduced auditory response varies as a
Lieslehto, Johannes; Kiviniemi, Vesa; Mäki, Pirjo; Koivukangas, Jenni; Nordström, Tanja; Miettunen, Jouko; Barnett, Jennifer H; Jones, Peter B; Murray, Graham K; Moilanen, Irma; Paus, Tomáš; Veijola, Juha
Early stressors play a key role in shaping interindividual differences in vulnerability to various psychopathologies, which according to the diathesis-stress model might relate to elevated glucocorticoid secretion and impaired responsiveness to stress. Furthermore, previous studies have shown that individuals exposed to early adversity have deficits in emotion processing from faces. This study aims to explore whether early adversities are associated with brain response to faces and whether this association relates to regional variations in mRNA expression of the glucocorticoid receptor gene (NR3C1). A total of 104 individuals drawn from the Northern Finland Birth Cohort 1986 participated in a face-task functional magnetic resonance imaging (fMRI) study. A large independent dataset (IMAGEN, N = 1739) was utilized for reducing the fMRI data-analytical space in the NFBC 1986 dataset. Early adversities were associated with deviant brain response to fearful faces (MANCOVA, P = 0.006) and with weaker performance in fearful facial expression recognition (P = 0.01). Glucocorticoid receptor gene expression (data from the Allen Human Brain Atlas) correlated with the degree of association between early adversities and brain response to fearful faces (R(2) = 0.25, P = 0.01) across different brain regions. Our results suggest that early adversities contribute to brain response to faces and that this association is mediated in part by the glucocorticoid system. Hum Brain Mapp 38:4470-4478, 2017. © 2017 Wiley Periodicals, Inc.
The sensory channel of presentation alters subjective ratings and autonomic responses toward disgusting stimuli – Blood pressure, heart rate and skin conductance in response to visual, auditory, haptic and olfactory presented disgusting stimuli
Croy, Ilona; Laqua, Kerstin; Süß, Frank; Joraschky, Peter; Ziemssen, Tjalf; Hummel, Thomas
Disgust causes specific reaction patterns, observable in mimic responses and body reactions. Most research on disgust deals with visual stimuli. However, pictures may cause a different disgust experience than sounds, odors, or tactile stimuli. Therefore, disgust experience evoked through four different sensory channels was compared. A total of 119 participants received three different disgusting stimuli and one control stimulus, each presented through the visual, auditory, tactile, and olfactory channels. Ratings ...
Maliheh Mazaher Yazdi
Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal, but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAEs but exhibit an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, as different methods could identify a discrepancy between an absent ABR and present hearing thresholds. Speech understanding difficulties are worse than can be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy in a child with normal hearing after bilirubinemia, with a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, the specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to the nerve itself. People with auditory neuropathy have OAE responses but an absent ABR, and hearing loss that can be permanent, worsen, or improve.
Scott, Brian H; Mishkin, Mortimer
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.
Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, the perception of a wrong stimulus or, more precisely, perception in the absence of a stimulus. Here we will discuss four definitions of hallucinations: 1. perceiving a stimulus without the presence of any subject; 2. hallucination proper, i.e., wrong perceptions that are not falsifications of real perception, although they manifest as a new subject and occur along with, and synchronously with, a real perception; 3. hallucination as an out-of-body perception which has no accordance with a real subject. In a stricter sense, hallucinations are defined as perceptions in a conscious and awake state in the absence of