WorldWideScience

Sample records for auditory event-related responses

  1. Auditory event-related responses to diphthongs in different attention conditions

    DEFF Research Database (Denmark)

    Morris, David Jackson; Steinmetzger, Kurt; Tøndering, John

    2016-01-01

    The modulation of auditory event-related potentials (ERP) by attention generally results in larger amplitudes when stimuli are attended. We measured the P1-N1-P2 acoustic change complex elicited with synthetic overt (second formant, F2 = 1000 Hz) and subtle (F2 = 100 Hz) diphthongs, while subjects … (i) attended to the auditory stimuli, (ii) ignored the auditory stimuli and watched a film, and (iii) diverted their attention to a visual discrimination task. Responses elicited by diphthongs where F2 values rose and fell were found to be different, and this precluded their combined analysis. … Multivariate analysis of ERP components from the rising F2 changes showed main effects of attention on P2 amplitude and latency, and N1-P2 amplitude. P2 amplitude decreased by 40% between the attend and ignore conditions, and by 60% between the attend and divert conditions. The effect of diphthong magnitude …
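    Several records in this listing quantify attention effects as changes in a component's peak amplitude and latency within a fixed time window (here, the 40% and 60% drops in P2). As an illustrative sketch only - not the authors' analysis pipeline, and using a purely synthetic waveform - a windowed peak measure can be written in a few lines of numpy:

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity=+1):
    """Amplitude and latency of the largest deflection of the given
    polarity (+1 positive, -1 negative) inside [t_min, t_max]."""
    mask = (times >= t_min) & (times <= t_max)
    idx = np.argmax(erp[mask] * polarity)
    return erp[mask][idx], times[mask][idx]

# Synthetic averaged waveform: a P2-like positivity peaking near 200 ms
times = np.linspace(-0.1, 0.5, 601)  # seconds, 1 ms sampling
erp = 2.0 * np.exp(-((times - 0.2) ** 2) / (2 * 0.02 ** 2))  # microvolts
p2_amp, p2_lat = peak_in_window(erp, times, 0.15, 0.25, polarity=+1)
```

    An attention contrast such as attend vs. ignore would then compare `p2_amp` (and `p2_lat`) between condition-wise averaged waveforms.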

  2. Event-related potentials to visual, auditory, and bimodal (combined auditory-visual) stimuli.

    Science.gov (United States)

    Isoğlu-Alkaç, Ummühan; Kedzior, Karina; Keskindemirci, Gonca; Ermutlu, Numan; Karamursel, Sacit

    2007-02-01

    The purpose of this study was to investigate the response properties of event-related potentials to unimodal and bimodal stimulation. The amplitudes of N1 and P2 were larger during bimodal evoked potentials (BEPs) than auditory evoked potentials (AEPs) at the anterior sites, and the amplitudes of P1 were larger during BEPs than visual evoked potentials (VEPs), especially at the parieto-occipital locations. Responses to bimodal stimulation had longer latencies than responses to unimodal stimulation. The N1 and P2 components were larger in amplitude and longer in latency during the bimodal paradigm and predominantly occurred at the anterior sites. Therefore, the current bimodal paradigm can be used to investigate the involvement and location of specific neural generators that contribute to higher processing of sensory information. Moreover, this paradigm may be a useful tool to investigate the level of sensory dysfunction in clinical samples.

  3. From sensation to percept: the neural signature of auditory event-related potentials.

    Science.gov (United States)

    Joos, Kathleen; Gilles, Annick; Van de Heyning, Paul; De Ridder, Dirk; Vanneste, Sven

    2014-05-01

    An external auditory stimulus induces an auditory sensation which may lead to a conscious auditory perception. Although the sensory aspect is well known, it remains unclear how an auditory stimulus results in an individual's conscious percept. To unravel the uncertainties concerning the neural correlates of a conscious auditory percept, event-related potentials may serve as a useful tool. In the current review we aim to shed light on the perceptual aspects of auditory processing and therefore focus mainly on the auditory late-latency responses. Moreover, there is increasing evidence that perception is an active process in which the brain searches for the information it expects to be present, suggesting that auditory perception requires both bottom-up (i.e. sensory) and top-down (i.e. prediction-driven) processing. Therefore, the auditory evoked potentials will be interpreted in the context of the Bayesian brain model, in which the brain predicts which information it expects and when this will happen. The internal representation of the auditory environment is verified by sensation samples of the environment (P50, N100). When incoming information violates the expectation, it induces the emission of a prediction error signal (Mismatch Negativity), activating higher-order neural networks and inducing an update of prior internal representations of the environment (P300). Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Intelligence and P3 Components of the Event-Related Potential Elicited during an Auditory Discrimination Task with Masking

    Science.gov (United States)

    De Pascalis, V.; Varriale, V.; Matteoli, A.

    2008-01-01

    The relationship between fluid intelligence (indexed by scores on Raven Progressive Matrices) and auditory discrimination ability was examined by recording event-related potentials from 48 women during the performance of an auditory oddball task with backward masking. High ability (HA) subjects exhibited shorter response times, greater response…

  5. A comparative study of event-related coupling patterns during an auditory oddball task in schizophrenia

    Science.gov (United States)

    Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto

    2015-02-01

    Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
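    Of the three coupling measures named in this record, the phase-locking value (PLV) is the easiest to state compactly: the magnitude of the trial-averaged unit phasor of the phase difference between two channels. The sketch below is a generic illustration, not the authors' implementation, and assumes instantaneous phases have already been extracted (in practice, e.g. by band-pass filtering and a Hilbert transform):

```python
import numpy as np

def plv(phases_a, phases_b):
    """Phase-locking value between two channels.
    phases_a, phases_b: (n_trials, n_samples) instantaneous phases (radians).
    Returns the per-sample PLV, a value in [0, 1]."""
    return np.abs(np.mean(np.exp(1j * (phases_a - phases_b)), axis=0))

rng = np.random.default_rng(0)
n_trials, n_samples = 50, 200
# Perfectly locked case: a constant phase lag across trials -> PLV = 1
locked_a = rng.uniform(0, 2 * np.pi, (n_trials, 1)) + np.zeros((1, n_samples))
locked_b = locked_a + 0.7
# Unlocked case: independent random phases -> PLV near 0
rand_a = rng.uniform(0, 2 * np.pi, (n_trials, n_samples))
rand_b = rng.uniform(0, 2 * np.pi, (n_trials, n_samples))
```

    A PLV of 1 means a perfectly consistent phase lag across trials; values near 0 indicate no phase coupling, so a baseline-to-response PLV change like the theta increase reported here can be read as a change in trial-to-trial phase consistency.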

  6. Differences between human auditory event-related potentials (AERPs) measured at 2 and 4 months after birth.

    Science.gov (United States)

    van den Heuvel, Marion I; Otte, Renée A; Braeken, Marijke A K A; Winkler, István; Kushnerenko, Elena; Van den Bergh, Bea R H

    2015-07-01

    Infant auditory event-related potentials (AERPs) show a series of marked changes during the first year of life. These AERP changes indicate important advances in early development. The current study examined AERP differences between 2- and 4-month-old infants. An auditory oddball paradigm was delivered to infants with a frequent repetitive tone and three rare auditory events: a shorter-than-regular inter-stimulus interval (ISI deviant), white noise segments, and environmental sounds. The results suggest that the N250 infantile AERP component emerges during this period in response to white noise but not to environmental sounds, possibly indicating a developmental step towards separating acoustic deviance from contextual novelty. The scalp distribution of the AERP response to both the white noise and the environmental sounds shifted towards frontal areas, and AERP peak latencies were overall shorter at 4 than at 2 months of age. These observations indicate improvements in the speed of sound processing and maturation of the frontal attentional network in infants during this period. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Classification of passive auditory event-related potentials using discriminant analysis and self-organizing feature maps.

    Science.gov (United States)

    Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M

    2000-01-01

    Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on the values. SOFM yielded better classification results than the DA methods. Subsequently, measures from another 37 subjects that were unknown to the trained SOFM were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
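    The record above trains discriminant analysis and self-organizing maps on 18 ERP amplitude parameters per subject. As a deliberately simplified stand-in - a nearest-centroid rule, which is what linear discriminant analysis reduces to under equal spherical covariances and equal priors, and not the authors' classifiers - group classification from amplitude vectors can be sketched on synthetic data:

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean of the feature vectors (ERP amplitude parameters)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each row of X to the class with the nearest centroid."""
    classes = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array([classes[i] for i in d.argmin(axis=0)])

rng = np.random.default_rng(1)
# Two well-separated synthetic groups in an 18-dimensional amplitude space
X0 = rng.normal(0.0, 1.0, (30, 18))
X1 = rng.normal(2.5, 1.0, (30, 18))
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)
model = fit_centroids(X, y)
acc = (predict(model, X) == y).mean()
```

    Real use would, as in the study, require validation on held-out subjects (their additional 37 unseen cases) rather than training-set accuracy.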

  8. Reduced object related negativity response indicates impaired auditory scene analysis in adults with autistic spectrum disorder

    Directory of Open Access Journals (Sweden)

    Veema Lodhia

    2014-02-01

    Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an Object Related Negativity (ORN) in the auditory event-related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.

  9. Nicotine enhances an auditory Event-Related Potential component which is inversely related to habituation.

    Science.gov (United States)

    Veltri, Theresa; Taroyan, Naira; Overton, Paul G

    2017-07-01

    Nicotine is a psychoactive substance that is commonly consumed in the context of music. However, the reason why music and nicotine are co-consumed is uncertain. One possibility is that nicotine affects cognitive processes relevant to aspects of music appreciation in a beneficial way. Here we investigated this possibility using Event-Related Potentials. Participants underwent a simple decision-making task (to maintain attentional focus), responses to which were signalled by auditory stimuli. Unlike previous research looking at the effects of nicotine on auditory processing, we used complex tones that varied in pitch, a fundamental element of music. In addition, unlike most other studies, we tested non-smoking subjects to avoid withdrawal-related complications. We found that nicotine (4.0 mg, administered as gum) increased P2 amplitude in the frontal region. Since a decrease in P2 amplitude and latency is related to habituation processes, and an enhanced ability to disengage from irrelevant stimuli, our findings suggest that nicotine may cause a reduction in habituation, resulting in non-smokers being less able to adapt to repeated stimuli. A corollary of that decrease in adaptation may be that nicotine extends the temporal window during which a listener is able and willing to engage with a piece of music.

  10. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence

    Science.gov (United States)

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages, as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem, as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of the FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR, eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection, and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool for clinical neuroscience, so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations in which the MMN was previously shown to be defective. PMID:26348628

  11. Evaluation of auditory perception development in neonates by event-related potential technique.

    Science.gov (United States)

    Zhang, Qinfen; Li, Hongxin; Zheng, Aibin; Dong, Xuan; Tu, Wenjuan

    2017-08-01

    To investigate auditory perception development in neonates and correlate it with days after birth, left and right hemisphere development, and sex, using the event-related potential (ERP) technique. Sixty full-term neonates, consisting of 32 males and 28 females, aged 2-28 days were included in this study. An auditory oddball paradigm was used to elicit ERPs. N2 wave latencies and areas were recorded at different days after birth to study the relationship between auditory perception and age, and to compare the left and right hemispheres and males and females. Average wave forms of ERPs in neonates progressed from relatively irregular flat-bottomed troughs to relatively regular steep-sided ripples. A good linear relationship between ERPs and days after birth in neonates was observed. As days after birth increased, N2 latencies gradually and significantly shortened, and N2 areas gradually and significantly increased (both P < 0.05). N2 areas … were significantly greater, and N2 latencies in the central part were significantly shorter, in the left hemisphere compared with the right, indicative of left hemisphere dominance (both P < 0.05). … In the days following birth, the auditory perception ability of neonates gradually increases. This occurs predominantly in the left hemisphere, with auditory perception ability appearing to develop earlier in female neonates than in males. ERP can be used as an objective index to evaluate auditory perception development in neonates. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  12. Auditory event-related potentials associated with perceptual reversals of bistable pitch motion.

    Science.gov (United States)

    Davidson, Gray D; Pitts, Michael A

    2014-01-01

    Previous event-related potential (ERP) experiments have consistently identified two components associated with perceptual transitions of bistable visual stimuli, the "reversal negativity" (RN) and the "late positive complex" (LPC). The RN (~200 ms post-stimulus, bilateral occipital-parietal distribution) is thought to reflect transitions between neural representations that form the moment-to-moment contents of conscious perception, while the LPC (~400 ms, central-parietal) is considered an index of post-perceptual processing related to accessing and reporting one's percept. To explore the generality of these components across sensory modalities, the present experiment utilized a novel bistable auditory stimulus. Pairs of complex tones with ambiguous pitch relationships were presented sequentially while subjects reported whether they perceived the tone pairs as ascending or descending in pitch. ERPs elicited by the tones were compared according to whether perceived pitch motion changed direction or remained the same across successive trials. An auditory reversal negativity (aRN) component was evident at ~170 ms post-stimulus over bilateral fronto-central scalp locations. An auditory LPC component (aLPC) was evident at subsequent latencies (~350 ms, fronto-central distribution). These two components may be auditory analogs of the visual RN and LPC, suggesting functionally equivalent but anatomically distinct processes in auditory vs. visual bistable perception.

  13. Children's Performance on Pseudoword Repetition Depends on Auditory Trace Quality: Evidence from Event-Related Potentials.

    Science.gov (United States)

    Čeponienė, Rita; Service, Elisabet; Kurjenluoma, Sanna; Cheour, Marie; Näätänen, Risto

    1999-01-01

    Compared the mismatch-negativity (MMN) component of auditory event-related brain potentials to explore the relationship between phonological short-term memory and auditory-sensory processing in 7- to 9-year olds scoring the highest and lowest on a pseudoword repetition test. Found that high and low repeaters differed in MMN amplitude to speech…

  14. Differences between human auditory event-related potentials (AERPs) measured at 2 and 4 months after birth

    NARCIS (Netherlands)

    van den Heuvel, Marion I.; Otte, Renee A.; Braeken, Marijke A. K. A.; Winkler, Istvan; Kushnerenko, Elena; Van den Bergh, Bea R. H.

    2015-01-01

    Infant auditory event-related potentials (AERPs) show a series of marked changes during the first year of life. These AERP changes indicate important advances in early development. The current study examined AERP differences between 2- and 4-month-old infants. An auditory oddball paradigm was …

  15. A hierarchy of event-related potential markers of auditory processing in disorders of consciousness

    Directory of Open Access Journals (Sweden)

    Steve Beukema

    2016-01-01

    Functional neuroimaging of covert perceptual and cognitive processes can inform the diagnoses and prognoses of patients with disorders of consciousness, such as the vegetative and minimally conscious states (VS; MCS). Here we report an event-related potential (ERP) paradigm for detecting a hierarchy of auditory processes in a group of healthy individuals and patients with disorders of consciousness. Simple cortical responses to sounds were observed in all 16 patients; 7/16 (44%) patients exhibited markers of the differential processing of speech and noise; and 1 patient produced evidence of the semantic processing of speech (i.e. the N400 effect). In several patients, the level of auditory processing that was evident from ERPs was higher than the abilities that were evident from behavioural assessment, indicating a greater sensitivity of ERPs in some cases. However, there were no differences in auditory processing between VS and MCS patient groups, indicating a lack of diagnostic specificity for this paradigm. Reliably detecting semantic processing by means of the N400 effect in passively listening single subjects is a challenge. Multiple assessment methods are needed in order to fully characterise the abilities of patients with disorders of consciousness.

  16. Auditory stream segregation using bandpass noises: evidence from event-related potentials

    Directory of Open Access Journals (Sweden)

    Yingjiu Nie

    2014-09-01

    The current study measured neural responses to investigate auditory stream segregation of noise stimuli with or without clear spectral contrast. Sequences of alternating A and B noise bursts were presented to elicit stream segregation in normal-hearing listeners. The successive B bursts in each sequence maintained an equal amount of temporal separation, with manipulations introduced on the last stimulus: the last B burst was delayed for 50% of the sequences and not delayed for the other 50%. The A bursts were jittered in between every two adjacent B bursts. To study the effects of spectral separation on streaming, the A and B bursts were further manipulated by using either bandpass-filtered noises widely spaced in center frequency or broadband noises. Event-related potentials (ERPs) to the last B bursts were analyzed to compare the neural responses to the delay vs. no-delay trials in both passive and attentive listening conditions. In the passive listening condition, a trend for a possible late mismatch negativity (MMN) or late discriminative negativity (LDN) response was observed only when the A and B bursts were spectrally separate, suggesting that spectral separation in the A and B burst sequences could be conducive to stream segregation at the pre-attentive level. In the attentive condition, a P300 response was consistently elicited regardless of whether there was spectral separation between the A and B bursts, indicating the facilitative role of voluntary attention in stream segregation. The results suggest that reliable ERP measures can be used as indirect indicators for auditory stream segregation in conditions of weak spectral contrast. These findings have important implications for cochlear implant (CI) studies: as spectral information available through a CI device or simulation is substantially degraded, more attention may be required to achieve stream segregation.

  17. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment

    Directory of Open Access Journals (Sweden)

    Joe Bathelt

    2017-10-01

    Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control groups, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, and peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities.

  18. Temporal integration: intentional sound discrimination does not modulate stimulus-driven processes in auditory event synthesis.

    Science.gov (United States)

    Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter

    2002-12-01

    Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event formation are fully governed by the context within which the sounds occur: perceiving the deviants as two separate sound events (the top-down effect) did not change the initial neural representation of the same deviants as one event (indexed by the MMN) in the absence of a corresponding change in the stimulus-driven sound organization.
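    The MMN discussed throughout these records is conventionally measured from a difference wave: the averaged deviant ERP minus the averaged standard ERP. A minimal numpy sketch with synthetic epochs (illustrative only; the window, amplitudes, and noise level are assumptions, not values from any study above):

```python
import numpy as np

def difference_wave(deviant_trials, standard_trials):
    """MMN-style difference wave: mean deviant ERP minus mean standard ERP.
    Inputs are (n_trials, n_samples) arrays of single-trial epochs."""
    return deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)

rng = np.random.default_rng(2)
times = np.linspace(0, 0.4, 401)  # seconds, 1 ms sampling
base = np.sin(2 * np.pi * 4 * times)  # evoked response shared by both conditions
mmn = -1.5 * np.exp(-((times - 0.15) ** 2) / (2 * 0.02 ** 2))  # negativity ~150 ms
standard = base + rng.normal(0, 0.1, (100, 401))
deviant = base + mmn + rng.normal(0, 0.1, (100, 401))
dw = difference_wave(deviant, standard)
```

    The shared evoked activity cancels in the subtraction, leaving the deviance-related negativity, which is then typically quantified as the mean amplitude in a window around its peak.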

  19. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap, whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error) and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources, even though the predictions originate in different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment.

    Science.gov (United States)

    Bathelt, Joe; Dale, Naomi; de Haan, Michelle

    2017-10-01

    Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control groups, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, and peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Attention-related modulation of auditory brainstem responses during contralateral noise exposure.

    Science.gov (United States)

    Ikeda, Kazunari; Sekiguchi, Takahiro; Hayashi, Akiko

    2008-10-29

    Two experimental factors were examined as determinants facilitating attention-related modulation of the auditory brainstem response (ABR): (i) auditory discrimination; and (ii) contralateral masking intensity. Tone pips at 80 dB sound pressure level were presented to the left ear via either single-tone exposures or oddball exposures, whereas white noise was delivered continuously to the right ear at variable intensities (none to 80 dB sound pressure level). Participants performed two tasks during stimulation, either reading a book (ignore task) or detecting target tones (attentive task). Task-related modulation within the ABR range was found only during oddball exposures at contralateral masking intensities greater than or equal to 60 dB. Attention-related modulation of the ABR can thus be detected reliably during auditory discrimination under contralateral masking of sufficient intensity.

  2. Visual-induced expectations modulate auditory cortical responses

    Directory of Open Access Journals (Sweden)

    Virginie van Wassenhove

    2015-02-01

    Active sensing has important consequences for multisensory processing (Schroeder et al. 2010). Here, we asked whether, in the absence of saccades, the position of the eyes and the timing of transient colour changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the where and the when of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or centre of the screen. Participants counted colour changes of the fixation cross while neglecting sounds, which could be presented to the left ear, the right ear, or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention being directed to visual inputs. Second, colour changes elicited robust modulations of auditory cortex responses ('when' prediction) seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses in the side contralateral to sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of when a sound will occur by changing the excitability of auditory cortices irrespective of the attended modality, eye position, or spatial congruency of auditory and visual events. By contrast, auditory cortical responses were not significantly affected by eye position, suggesting that 'where' predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds.

  3. Arousal and attention re-orienting in autism spectrum disorders: evidence from auditory event-related potentials

    Directory of Open Access Journals (Sweden)

    Elena V Orekhova

    2014-02-01

    Full Text Available The extended phenotype of autism spectrum disorders (ASD) includes a combination of arousal regulation problems, sensory modulation difficulties, and attention re-orienting deficits. Slow and inefficient re-orienting to stimuli that appear outside of the attended sensory stream is thought to be especially detrimental for social functioning. Event-related potentials (ERPs) and magnetic fields (ERFs) may help to reveal which processing stages underlying the brain response to unattended but salient sensory events are affected in individuals with ASD. Previous research has focused on two sequential stages of the brain response: automatic detection of physical changes in the auditory stream, indexed by mismatch negativity (MMN), and evaluation of stimulus novelty, indexed by the P3a component. These studies found either increased, decreased, or normal processing of deviance and novelty in individuals with ASD. This review examines these apparently conflicting results, notes gaps in previous findings, and suggests a potentially unifying hypothesis relating the dampened responses to unattended sensory events to a deficit in rapid arousal processes. Specifically, 'sensory gating' studies focused on pre-attentive arousal consistently demonstrate that the brain response to unattended and temporally novel sounds in ASD is already affected at around 100 ms after stimulus onset. We hypothesize that abnormalities in nicotinic cholinergic arousal pathways, previously reported in individuals with ASD, may contribute to these ERP/ERF aberrations and result in an attention re-orienting deficit. Such cholinergic dysfunction may be present in individuals with ASD early in life and can influence both sensory processing and attention re-orienting behavior. Identification of early neurophysiological biomarkers of cholinergic deficit would help to detect at-risk infants who could potentially benefit from particular types of therapies or interventions.
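
    The MMN discussed in this record is conventionally derived as a deviant-minus-standard difference wave. A minimal NumPy sketch (illustrative only; the function names and the 100-250 ms search window are assumptions, not details taken from the review):

```python
import numpy as np

def difference_wave(std_epochs, dev_epochs):
    """Deviant-minus-standard difference wave from baseline-corrected
    single-trial epochs, each of shape (n_trials, n_samples)."""
    return dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)

def mmn_peak(diff, times, window=(0.100, 0.250)):
    """Latency and amplitude of the most negative point of the
    difference wave inside the assumed MMN window (times in seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    i = np.argmin(diff[mask])
    return times[mask][i], diff[mask][i]
```

    In practice the epochs would come from an averaging pipeline (e.g. after filtering and artifact rejection); the sketch only captures the subtraction-and-peak logic.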

  4. Concentrated pitch discrimination modulates auditory brainstem responses during contralateral noise exposure.

    Science.gov (United States)

    Ikeda, Kazunari; Sekiguchi, Takahiro; Hayashi, Akiko

    2010-03-31

    This study examined the notion that auditory discrimination is a requisite for attention-related modulation of the auditory brainstem response (ABR) during contralateral noise exposure. While the right ear was exposed continuously to white noise at an intensity of 60-80 dB sound pressure level, tone pips at 80 dB sound pressure level were delivered to the left ear in either single-stimulus or oddball procedures. During stimulation, participants performed either a reading task (ignore) or a target-tone counting task (attend). The oddball procedure, but not the single-stimulus procedure, elicited task-related modulations in both early (ABR) and late (processing negativity) event-related potentials simultaneously. Eliciting attention-related ABR modulation during contralateral noise exposure thus appears to require auditory discrimination, and the modulation is evidently corticofugal in nature.
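
    The single-stimulus and oddball procedures contrasted above differ only in whether rare deviants are embedded among repeated standards. A hedged sketch of an oddball trial sequence (the 20% deviant probability and the minimum-gap constraint are illustrative assumptions, not parameters from this study):

```python
import random

def oddball_sequence(n_trials, p_deviant=0.2, min_gap=2, seed=1):
    """Pseudo-random list of 'standard'/'deviant' labels with at least
    `min_gap` standards between successive deviants, a common constraint
    in oddball paradigms that keeps deviants perceptually rare."""
    rng = random.Random(seed)
    seq, gap = [], min_gap  # start with the gap satisfied
    for _ in range(n_trials):
        if gap >= min_gap and rng.random() < p_deviant:
            seq.append("deviant")
            gap = 0
        else:
            seq.append("standard")
            gap += 1
    return seq
```

    Note that the gap constraint lowers the effective deviant rate below `p_deviant`, which is why paradigm descriptions usually report the realized proportion of deviants.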

  5. Auditory event-related potentials in children with benign epilepsy with centro-temporal spikes.

    Science.gov (United States)

    Tomé, David; Sampaio, Mafalda; Mendes-Ribeiro, José; Barbosa, Fernando; Marques-Teixeira, João

    2014-12-01

    Benign focal epilepsy in childhood with centro-temporal spikes (BECTS) is one of the most common forms of idiopathic epilepsy, with onset from age 3 to 14 years. Although the prognosis for children with BECTS is excellent, some studies have revealed neuropsychological deficits in many domains, including language. Auditory event-related potentials (AERPs) reflect activation of different neuronal populations and are suggested to contribute to the evaluation of auditory discrimination (N1), attention allocation and phonological categorization (N2), and echoic memory (mismatch negativity, MMN). The scarcity of existing literature on this theme motivated the present study, which aims to investigate and document AERP changes in a group of children with BECTS. AERPs were recorded during the day, to pure and vocal tones in a conventional auditory oddball paradigm, in five children with BECTS (aged 8-12; mean = 10 years; all male) and in six gender- and age-matched controls. Results revealed higher AERP amplitudes in the children with BECTS, with a slight latency delay most pronounced at fronto-central electrodes. Children with BECTS may have abnormal central auditory processing, reflected by electrophysiological measures such as AERPs. Moreover, AERPs seem a good tool to detect and reliably reveal cortical excitability in children with typical BECTS. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli combining the visual stimulus with an auditory stimulus from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability, and participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus or the visual target of an audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP response over the right temporal and right occipital areas at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a response over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented to the right or left, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  7. Saturation of auditory short-term memory causes a plateau in the sustained anterior negativity event-related potential.

    Science.gov (United States)

    Alunni-Menichini, Kristelle; Guimond, Synthia; Bermudez, Patrick; Nolden, Sophie; Lefebvre, Christine; Jolicoeur, Pierre

    2014-12-10

    The maintenance of information in auditory short-term memory (ASTM) is accompanied by a sustained anterior negativity (SAN) in the event-related potential measured during the retention interval of simple auditory memory tasks. Previous work on ASTM showed that the amplitude of the SAN increases in negativity as the number of maintained items increases. The aim of the current study was to measure the SAN and observe its behavior beyond the point of saturation of auditory short-term memory. We used atonal pure tones in sequences of 2, 4, 6, or 8 tones. Our results showed that the amplitude of the SAN increased in negativity from 2 to 4 items and then levelled off from 4 to 8 items. Behavioral results suggested that the average span in the task was slightly below 3, which was consistent with the observed plateau in the electrophysiological results. Furthermore, the amplitude of the SAN predicted individual differences in auditory memory capacity. The results support the hypothesis that the SAN is an electrophysiological index of brain activity specifically related to the maintenance of auditory information in ASTM. Copyright © 2014 Elsevier B.V. All rights reserved.
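
    The behavioral span estimate mentioned above is often summarized with a capacity statistic. One common choice for change-detection variants of such tasks (an assumption here; the study's exact behavioral procedure may differ) is Cowan's K, set size times hit rate minus false-alarm rate:

```python
def cowan_k(set_size, hit_rate, fa_rate):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (hit rate - false-alarm rate)."""
    return set_size * (hit_rate - fa_rate)
```

    A capacity-limited observer shows roughly the same K across supra-capacity set sizes, mirroring the plateau in the SAN: for example, hypothetical rates of (0.9, 0.2) at set size 4 and (0.6, 0.25) at set size 8 both give K = 2.8.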

  8. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    Science.gov (United States)

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be combined to create a stable percept of a stimulus, and access to related, coincident visual and auditory information helps with spatial tasks such as localization. However, not all visual information has an analogous auditory percept; consider viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
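
    The record does not specify how detected events were spatialized into audio. One standard way to render a horizontal position as sound, shown here purely as an illustrative assumption rather than the authors' method, is equal-power stereo panning:

```python
import math

def pan_gains(x_norm):
    """Equal-power panning: map a normalized horizontal position
    x_norm in [0, 1] (0 = far left, 1 = far right) to (left, right)
    channel gains whose squared magnitudes sum to 1."""
    theta = x_norm * math.pi / 2.0
    return math.cos(theta), math.sin(theta)
```

    Equal-power (rather than linear) panning keeps perceived loudness constant as the source sweeps across the stereo field, which matters when listeners must judge direction of motion.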

  10. Cortical Auditory Disorders: A Case of Non-Verbal Disturbances Assessed with Event-Related Brain Potentials

    Directory of Open Access Journals (Sweden)

    Sönke Johannes

    1998-01-01

    Full Text Available In the auditory modality, there has been considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left-hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia, in the absence of verbal deficits, after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30), and event-related potentials (ERPs) recorded in a modified 'oddball paradigm'. Auditory ERPs revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed in relation to the literature on cortical auditory disorders.

  12. Acute nicotine fails to alter event-related potential or behavioral performance indices of auditory distraction in cigarette smokers.

    Science.gov (United States)

    Knott, Verner J; Scherling, Carole S; Blais, Crystal M; Camarda, Jordan; Fisher, Derek J; Millar, Anne; McIntosh, Judy F

    2006-04-01

    Behavioral studies have shown that nicotine enhances performance in sustained attention tasks, but they have not provided convincing support for effects of nicotine on tasks requiring selective attention or attentional control under conditions of distraction. We investigated distractibility in 14 smokers (7 female) with event-related brain potentials (ERPs) and behavioral performance measures extracted from an auditory discrimination task requiring a choice reaction time response to short- and long-duration tones, both with and without embedded deviants. Nicotine gum (4 mg), administered in a randomized, double-blind, placebo-controlled crossover design, failed to counter deviant-elicited behavioral distraction (i.e., slower reaction times and increased response errors), and it did not influence the distracter-elicited mismatch negativity, P3a, or reorienting negativity ERP components, which reflect acoustic change detection, involuntary attentional switching, and attentional reorienting, respectively. Results are discussed in relation to a stimulus-filter model of smoking and to future research directions.

  13. An Auditory Go/No-Go Study of Event-Related Potentials in Children with Fetal Alcohol Spectrum Disorders

    DEFF Research Database (Denmark)

    Steinmann, Tobias P.; Andrew, Colin M.; Thomsen, Carsten E.

    2011-01-01

    In this study, event-related potentials (ERPs) were used to investigate the effects of prenatal alcohol exposure on response inhibition during task performance. ERPs were recorded during an auditory Go/No-Go task in two groups of children with a mean age of 12:8 years (range 11 to 14:7 years): one diagnosed with fetal alcohol syndrome (FAS) or partial FAS (FAS/PFAS; n = 12), and a control group of children of the same age whose mothers abstained from alcohol or drank minimally during pregnancy (n = 11). The children were instructed to push a button in response to the Go stimulus … trials, suggesting a less efficient early classification of the stimulus. P3 showed larger amplitudes to No-Go vs. Go stimuli in both groups. The study provides new evidence for inhibition deficits in FAS/PFAS subjects identified by ERPs.

  14. Changes of auditory event-related potentials in ovariectomized rats injected with d-galactose: Protective role of rosmarinic acid.

    Science.gov (United States)

    Kantar-Gok, Deniz; Hidisoglu, Enis; Er, Hakan; Acun, Alev Duygu; Olgar, Yusuf; Yargıcoglu, Piraye

    2017-09-01

    Rosmarinic acid (RA), which has multiple bioactive properties, might be a useful agent for protecting the central nervous system against age-related alterations. In this context, the purpose of the present study was to investigate possible protective effects of RA on the mismatch negativity (MMN) component of auditory event-related potentials (AERPs), as an indicator of auditory discrimination and echoic memory, in ovariectomized (OVX) rats injected with d-galactose, combined with neurochemical and histological analyses. Ninety female Wistar rats were randomly divided into six groups: sham control (S); RA-treated (R); OVX (O); OVX + RA-treated (OR); OVX + d-galactose-treated (OD); and OVX + d-galactose + RA-treated (ODR). Eight weeks later, MMN responses were recorded using an oddball condition. An amplitude reduction of some AERP components was observed following ovariectomy with or without d-galactose administration, and these reduction patterns differed across electrode locations. MMN amplitudes were significantly lower over temporal and right frontal locations in the O and OD groups versus the S and R groups, which was accompanied by increased levels of thiobarbituric acid reactive substances (TBARS) and 4-hydroxy-2-nonenal (4-HNE). RA treatment significantly increased AERP/MMN amplitudes and lowered TBARS/4-HNE levels in the OR and ODR groups versus the O and OD groups, respectively. Our findings support the potential benefit of RA in the prevention of auditory distortion related to estrogen deficiency and d-galactose administration, at least partly through antioxidant actions. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Auditory selective attention in adolescents with major depression: An event-related potential study.

    Science.gov (United States)

    Greimel, E; Trinkl, M; Bartling, J; Bakos, S; Grossheinrich, N; Schulte-Körne, G

    2015-02-01

    Major depression (MD) is associated with deficits in selective attention. Previous studies in adults with MD using event-related potentials (ERPs) reported abnormalities in the neurophysiological correlates of auditory selective attention. However, it is as yet unclear whether these findings generalize to MD in adolescence. Thus, the aim of the present ERP study was to explore the neural mechanisms of auditory selective attention in adolescents with MD. Twenty-four male and female unmedicated adolescents with MD and 21 control subjects were included in the study. ERPs were collected during an auditory oddball paradigm. Depressive adolescents tended to show a longer N100 latency to target and non-target tones. Moreover, MD subjects showed a prolonged latency of the P200 component to targets. Across groups, longer P200 latency was associated with a decreased tendency toward disinhibited behavior as assessed by a behavioral questionnaire. To draw more precise conclusions about differences between the neural bases of selective attention in adolescents vs. adults with MD, future studies should include both age groups and apply the same experimental setting across all subjects. The study provides strong support for abnormalities in the neurophysiological bases of selective attention in adolescents with MD at early stages of auditory information processing. Absent group differences in later ERP components reflecting voluntary attentional processes stand in contrast to results reported in adults with MD and may suggest that adolescents with MD possess mechanisms to compensate for abnormalities in the early stages of selective attention. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Temporal integration of sequential auditory events: silent period in sound pattern activates human planum temporale.

    Science.gov (United States)

    Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich

    2003-09-01

    Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain, where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between the brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and interspersed deviant stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared with increased-loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

  17. Neuronal activity in primate prefrontal cortex related to goal-directed behavior during auditory working memory tasks.

    Science.gov (United States)

    Huang, Ying; Brosch, Michael

    2016-06-01

    Prefrontal cortex (PFC) has been documented to play critical roles in goal-directed behaviors, like representing goal-relevant events and working memory (WM). However, neurophysiological evidence for such roles of PFC has been obtained mainly with visual tasks but rarely with auditory tasks. In the present study, we tested roles of PFC in auditory goal-directed behaviors by recording local field potentials in the auditory region of left ventrolateral PFC while a monkey performed auditory WM tasks. The tasks consisted of multiple events and required the monkey to change its mental states to achieve the reward. The events were auditory and visual stimuli, as well as specific actions. Mental states were engaging in the tasks and holding task-relevant information in auditory WM. We found that, although based on recordings from one hemisphere in one monkey only, PFC represented multiple events that were important for achieving reward, including auditory and visual stimuli like turning on and off an LED, as well as bar touch. The responses to auditory events depended on the tasks and on the context of the tasks. This provides support for the idea that neuronal representations in PFC are flexible and can be related to the behavioral meaning of stimuli. We also found that engaging in the tasks and holding information in auditory WM were associated with persistent changes of slow potentials, both of which are essential for auditory goal-directed behaviors. Our study, on a single hemisphere in a single monkey, reveals roles of PFC in auditory goal-directed behaviors similar to those in visual goal-directed behaviors, suggesting that functions of PFC in goal-directed behaviors are probably common across the auditory and visual modality. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Effects of acute nicotine on event-related potential and performance indices of auditory distraction in nonsmokers.

    Science.gov (United States)

    Knott, Verner J; Bolton, Kiley; Heenan, Adam; Shah, Dhrasti; Fisher, Derek J; Villeneuve, Crystal

    2009-05-01

    Although nicotine has been purported to enhance attentional processes, this has been evidenced mostly in tasks of sustained attention, and its effects on selective attention and attentional control under conditions of distraction are less convincing. This study investigated the effects of nicotine on distractibility in 21 nonsmokers (11 male) with event-related potentials (ERPs) and behavioral performance measures extracted from an auditory discrimination task requiring a choice reaction time response to short- and long-duration tones, with and without embedded deviants. Administered in a randomized, double-blind, placebo-controlled crossover design, nicotine gum (6 mg) failed to counter deviant-elicited behavioral distraction, characterized by longer reaction times and increased response errors. Of the deviant-elicited ERP components, nicotine did not alter the P3a-indexed attentional switching to the deviant, but in females it tended to diminish automatic processing of the deviant, as shown by a smaller mismatch negativity component, and it attenuated attentional reorienting following deviant-elicited distraction, as reflected by a reduced reorienting negativity component. Results are discussed in relation to attentional models of nicotine and future research directions.

  19. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Full Text Available Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
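
    The amplitude and phase-locking measures central to the frequency-following-response findings above can be sketched as follows (a minimal illustration; the study's actual pipeline, filtering, and frequency bands are not reproduced, and the function names are assumptions):

```python
import numpy as np

def spectral_amplitude(avg_response, fs, freq):
    """Amplitude of the averaged frequency-following response at `freq` Hz,
    read from the nearest FFT bin."""
    n = len(avg_response)
    spectrum = np.fft.rfft(avg_response) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return 2.0 * np.abs(spectrum[np.argmin(np.abs(freqs - freq))])

def phase_locking_value(epochs, fs, freq):
    """Inter-trial phase consistency at `freq`: 1 = perfectly phase-locked
    across trials, near 0 = random phase. epochs: (n_trials, n_samples)."""
    n = epochs.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))
    phases = np.angle(np.fft.rfft(epochs, axis=1)[:, k])
    return float(np.abs(np.exp(1j * phases).mean()))
```

    Restricting these measures to the task-relevant frequency band is what makes the reported modulation "specific": amplitude and phase locking are compared at the attended versus unattended speakers' frequencies.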

  20. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses at higher levels of the central auditory system in subjects with persistent developmental stuttering (PDS) using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normally fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks: subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency.

  1. Auditory cortical and hippocampal-system mismatch responses to duration deviants in urethane-anesthetized rats.

    Directory of Open Access Journals (Sweden)

    Timo Ruusuvirta

    Full Text Available Any change in the invariant aspects of the auditory environment is of potential importance. The human brain preattentively, or automatically, detects such changes. The mismatch negativity (MMN) of event-related potentials (ERPs) reflects this initial stage of auditory change detection. The origin of the MMN is held to be cortical. The hippocampus is associated with the later-generated P3a of ERPs, reflecting involuntary attention switches towards auditory changes that are high in magnitude. The evidence for this cortico-hippocampal dichotomy is scarce, however. To shed further light on this issue, auditory cortical and hippocampal-system (CA1, dentate gyrus, subiculum) local field potentials were recorded in urethane-anesthetized rats. A tone rarely deviating in duration (deviant) was interspersed with a repeated tone (standard). Two standard-to-standard intervals (SSI) and standard-to-deviant intervals (SDI) of 200 ms vs. 500 ms were applied in different combinations to vary the observability of responses resembling the MMN (mismatch responses). Mismatch responses were observed at 51.5-89 ms with the 500-ms SSI coupled with the 200-ms SDI, but not with the three remaining combinations. Most importantly, the responses appeared in both the auditory-cortical and hippocampal locations. The findings suggest that the hippocampus may play a role in the (cortical) manifestation of the MMN.

  2. Trait aspects of auditory mismatch negativity predict response to auditory training in individuals with early illness schizophrenia.

    Science.gov (United States)

    Biagianti, Bruno; Roach, Brian J; Fisher, Melissa; Loewy, Rachel; Ford, Judith M; Vinogradov, Sophia; Mathalon, Daniel H

    2017-01-01

    Individuals with schizophrenia have heterogeneous impairments of the auditory processing system that likely mediate differences in the cognitive gains induced by auditory training (AT). Mismatch negativity (MMN) is an event-related potential component reflecting auditory echoic memory, and its amplitude reduction in schizophrenia has been linked to cognitive deficits. Therefore, MMN may predict response to AT and identify individuals with schizophrenia who have the most to gain from AT. Furthermore, to the extent that AT strengthens auditory deviance processing, MMN may also serve as a readout of the underlying changes in the auditory system induced by AT. Fifty-six individuals early in the course of a schizophrenia-spectrum illness (ESZ) were randomly assigned to 40 h of AT or Computer Games (CG). Cognitive assessments and EEG recordings during a multi-deviant MMN paradigm were obtained before and after AT and CG. Changes in these measures were compared between the treatment groups. Baseline and trait-like MMN data were evaluated as predictors of treatment response. MMN data collected with the same paradigm from a sample of Healthy Controls (HC; n = 105) were compared to baseline MMN data from the ESZ group. Compared to HC, ESZ individuals showed significant MMN reductions at baseline (p = .003). Reduced Double-Deviant MMN was associated with greater general cognitive impairment in ESZ individuals (p = .020). Neither ESZ intervention group showed significant change in MMN. We found high correlations in all MMN deviant types (rs = .59-.68, all ps < .001) between baseline and post-intervention amplitudes irrespective of treatment group, suggesting trait-like stability of the MMN signal. Greater deficits in trait-like Double-Deviant MMN predicted greater cognitive improvements in the AT group (p = .02), but not in the CG group. In this sample of ESZ individuals, AT had no effect on auditory deviance processing as assessed by MMN.
In ESZ individuals, baseline MMN

  3. Auditory sensory memory in 2-year-old children: an event-related potential study.

    Science.gov (United States)

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-03-26

    Auditory sensory memory is assumed to play an important role in cognitive development, but little is known about it in young children. The aim of this study was to estimate the duration of auditory sensory memory in 2-year-old children. We recorded the mismatch negativity in response to tone stimuli presented with different interstimulus intervals. Our findings suggest that in 2-year-old children the memory representation of the standard tone remains in the sensory memory store for at least 1 s but for less than 2 s. Recording the mismatch negativity with stimuli presented at various interstimulus intervals seems to be a useful method for studying the relationship between auditory sensory memory and normal and disturbed cognitive development.
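
A quick sketch of the computation such MMN studies rest on: average the standard and deviant epochs separately, subtract, and take the most negative value of the difference wave in a post-stimulus latency window. All numbers below (sampling rate, trial counts, the injected deflection) are invented for illustration, not taken from the study.

```python
import numpy as np

def difference_wave(standard_trials, deviant_trials):
    """MMN is conventionally the deviant-minus-standard difference
    of the two trial-averaged ERPs."""
    return deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)

def peak_in_window(wave, times, t_start, t_end):
    """Most negative value of the difference wave in a latency window (s)."""
    mask = (times >= t_start) & (times <= t_end)
    return wave[mask].min()

# Synthetic single-channel epochs: 100 trials x 300 samples (600 ms at 500 Hz).
rng = np.random.default_rng(0)
times = np.arange(300) / 500.0
standard = rng.normal(0.0, 1.0, (100, 300))
deviant = rng.normal(0.0, 1.0, (100, 300))
# Give deviants a mock MMN: a negative deflection centered at 150 ms.
deviant += -2.0 * np.exp(-((times - 0.15) ** 2) / (2 * 0.03 ** 2))

mmn = difference_wave(standard, deviant)
```

With real data the same subtraction is computed per interstimulus-interval condition, and the question becomes whether the deflection survives as the interval grows.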

  4. DESCRIPTION OF BRAINSTEM AUDITORY EVOKED RESPONSES (AIR AND BONE CONDUCTION IN CHILDREN WITH NORMAL HEARING

    Directory of Open Access Journals (Sweden)

    A. V. Pashkov

    2014-01-01

    Full Text Available Diagnosis of hearing level in small children with conductive hearing loss associated with congenital craniofacial abnormalities, particularly with agenesis of the external ear and external auditory meatus, is a pressing issue. Conventional methods of assessing hearing in the first years of life, i.e. registration of brainstem auditory evoked responses to acoustic stimuli under air conduction, do not give an indication of the auditory analyzer’s condition, owing to potential conductive hearing loss in these patients. This study was aimed at assessing the potential of diagnosing the auditory analyzer’s function by registering brainstem auditory evoked responses (BAERs) to acoustic stimuli transmitted by means of a bone vibrator. The study involved 17 children aged 3–10 years with normal hearing. We compared parameters of the registered brainstem auditory evoked responses (peak V) depending on the type of stimulus transmission (air/bone) in children with normal hearing. The data on thresholds of the BAERs registered to acoustic stimuli under air and bone conduction obtained in this study are comparable; hearing thresholds for acoustic stimulation by means of a bone vibrator correlate with the results of the BAERs registered to stimuli transmitted through air-conduction earphones (r = 0.9). The high correlation of thresholds of BAERs to stimuli transmitted by means of a bone vibrator with thresholds of BAERs registered when air-conduction earphones were used helps to assess the auditory analyzer’s condition in patients with any form of conductive hearing loss.

  5. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

    We investigated the existence of cross-modal sensory gating as reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), in comparison with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.

  6. Prestimulus subsequent memory effects for auditory and visual events.

    Science.gov (United States)

    Otten, Leun J; Quayle, Angela H; Puvaneswaran, Bhamini

    2010-06-01

    It has been assumed that the effective encoding of information into memory primarily depends on neural activity elicited when an event is initially encountered. Recently, it has been shown that memory formation also relies on neural activity just before an event. The precise role of such activity in memory is currently unknown. Here, we address whether prestimulus activity affects the encoding of auditory and visual events, is set up on a trial-by-trial basis, and varies as a function of the type of recognition judgment an item later receives. Electrical brain activity was recorded from the scalps of 24 healthy young adults while they made semantic judgments on randomly intermixed series of visual and auditory words. Each word was preceded by a cue signaling the modality of the upcoming word. Auditory words were preceded by auditory cues and visual words by visual cues. A recognition memory test with remember/know judgments followed after a delay of about 45 min. As observed previously, a negative-going, frontally distributed modulation just before visual word onset predicted later recollection of the word. Crucially, the same effect was found for auditory words and observed on stay as well as switch trials. These findings emphasize the flexibility and general role of prestimulus activity in memory formation, and support a functional interpretation of the activity in terms of semantic preparation. At least with an unpredictable trial sequence, the activity is set up anew on each trial.

  7. Different event-related patterns of gamma-band power in brain waves of fast- and slow-reacting subjects.

    Science.gov (United States)

    Jokeit, H; Makeig, S

    1994-01-01

    Fast- and slow-reacting subjects exhibit different patterns of gamma-band electroencephalogram (EEG) activity when responding as quickly as possible to auditory stimuli. This result appears to confirm long-standing speculations of Wundt that fast- and slow-reacting subjects produce speeded reactions in different ways and demonstrates that analysis of event-related changes in the amplitude of EEG activity recorded from the human scalp can reveal information about event-related brain processes unavailable using event-related potential measures. Time-varying spectral power in a selected (35- to 43-Hz) gamma frequency band was averaged across trials in two experimental conditions: passive listening and speeded reacting to binaural clicks, forming 40-Hz event-related spectral responses. Factor analysis of between-subject event-related spectral response differences split subjects into two near-equal groups composed of faster- and slower-reacting subjects. In faster-reacting subjects, 40-Hz power peaked near 200 ms and 400 ms poststimulus in the react condition, whereas in slower-reacting subjects, 40-Hz power just before stimulus delivery was larger in the react condition. These group differences were preserved in separate averages of relatively long and short reaction-time epochs for each group. Gamma-band (20-60 Hz)-filtered event-related potential response averages did not differ between the two groups or conditions. Because of this and because gamma-band power in the auditory event-related potential is small compared with the EEG, the observed event-related spectral response features must represent gamma-band EEG activity reliably induced by, but not phase-locked to, experimental stimuli or events. PMID:8022783
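
The distinction the authors draw, gamma activity reliably induced by a stimulus but not phase-locked to it, can be made concrete with a toy computation: a burst whose phase jitters from trial to trial cancels in the ordinary ERP average yet survives in trial-averaged band power. A minimal sketch with invented parameters (the 40 Hz burst, the 35-43 Hz band, and the filter order are all assumptions for illustration):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_timecourse(epochs, fs, lo, hi):
    """Band-pass each trial, square the Hilbert envelope, and average
    across trials: an event-related spectral power estimate that keeps
    activity regardless of its phase relation to the stimulus."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    return (np.abs(hilbert(filtered, axis=-1)) ** 2).mean(axis=0)

fs = 250
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
# 60 noisy trials, each with a 40 Hz burst near 200 ms whose phase is
# random per trial, so it cancels in the phase-locked (ERP) average.
epochs = rng.normal(0.0, 0.5, (60, t.size))
burst_env = 1.5 * np.exp(-((t - 0.2) ** 2) / (2 * 0.05 ** 2))
for trial in epochs:
    trial += burst_env * np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi))

erp = epochs.mean(axis=0)                        # burst largely cancels here
ers = band_power_timecourse(epochs, fs, 35, 43)  # burst survives here
```

This is the sense in which induced gamma is invisible to ERP averaging but visible to spectral averaging.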

  8. Temporal and identity prediction in visual-auditory events: Electrophysiological evidence from stimulus omissions.

    Science.gov (United States)

    van Laarhoven, Thijs; Stekelenburg, Jeroen J; Vroomen, Jean

    2017-04-15

    A rare omission of a sound that is predictable by anticipatory visual information induces an early negative omission response (oN1) in the EEG during the period of silence where the sound was expected. It was previously suggested that the oN1 was primarily driven by the identity of the anticipated sound. Here, we examined the role of temporal prediction in conjunction with identity prediction of the anticipated sound in the evocation of the auditory oN1. With incongruent audiovisual stimuli (a video of a handclap that is consistently combined with the sound of a car horn) we demonstrate in Experiment 1 that a natural match in identity between the visual and auditory stimulus is not required for inducing the oN1, and that the perceptual system can adapt predictions to unnatural stimulus events. In Experiment 2 we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Relative to the natural stimulus with correct auditory timing and matching audiovisual identity, the oN1 was abolished when either the timing or the identity of the sound could not be predicted reliably from the video. Our study demonstrates the flexibility of the perceptual system in predictive processing (Experiment 1) and also shows that precise predictions of timing and content are both essential elements for inducing an oN1 (Experiment 2). Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation, compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  10. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    Science.gov (United States)

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  11. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    Science.gov (United States)

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Prepulse inhibition of auditory change-related cortical responses

    Directory of Open Access Journals (Sweden)

    Inui Koji

    2012-10-01

    Full Text Available Abstract Background Prepulse inhibition (PPI) of the startle response is an important tool to investigate the biology of schizophrenia. PPI is usually observed by use of a startle reflex such as blinking following an intense sound. A similar phenomenon has not been reported for cortical responses. Results In 12 healthy subjects, change-related cortical activity in response to an abrupt increase of sound pressure by 5 dB above the background of 65 dB SPL (test stimulus) was measured using magnetoencephalography. The test stimulus evoked a clear cortical response peaking at around 130 ms (Change-N1m). In Experiment 1, effects of the intensity of a prepulse (0.5–5 dB) on the test response were examined using a paired stimulation paradigm. In Experiment 2, effects of the interval between the prepulse and test stimulus were examined using interstimulus intervals (ISIs) of 50–350 ms. When the test stimulus was preceded by the prepulse, the Change-N1m was more strongly inhibited by a stronger prepulse (Experiment 1) and a shorter-ISI prepulse (Experiment 2). In addition, the amplitude of the test Change-N1m correlated positively with both the amplitude of the prepulse-evoked response and the degree of inhibition, suggesting that subjects who are more sensitive to the auditory change are more strongly inhibited by the prepulse. Conclusions Since Change-N1m is easy to measure and control, it would be a valuable tool to investigate mechanisms of sensory gating or the biology of certain mental diseases such as schizophrenia.

  13. Event-related delta, theta, alpha and gamma correlates to auditory oddball processing during Vipassana meditation

    Science.gov (United States)

    Delorme, Arnaud; Polich, John

    2013-01-01

    Long-term Vipassana meditators sat in meditation vs. a control (instructed mind wandering) state for 25 min while electroencephalography (EEG) was recorded; condition order was counterbalanced. For the last 4 min, a three-stimulus auditory oddball series was presented through headphones during both meditation and control periods, with no task imposed. Time-frequency analysis demonstrated that meditation, relative to the control condition, evinced decreased evoked delta (2–4 Hz) power to distracter stimuli concomitantly with a greater event-related reduction of late (500–900 ms) alpha-1 (8–10 Hz) activity, which indexed altered dynamics of attentional engagement to distracters. Additionally, standard stimuli were associated with increased early event-related alpha phase synchrony (inter-trial coherence) and evoked theta (4–8 Hz) phase synchrony, suggesting enhanced processing of the habituated standard background stimuli. Finally, during meditation, there was a greater differential early-evoked gamma power to the different stimulus classes. Correlation analysis indicated that this effect stemmed from a meditation state-related increase in early distracter-evoked gamma power and phase synchrony specific to longer-term expert practitioners. The findings suggest that Vipassana meditation evokes a brain state of enhanced perceptual clarity and decreased automated reactivity. PMID:22648958
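
Inter-trial coherence of the kind reported here can be computed by convolving each trial with a complex Morlet wavelet at the frequency of interest, normalizing each sample to a unit phasor, and averaging those phasors across trials: the magnitude of the mean runs from ~0 (random phase) to 1 (perfect phase locking). A sketch on synthetic data; the 10 Hz frequency, wavelet shape, and analysis windows are all illustrative assumptions:

```python
import numpy as np

def morlet(fs, freq, n_cycles=4):
    """Complex Morlet wavelet: a complex sinusoid under a Gaussian envelope."""
    dur = n_cycles / freq                      # support (s) on each side
    t = np.arange(-dur, dur, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * (dur / 4) ** 2))

def itc(epochs, fs, freq):
    """Inter-trial coherence: magnitude of the across-trial mean unit phasor."""
    w = morlet(fs, freq)
    analytic = np.array([np.convolve(tr, w, mode="same") for tr in epochs])
    phasors = analytic / np.abs(analytic)
    return np.abs(phasors.mean(axis=0))

fs = 200
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)
# 80 trials: a phase-locked 10 Hz wave in the first half, pure noise after.
epochs = np.array([np.sin(2 * np.pi * 10 * t) * (t < 0.5) + rng.normal(0, 1.0, t.size)
                   for _ in range(80)])
coh = itc(epochs, fs, 10.0)
early = coh[(t > 0.15) & (t < 0.35)].mean()   # phase-locked segment
late = coh[(t > 0.70) & (t < 0.90)].mean()    # noise-only segment
```

Evoked power and phase synchrony dissociate: a response can grow in amplitude without locking its phase, which is why studies like this one report both.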

  14. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results on twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
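
A signal-to-noise ratio for a response that follows an amplitude modulation can be estimated, steady-state style, as spectral power in the bin at the modulation frequency divided by the mean power of neighboring bins. This is a common SSR-type estimate and an assumption on my part, not necessarily the paper's exact metric:

```python
import numpy as np

def am_response_snr(signal, fs, mf, n_neighbors=5):
    """Power at the modulation frequency (mf) divided by the mean power
    of the n_neighbors bins on each side (the noise-floor estimate)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    k = int(np.abs(freqs - mf).argmin())
    noise = np.r_[spec[k - n_neighbors:k], spec[k + 1:k + 1 + n_neighbors]]
    return spec[k] / noise.mean()

fs = 250
t = np.arange(0, 2.0, 1 / fs)                 # 2 s -> 0.5 Hz resolution
rng = np.random.default_rng(3)
# A mock evoked response following a 4 Hz AM envelope, buried in noise.
response = np.sin(2 * np.pi * 4 * t) + rng.normal(0, 1.0, t.size)
snr_at_mf = am_response_snr(response, fs, 4.0)
snr_elsewhere = am_response_snr(response, fs, 17.0)
```

Comparing such per-area SNRs at each modulation frequency is one way to build the kind of activation maps the study describes.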

  15. Dividing time: Concurrent timing of auditory and visual events by young and elderly adults

    OpenAIRE

    McAuley, J. Devin; Miller, Jonathan P.; Wang, Mo; Pang, Kevin C. H.

    2010-01-01

    This article examines age differences in individuals’ ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory ta...

  16. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    Science.gov (United States)

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successively reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self

  17. Neurophysiological Effects of Meditation Based on Evoked and Event Related Potential Recordings.

    Science.gov (United States)

    Singh, Nilkamal; Telles, Shirley

    2015-01-01

    Evoked potentials (EPs) are a relatively noninvasive method to assess the integrity of sensory pathways. As the neural generators for most of the components are relatively well worked out, EPs have been used to understand the changes occurring during meditation. Event-related potentials (ERPs) yield useful information about the response to tasks, usually assessing attention. A brief review of the literature yielded eleven studies on EPs and seventeen on ERPs from 1978 to 2014. The EP studies covered short, mid, and long latency EPs, using both auditory and visual modalities. ERP studies reported the effects of meditation on tasks such as the auditory oddball paradigm, the attentional blink task, mismatch negativity, and affective picture viewing among others. Both EPs and ERPs were recorded in several meditations detailed in the review. Maximum changes occurred in mid-latency (auditory) EPs, suggesting that maximum changes occur in the corresponding neural generators in the thalamus, thalamic radiations, and primary auditory cortical areas. ERP studies showed meditation can increase attention and enhance efficiency of brain resource allocation with greater emotional control.

  18. Neurophysiological Effects of Meditation Based on Evoked and Event Related Potential Recordings

    Science.gov (United States)

    Singh, Nilkamal; Telles, Shirley

    2015-01-01

    Evoked potentials (EPs) are a relatively noninvasive method to assess the integrity of sensory pathways. As the neural generators for most of the components are relatively well worked out, EPs have been used to understand the changes occurring during meditation. Event-related potentials (ERPs) yield useful information about the response to tasks, usually assessing attention. A brief review of the literature yielded eleven studies on EPs and seventeen on ERPs from 1978 to 2014. The EP studies covered short, mid, and long latency EPs, using both auditory and visual modalities. ERP studies reported the effects of meditation on tasks such as the auditory oddball paradigm, the attentional blink task, mismatch negativity, and affective picture viewing among others. Both EPs and ERPs were recorded in several meditations detailed in the review. Maximum changes occurred in mid-latency (auditory) EPs, suggesting that maximum changes occur in the corresponding neural generators in the thalamus, thalamic radiations, and primary auditory cortical areas. ERP studies showed meditation can increase attention and enhance efficiency of brain resource allocation with greater emotional control. PMID:26137479

  19. Electrophysiological Evidence of Developmental Changes in the Duration of Auditory Sensory Memory.

    Science.gov (United States)

    Gomes, Hilary; And Others

    1999-01-01

    Investigated developmental change in duration of auditory sensory memory for tonal frequency by measuring mismatch negativity, an electrophysiological component of the auditory event-related potential that is relatively insensitive to attention and does not require a behavioral response. Findings among children and adults suggest that there are…

  20. Effects of twenty-minute 3G mobile phone irradiation on event related potential components and early gamma synchronization in auditory oddball paradigm.

    Science.gov (United States)

    Stefanics, G; Thuróczy, G; Kellényi, L; Hernádi, I

    2008-11-19

    We investigated the potential effects of 20 min irradiation from a new generation Universal Mobile Telecommunication System (UMTS) 3G mobile phone on human event related potentials (ERPs) in an auditory oddball paradigm. In a double-blind task design, subjects were exposed to either genuine or sham irradiation in two separate sessions. Before and after irradiation subjects were presented with a random series of 50 ms tone bursts (frequent standards: 1 kHz, P=0.8; rare deviants: 1.5 kHz, P=0.2) at a mean repetition rate of 1500 ms while the electroencephalogram (EEG) was recorded. The subjects' task was to silently count the appearance of targets. The amplitude and latency of the N100, N200, P200 and P300 components for targets and standards were analyzed in 29 subjects. We found no significant effects of electromagnetic field (EMF) irradiation on the amplitude and latency of the above ERP components. In order to study possible effects of EMF on attentional processes, we applied a wavelet-based time-frequency method to analyze the early gamma component of brain responses to auditory stimuli. We found that the early evoked gamma activity was insensitive to UMTS RF exposure. Our results support the notion that a single 20 min irradiation from new generation 3G mobile phones does not induce measurable changes in latency or amplitude of ERP components or in oscillatory gamma-band activity in an auditory oddball paradigm.

  1. Dividing time: concurrent timing of auditory and visual events by young and elderly adults.

    Science.gov (United States)

    McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H

    2010-07-01

    This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.

  2. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    Science.gov (United States)

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.
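
The adapter-distance effect in Experiment 1 reduces to a simple attenuation index per condition: one minus the ratio of the adapted response to the response to the test tone alone. The numbers below are made up to mirror the reported shape (less attenuation as the adapter moves away from the test location); they are not taken from the paper.

```python
def attenuation(amp_alone, amp_adapted):
    """Fractional reduction of the test response caused by the adapter."""
    return 1.0 - amp_adapted / amp_alone

# Hypothetical mean P2 amplitudes (microvolts) for one listener: the test
# tone alone, then after adapters 0, 30, and 60 degrees away.
p2_alone = 4.0
p2_adapted = {0: 2.0, 30: 2.8, 60: 3.4}

atten = {deg: attenuation(p2_alone, amp) for deg, amp in p2_adapted.items()}
```

Plotting this index against adapter distance gives the inverse relationship the study reports at the P2 latency; Experiment 2 then asks how an incongruent flash changes it.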

  3. Neurofeedback-Based Enhancement of Single-Trial Auditory Evoked Potentials: Treatment of Auditory Verbal Hallucinations in Schizophrenia.

    Science.gov (United States)

    Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas

    2018-03-01

    Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component presumably reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another, unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre/post comparison (t(4) = 2.71, P = .054); however, no significant differences were found in specific hallucination-related symptoms (t(7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions (r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group (r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback

  4. Presbycusis and auditory brainstem responses: a review

    Directory of Open Access Journals (Sweden)

    Shilpa Khullar

    2011-06-01

    Age-related hearing loss, or presbycusis, is a complex phenomenon consisting of elevation of hearing levels as well as changes in auditory processing. It is commonly classified into four categories depending on the cause. Auditory brainstem responses (ABRs) are a type of early evoked potential recorded within the first 10 ms of stimulation. They represent the synchronized activity of the auditory nerve and the brainstem. Some of the changes that occur in the aging auditory system may significantly influence the interpretation of ABRs in comparison with those of young adults. The waves of the ABR are described in terms of amplitude, latency, and the interpeak latencies of the different waves. There is a tendency for the amplitude to decrease and the absolute latencies to increase with advancing age, but these trends are not always clear because the increase in threshold with advancing age acts as a major confounding factor in the interpretation of ABRs.

  5. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Differential activity in left inferior frontal gyrus for pseudo and real words: an event-related functional MRI study on auditory lexical decision

    International Nuclear Information System (INIS)

    Xiao Zhuangwei; Xu Weixiong; Zhang Xuexin; Wang Xiaoyi; Weng Xuchu; Wu Renhua; Wu Xiaoping

    2006-01-01

    Objective: To study lexical processing of pseudo words and real words by using a fast event-related functional MRI (ER-fMRI) design. Methods: Participants performed an auditory lexical decision task on a list of pseudo-randomly intermixed real and pseudo Chinese two-character (or two-syllable) words. Pseudo words were constructed by recombining constituent characters of the real words to control for sublexical code properties. Results: The behavioral performance of fourteen participants indicated that responses to pseudo words were significantly slower and less accurate than to real words (mean error rate: 9.9% versus 3.9%; mean reaction time: 1618 ms versus 1143 ms). Processing of pseudo words and real words activated a highly comparable network of brain regions, including bilateral inferior frontal gyrus, superior and middle temporal gyri, calcarine and lingual gyri, and left supramarginal gyrus. Mirroring the behavioral lexical effect, left inferior frontal gyrus (IFG) was significantly more activated for pseudo words than for real words. Conclusion: The results indicate that the activity of left inferior frontal gyrus in judging pseudo words and real words is related not to grapheme-to-phoneme conversion but to making positive versus negative responses in decision making. (authors)

  7. The role of auditory transient and deviance processing in distraction of task performance: a combined behavioral and event-related brain potential study

    Directory of Open Access Journals (Sweden)

    Stefan eBerti

    2013-07-01

    Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday experience. Different types of changes can be distracting, including the sudden onset of a transient sound and a slight deviation within otherwise regular auditory background stimulation. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanism, and it has been demonstrated that this mechanism is capable of distracting ongoing task performance. In contrast, it remains open whether transient detection, which does not rely on predictive coding mechanisms, can trigger behavioral distraction, too. In the present study, the effect of rare auditory changes on visual task performance is tested in an auditory-visual cross-modal distraction paradigm. The rare changes are either embedded within a continuous standard stimulation (triggering deviance detection) or presented within an otherwise silent situation (triggering transient detection). In the event-related brain potentials, deviants elicited the mismatch negativity (MMN) while transients elicited an enhanced N1 component, mirroring pre-attentive change detection in both conditions but on the basis of different neuro-cognitive processes. These sensory components are followed by attention-related ERP components including the P3a and the reorienting negativity (RON). This demonstrates that both types of changes trigger switches of attention. Finally, distraction of task performance is observable, too, but the impact of deviants is greater than that of transients. These findings suggest different routes of distraction, allowing for the automatic processing of a wide range of potentially relevant changes in the environment as a prerequisite for adaptive behavior.

  8. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    This study investigates behavioural and objective measures of temporal auditory processing and their relation to the ability to understand speech in noise. The experiments were carried out on a homogeneous group of seven hearing-impaired listeners with normal sensitivity at low frequencies (up to 1 kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normal-hearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM) detection. In addition, auditory brainstem responses (ABRs) to clicks and broadband rising chirps were recorded. Furthermore, speech reception thresholds (SRTs) were determined for Danish sentences in speech-shaped noise. The main findings were: (1) SRTs were neither correlated with hearing sensitivity...

  9. Assessing cross-modal target transition effects with a visual-auditory oddball.

    Science.gov (United States)

    Kiat, John E

    2018-04-30

    Prior research has shown that contextual manipulations involving temporal and sequence-related factors significantly moderate attention-related responses, as indexed by the P3b event-related potential, towards infrequent (i.e., deviant) target oddball stimuli. However, significantly less research has looked at the influence of cross-modal switching on P3b responding, with the impact of target-to-target cross-modal transitions being virtually unstudied. To address this gap, this study recorded high-density (256 electrodes) EEG data from twenty-five participants as they completed a cross-modal visual-auditory oddball task. This task comprised unimodal visual (70% nontargets: 30% deviant-targets) and auditory (70% nontargets: 30% deviant-targets) oddballs presented in fixed alternating order (i.e., visual-auditory-visual-auditory, etc.), with participants tasked with detecting deviant-targets in both modalities. Differences in the P3b response towards deviant-targets were analyzed as a function of the preceding deviant-target's presentation modality, using temporal-spatial PCA decomposition. In line with predictions, the results indicate that the ERP response to auditory deviant-targets preceded by visual deviant-targets exhibits an elevated P3b relative to the processing of auditory deviant-targets preceded by auditory deviant-targets. However, the processing of visual deviant-targets preceded by auditory deviant-targets exhibited a reduced P3b response relative to the P3b response towards visual deviant-targets preceded by visual deviant-targets. These findings provide the first demonstration of temporally and perceptually decoupled target-to-target cross-modal transitions moderating P3b responses in the oddball paradigm, generally providing support for the context-updating interpretation of the P3b response. Copyright © 2017. Published by Elsevier B.V.
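    The temporal-spatial PCA decomposition used in the study above can be illustrated with a minimal, generic sketch on simulated data: a first PCA treats every trial/electrode waveform as an observation and extracts temporal factors, and a second PCA decomposes each factor's electrode weights spatially. All shapes, the injected waveform, and the component counts here are invented for the example; this is not the author's pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_trials, n_ch, n_t = 60, 32, 100           # trials x electrodes x time points
X = rng.normal(size=(n_trials, n_ch, n_t))  # noise-only "EEG"
# inject a shared temporal waveform into the first 8 channels (a fake ERP factor)
comp = np.sin(np.linspace(0.0, np.pi, n_t))
X[:, :8, :] += 2.0 * comp

# step 1: temporal PCA -- every trial/channel waveform is one observation
waveforms = X.reshape(-1, n_t)              # (trials*channels, time)
temporal = PCA(n_components=5).fit(waveforms)
scores = temporal.transform(waveforms).reshape(n_trials, n_ch, 5)

# step 2: spatial PCA on the channel weights of the first temporal factor
spatial = PCA(n_components=3).fit(scores[:, :, 0])
```

    With this construction the first temporal component recovers the injected waveform (up to sign), which is exactly the property that makes the decomposition useful for separating overlapping ERP components.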

  10. Event-related potentials dissociate perceptual from response-related age effects in visual search

    DEFF Research Database (Denmark)

    Wiegand, Iris; Müller, Hermann J.; Finke, Kathrin

    2013-01-01

    measures with lateralized event-related potentials of younger and older adults performing a compound-search task, in which the target-defining dimension of a pop-out target (color/shape) and the response-critical target feature (vertical/horizontal stripes) varied independently across trials. Slower responses in older participants were associated with age differences in all analyzed event-related potentials from perception to response, indicating that behavioral slowing originates from multiple stages within the information-processing stream. Furthermore, analyses of carry-over effects from one trial...

  11. A Basic Study on P300 Event-Related Potentials Evoked by Simultaneous Presentation of Visual and Auditory Stimuli for the Communication Interface

    Directory of Open Access Journals (Sweden)

    Masami Hashimoto

    2011-10-01

    We have been engaged in the development of a brain-computer interface (BCI) based on the cognitive P300 event-related potentials (ERPs) evoked by simultaneous presentation of visual and auditory stimuli, in order to assist with communication for persons with severe physical limitations. The purpose of the simultaneous presentation of these stimuli is to give the user more choices as commands. First, we extracted P300 ERPs using either a visual or an auditory oddball paradigm, and measured the amplitude and latency of the P300 ERPs. Second, visual and auditory stimuli were presented simultaneously, and we measured the P300 ERPs while varying the combinations of these stimuli. In this report, we used 3 colors as visual stimuli and 3 types of MIDI sounds as auditory stimuli. Two types of simultaneous presentation were examined. One used random combinations. The other, called group stimulation, combined one color, such as red, with one MIDI sound, such as piano, to make a group; three groups were made. Each group was presented to users randomly. We evaluated the possibility of a BCI using these stimuli from the amplitudes and latencies of the P300 ERPs.
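    As background for how P300 amplitude and latency are measured in oddball work like this, here is a minimal sketch on simulated single-trial epochs: average the rare (target) and frequent (nontarget) trials separately, subtract, and take the peak of the difference wave. The sampling rate, noise level, and waveform are invented for illustration; this is not the authors' system.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                  # sampling rate in Hz (illustrative)
n_t = int(0.8 * fs)                       # 800 ms epochs
t = np.arange(n_t) / fs
# a synthetic P300: positive peak near 300 ms, present only on rare targets
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def noisy_epochs(n):
    return rng.normal(0.0, 5e-6, (n, n_t))  # background EEG as Gaussian noise

rare_epochs = p300 + noisy_epochs(40)     # ~20% oddball trials
frequent_epochs = noisy_epochs(160)

# averaging attenuates noise by sqrt(n_trials), leaving the time-locked ERP
diff_wave = rare_epochs.mean(axis=0) - frequent_epochs.mean(axis=0)
peak = np.argmax(diff_wave)
latency_ms = 1000.0 * t[peak]
amplitude_uV = 1e6 * diff_wave[peak]
```

    The same two numbers, peak amplitude and peak latency, are what the record above reports for each stimulus-combination condition.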

  12. Relation between derived-band auditory brainstem response latencies and behavioral frequency selectivity

    DEFF Research Database (Denmark)

    Strelcyk, Olaf; Christoforidis, Dimitrios; Dau, Torsten

    2009-01-01

    response times. For the same listeners, auditory-filter bandwidths at 2 kHz were estimated using a behavioral notched-noise masking paradigm. Generally, shorter derived-band latencies were observed for the HI than for the NH listeners. Only at low click sensation levels, prolonged latencies were obtained...

  13. Separate representation of stimulus frequency, intensity, and duration in auditory sensory memory: an event-related potential and dipole-model analysis.

    Science.gov (United States)

    Giard, M H; Lavikainen, J; Reinikainen, K; Perrin, F; Bertrand, O; Pernier, J; Näätänen, R

    1995-01-01

    The present study analyzed the neural correlates of acoustic stimulus representation in echoic sensory memory. The neural traces of auditory sensory memory were indirectly studied by using the mismatch negativity (MMN), an event-related potential component elicited by a change in a repetitive sound. The MMN is assumed to reflect change detection in a comparison process between the sensory input from a deviant stimulus and the neural representation of repetitive stimuli in echoic memory. The scalp topographies of the MMNs elicited by pure tones deviating from standard tones by either frequency, intensity, or duration varied according to the type of stimulus deviance, indicating that the MMNs for different attributes originate, at least in part, from distinct neural populations in the auditory cortex. This result was supported by dipole-model analysis. If the MMN generator process occurs where the stimulus information is stored, these findings strongly suggest that the frequency, intensity, and duration of acoustic stimuli have a separate neural representation in sensory memory.

  14. Classification of Single-Trial Auditory Events Using Dry-Wireless EEG During Real and Motion Simulated Flight

    Directory of Open Access Journals (Sweden)

    Daniel eCallan

    2015-02-01

    Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluated the classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open-cockpit biplane, to determine whether the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual motor processes involved with piloting the plane. Classification was based on identifying the presentation of a chirp sound versus silent periods. We also assessed whether independent component analysis (ICA) and Kalman filtering could enhance classification performance by separating brain activity related to the auditory event from other non-task-related brain activity and artifacts. The results of permutation testing revealed that single-trial classification of the presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance was achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs 78.3%), Platform On (73.1% vs 71.6%), Biplane Engine Off (81.1% vs 77.4%), and Biplane Engine On (79.2% vs 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts and achieve the good single-trial classification performance that is necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.
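    The single-trial approach described above can be sketched generically: unmix multichannel epochs with ICA, then classify event-present versus event-absent trials from the component time courses. Everything below (channel count, the evoked waveform, the LDA classifier) is an invented stand-in on simulated data, not the paper's actual pipeline, which additionally evaluated Kalman filtering.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_ch, n_t = 200, 8, 50
labels = np.repeat([1, 0], [100, 100])      # 1 = chirp present, 0 = silence
erp = np.exp(-((np.arange(n_t) - 25) ** 2) / 30.0)  # evoked waveform
mixing = rng.normal(size=n_ch)              # how the source projects to channels

X = rng.normal(size=(n_trials, n_ch, n_t))  # background EEG + artifacts as noise
X[labels == 1] += 2.0 * np.outer(mixing, erp)

# ICA unmixes the channel data into (approximately) independent sources,
# concentrating the auditory response into a few components
flat = X.transpose(0, 2, 1).reshape(-1, n_ch)       # (trials*time, channels)
ica = FastICA(n_components=n_ch, random_state=0, max_iter=1000)
sources = ica.fit_transform(flat).reshape(n_trials, n_t, n_ch)

# classify each trial from its component time courses
feats = sources.reshape(n_trials, -1)
scores = cross_val_score(LinearDiscriminantAnalysis(), feats, labels, cv=5)
```

    Cross-validated accuracy well above the 50% chance level on held-out trials is the same criterion the permutation testing in the study evaluates.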

  15. Differential activity in left inferior frontal gyrus for pseudowords and real words: an event-related fMRI study on auditory lexical decision.

    Science.gov (United States)

    Xiao, Zhuangwei; Zhang, John X; Wang, Xiaoyi; Wu, Renhua; Hu, Xiaoping; Weng, Xuchu; Tan, Li Hai

    2005-06-01

    Following Newman and Twieg and others, we used a fast event-related functional magnetic resonance imaging (fMRI) design and contrasted the lexical processing of pseudowords and real words. Participants carried out an auditory lexical decision task on a list of randomly intermixed real and pseudo Chinese two-character (or two-syllable) words. The pseudowords were constructed by recombining constituent characters of the real words to control for sublexical code properties. Processing of pseudowords and real words activated a highly comparable network of brain regions, including bilateral inferior frontal gyrus, superior and middle temporal gyri, calcarine and lingual gyri, and left supramarginal gyrus. Mirroring a behavioral lexical effect, left inferior frontal gyrus (IFG) was significantly more activated for pseudowords than for real words. This result disconfirms a popular view that this area plays a role in grapheme-to-phoneme conversion, as such a conversion process was unnecessary in our task with auditory stimulus presentation. An alternative view was supported that attributes increased activity in left IFG for pseudowords to general processes in decision making, specifically in making positive versus negative responses. Activation in left supramarginal gyrus was of a much larger volume for real words than for pseudowords, suggesting a role for this region in the representation of phonological or semantic information for two-character Chinese words at the lexical level.

  16. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.

  17. Hippocampal P3-Like Auditory Event-Related Potentials are Disrupted in a Rat Model of Cholinergic Degeneration in Alzheimer's Disease: Reversal by Donepezil Treatment

    DEFF Research Database (Denmark)

    Laursen, Bettina; Mørk, Arne; Kristiansen, Uffe

    2014-01-01

    P300 (P3) event-related potentials (ERPs) have been suggested to be an endogenous marker of cognitive function and auditory oddball paradigms are frequently used to evaluate P3 ERPs in clinical settings. Deficits in P3 amplitude and latency reflect some of the neurological dysfunctions related...... cholinergic degeneration induced by SAP. SAP-lesioned rats may constitute a suitable model to test the efficacy of pro-cognitive substances in an applied experimental setup....

  18. Speaking Two Languages Enhances an Auditory but Not a Visual Neural Marker of Cognitive Inhibition

    Directory of Open Access Journals (Sweden)

    Mercedes Fernandez

    2014-09-01

    The purpose of the present study was to replicate and extend our original findings of enhanced neural inhibitory control in bilinguals. We compared English monolinguals to Spanish/English bilinguals on a non-linguistic, auditory Go/NoGo task while recording event-related brain potentials. New to this study was the visual Go/NoGo task, which we included to investigate whether enhanced neural inhibition in bilinguals extends from the auditory to the visual modality. Results confirmed our original findings and revealed greater inhibition in bilinguals compared to monolinguals. As predicted, compared to monolinguals, bilinguals showed increased N2 amplitude during the auditory NoGo trials, which required inhibitory control, but no differences during the Go trials, which required a behavioral response and no inhibition. Interestingly, during the visual Go/NoGo task, event-related brain potentials did not distinguish the two groups, and behavioral responses were similar between the groups regardless of task modality. Thus, only auditory trials that required inhibitory control revealed between-group differences indicative of greater neural inhibition in bilinguals. These results show that experience-dependent neural changes associated with bilingualism are specific to the auditory modality and that the N2 event-related brain potential is a sensitive marker of this plasticity.

  19. [Differential effects of attention deficit/hyperactivity disorder subtypes in event-related potentials].

    Science.gov (United States)

    Tamayo-Orrego, Lukas; Osorio Forero, Alejandro; Quintero Giraldo, Lina Paola; Parra Sánchez, José Hernán; Varela, Vilma; Restrepo, Francia

    2015-01-01

    To better understand the neurophysiological substrates of attention deficit/hyperactivity disorder (ADHD), a study of event-related potentials (ERPs) was performed in Colombian patients with inattentive and combined ADHD. A case-control, cross-sectional study was designed. The sample was composed of 180 subjects between 5 and 15 years of age (mean, 9.25±2.6), from local schools in Manizales. The sample was divided equally into ADHD and control groups, and the subjects were paired by age and gender. The diagnosis was made using the DSM-IV-TR criteria, the Conners and WISC-III tests, a psychiatric interview (MINI-KID), and a medical evaluation. ERPs were recorded in a visual and auditory passive oddball paradigm. Latency and amplitude of the N100, N200 and P300 components for common and rare stimuli were used for statistical comparisons. ADHD subjects showed differences in N200 amplitude and P300 latency in the auditory task. The N200 amplitude was reduced in response to visual stimuli. ADHD subjects with combined symptoms showed a delayed P300 in response to auditory stimuli, whereas inattentive subjects exhibited differences in the amplitude of N100 and N200. Combined ADHD patients showed longer N100 latency and smaller N200-P300 amplitude compared to inattentive ADHD subjects. The results show differences in event-related potentials between combined and inattentive ADHD subjects. Copyright © 2014 Asociación Colombiana de Psiquiatría. Published by Elsevier España. All rights reserved.

  20. Auditory processing in autism spectrum disorder

    DEFF Research Database (Denmark)

    Vlaskamp, Chantal; Oranje, Bob; Madsen, Gitte Falcher

    2017-01-01

    Children with autism spectrum disorders (ASD) often show changes in (automatic) auditory processing. Electrophysiology provides a method to study auditory processing, by investigating event-related potentials such as mismatch negativity (MMN) and P3a-amplitude. However, findings on MMN in autism...... a hyper-responsivity at the attentional level. In addition, as similar MMN deficits are found in schizophrenia, these MMN results may explain some of the frequently reported increased risk of children with ASD to develop schizophrenia later in life. Autism Res 2017, 10: 1857–1865....

  1. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the levels of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, it seems caffeine can change conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2 and 3 mg/kg BW caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed by Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared to the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after 2 and 3 mg/kg BW caffeine consumption. Wave I latency significantly decreased after 3 mg/kg BW caffeine consumption (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blockade brings about changes in conduction in the central auditory pathway.

  2. Meaning From Environmental Sounds: Types of Signal-Referent Relations and Their Effect on Recognizing Auditory Icons

    Science.gov (United States)

    Keller, Peter; Stevens, Catherine

    2004-01-01

    This article addresses the learnability of auditory icons, that is, environmental sounds that refer either directly or indirectly to meaningful events. Direct relations use the sound made by the target event whereas indirect relations substitute a surrogate for the target. Across 3 experiments, different indirect relations (ecological, in which…

  3. Altered auditory BOLD response to conspecific birdsong in zebra finches with stuttered syllables.

    Directory of Open Access Journals (Sweden)

    Henning U Voss

    2010-12-01

    How well a songbird learns a song appears to depend on the formation of a robust auditory template of its tutor's song. Using functional magnetic resonance neuroimaging, we examine auditory responses in two groups of zebra finches that differ in the type of song they sing after being tutored by birds producing stuttering-like syllable repetitions in their songs. We find that birds that learn to produce the stuttered syntax show attenuated blood oxygenation level-dependent (BOLD) responses to the tutor's song, and more pronounced responses to conspecific song, primarily in the auditory area field L of the avian forebrain, when compared to birds that produce normal song. These findings are consistent with the presence of a sensory song template critical for song learning in auditory areas of the zebra finch forebrain. In addition, they suggest a relationship between an altered response related to familiarity and/or saliency of song stimuli and the production of variant songs with stuttered syllables.

  4. Attentional modulation of auditory steady-state responses.

    Science.gov (United States)

    Mahajan, Yatin; Davis, Chris; Kim, Jeesun

    2014-01-01

    Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.
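    The frequency-tagging analysis in the record above rests on a simple idea: the ASSR appears as a spectral peak at each stream's modulation frequency, so attention effects can be read off as changes in that peak relative to the noise floor. Here is a minimal sketch on simulated data; the signal amplitude, noise level, and recording length are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, dur = 250, 10.0                        # Hz, seconds (illustrative)
t = np.arange(int(fs * dur)) / fs
# simulated EEG: a 40 Hz steady-state response buried in broadband noise
eeg = 0.5 * np.sin(2 * np.pi * 40.0 * t) + rng.normal(0.0, 1.0, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

def amp_at(f_hz):
    """Spectral amplitude at the bin closest to f_hz."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

# ASSR strength: tagged-frequency amplitude vs. neighboring (noise) bins
assr = amp_at(40.0)
noise_floor = np.mean([amp_at(40.0 + d) for d in (-1.5, -1.0, 1.0, 1.5)])
snr = assr / noise_floor
```

    In an attention study, the same `assr` measure would be computed at each tagged modulation frequency (e.g., 16, 23.5, 32.5 and 40 Hz) and compared across attend and ignore conditions.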

  5. Comparison of effects of valsartan and amlodipine on cognitive functions and auditory p300 event-related potentials in elderly hypertensive patients.

    Science.gov (United States)

    Katada, Eiichi; Uematsu, Norihiko; Takuma, Yuko; Matsukawa, Noriyuki

    2014-01-01

    We compared the antihypertensive effect of valsartan (VAL) and amlodipine (AML) treatments in elderly hypertensive patients by examining the long-term changes in cognitive function and auditory P300 event-related potentials. We enrolled 20 outpatients, including 12 men and 8 women in the age group of 56 to 81 years, who had mild to moderate essential hypertension. The subjects were randomly allocated to receive either 80 mg VAL once a day (10 patients) or 5 mg AML once a day (10 patients). Neuropsychological assessments and auditory P300 event-related potentials were obtained before initiation of VAL or AML treatment and after 6 months of treatment. Neuropsychological function was assessed with the Mini-Mental State Examination and the verbal fluency, word-list memory, word-list recall, word-list recognition, and Trails B tests. Both groups showed significantly reduced blood pressure after 6 months of treatment, and the intergroup difference was not significant. The mean baseline Mini-Mental State Examination scores of the VAL and AML groups were not significantly different. Amlodipine treatment did not significantly affect any test score, but VAL treatment significantly increased the word-list memory and word-list recall test scores. Valsartan, but not AML, significantly reduced the mean P300 latency after 6 months. These results suggest that VAL exerts a positive effect on cognitive functions, independent of its antihypertensive effect.

  6. Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children.

    Science.gov (United States)

    Badcock, Nicholas A; Preece, Kathryn A; de Wit, Bianca; Glenn, Katharine; Fieder, Nora; Thie, Johnson; McArthur, Genevieve

    2015-01-01

    Background. Previous work has demonstrated that a commercial gaming electroencephalography (EEG) system, Emotiv EPOC, can be adjusted to provide valid auditory event-related potentials (ERPs) in adults that are comparable to ERPs recorded by a research-grade EEG system, Neuroscan. The aim of the current study was to determine if the same was true for children. Method. An adapted Emotiv EPOC system and Neuroscan system were used to make simultaneous EEG recordings in nineteen 6- to 12-year-old children under "passive" and "active" listening conditions. In the passive condition, children were instructed to watch a silent DVD and ignore 566 standard (1,000 Hz) and 100 deviant (1,200 Hz) tones. In the active condition, they listened to the same stimuli, and were asked to count the number of 'high' (i.e., deviant) tones. Results. Intraclass correlations (ICCs) indicated that the ERP morphology recorded with the two systems was very similar for the P1, N1, P2, N2, and P3 ERP peaks (r = .82 to .95) in both passive and active conditions, and less so, though still strong, for the mismatch negativity ERP component (MMN; r = .67 to .74). There were few differences between peak amplitude and latency estimates for the two systems. Conclusions. An adapted EPOC EEG system can be used to index children's late auditory ERP peaks (i.e., P1, N1, P2, N2, P3) and their MMN ERP component.
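
    The between-system comparison above rests on intraclass correlations computed over the ERP waveforms. As a rough illustration of that computation, the following sketch implements a textbook ICC(2,1) (two-way random effects, absolute agreement, single measure) and applies it to a toy pair of waveforms; the "Neuroscan" and "EPOC" signals here are synthetic stand-ins, not the study's data.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement.
    ratings: (n_targets, k_raters) array; here each time point is a
    target and each EEG system is a rater."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # between-target mean square
    msc = ss_cols / (k - 1)            # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy waveforms: a decaying oscillation and a slightly attenuated,
# noisier copy (mimicking a second recording system).
t = np.linspace(0, 0.5, 250)
neuroscan = 2 * np.sin(2 * np.pi * 4 * t) * np.exp(-3 * t)
rng = np.random.default_rng(0)
epoc = 0.9 * neuroscan + 0.1 * rng.standard_normal(t.size)
icc = icc_2_1(np.column_stack([neuroscan, epoc]))
assert 0.8 < icc <= 1.0  # highly similar morphology, as in the study
```
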

  7. Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children

    Directory of Open Access Journals (Sweden)

    Nicholas A. Badcock

    2015-04-01

    Full Text Available Background. Previous work has demonstrated that a commercial gaming electroencephalography (EEG) system, Emotiv EPOC, can be adjusted to provide valid auditory event-related potentials (ERPs) in adults that are comparable to ERPs recorded by a research-grade EEG system, Neuroscan. The aim of the current study was to determine if the same was true for children. Method. An adapted Emotiv EPOC system and Neuroscan system were used to make simultaneous EEG recordings in nineteen 6- to 12-year-old children under “passive” and “active” listening conditions. In the passive condition, children were instructed to watch a silent DVD and ignore 566 standard (1,000 Hz) and 100 deviant (1,200 Hz) tones. In the active condition, they listened to the same stimuli, and were asked to count the number of ‘high’ (i.e., deviant) tones. Results. Intraclass correlations (ICCs) indicated that the ERP morphology recorded with the two systems was very similar for the P1, N1, P2, N2, and P3 ERP peaks (r = .82 to .95) in both passive and active conditions, and less so, though still strong, for the mismatch negativity ERP component (MMN; r = .67 to .74). There were few differences between peak amplitude and latency estimates for the two systems. Conclusions. An adapted EPOC EEG system can be used to index children’s late auditory ERP peaks (i.e., P1, N1, P2, N2, P3) and their MMN ERP component.

  8. Direct recordings from the auditory cortex in a cochlear implant user.

    Science.gov (United States)

    Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A

    2013-06-01

    Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings.

  9. Effects of voice harmonic complexity on ERP responses to pitch-shifted auditory feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R

    2011-12-01

    The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than those for pure tones. These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  10. Behavioral and EEG evidence for auditory memory suppression

    Directory of Open Access Journals (Sweden)

    Maya Elizabeth Cano

    2016-03-01

    Full Text Available The neural basis of motivated forgetting using the Think/No-Think (TNT) paradigm is receiving increased attention with a particular focus on the mechanisms that enable memory suppression. However, most TNT studies have been limited to the visual domain. To assess whether and to what extent direct memory suppression extends across sensory modalities, we examined behavioral and electroencephalographic (EEG) effects of auditory Think/No-Think in healthy young adults by adapting the TNT paradigm to the auditory modality. Behaviorally, suppression of memory strength was indexed by prolonged response times during the retrieval of subsequently remembered No-Think words. We examined task-related EEG activity of both attempted memory retrieval and inhibition of a previously learned target word during the presentation of its paired associate. Event-related EEG responses revealed two main findings: (1) a centralized Think > No-Think positivity during auditory word presentation (from approximately 0-500 ms), and (2) a sustained Think positivity over parietal electrodes beginning at approximately 600 ms reflecting the memory retrieval effect which was significantly reduced for No-Think words. In addition, word-locked theta (4-8 Hz) power was initially greater for No-Think compared to Think during auditory word presentation over fronto-central electrodes. This was followed by a posterior theta increase indexing successful memory retrieval in the Think condition. The observed event-related potential pattern and theta power analysis are similar to that reported in visual Think/No-Think studies and support a modality non-specific mechanism for memory inhibition. The EEG data also provide evidence supporting differing roles and time courses of frontal and parietal regions in the flexible control of auditory memory.

  11. Behavioral and EEG Evidence for Auditory Memory Suppression.

    Science.gov (United States)

    Cano, Maya E; Knight, Robert T

    2016-01-01

    The neural basis of motivated forgetting using the Think/No-Think (TNT) paradigm is receiving increased attention with a particular focus on the mechanisms that enable memory suppression. However, most TNT studies have been limited to the visual domain. To assess whether and to what extent direct memory suppression extends across sensory modalities, we examined behavioral and electroencephalographic (EEG) effects of auditory TNT in healthy young adults by adapting the TNT paradigm to the auditory modality. Behaviorally, suppression of memory strength was indexed by prolonged response time (RTs) during the retrieval of subsequently remembered No-Think words. We examined task-related EEG activity of both attempted memory retrieval and inhibition of a previously learned target word during the presentation of its paired associate. Event-related EEG responses revealed two main findings: (1) a centralized Think > No-Think positivity during auditory word presentation (from approximately 0-500 ms); and (2) a sustained Think positivity over parietal electrodes beginning at approximately 600 ms reflecting the memory retrieval effect which was significantly reduced for No-Think words. In addition, word-locked theta (4-8 Hz) power was initially greater for No-Think compared to Think during auditory word presentation over fronto-central electrodes. This was followed by a posterior theta increase indexing successful memory retrieval in the Think condition. The observed event-related potential pattern and theta power analysis are similar to that reported in visual TNT studies and support a modality non-specific mechanism for memory inhibition. The EEG data also provide evidence supporting differing roles and time courses of frontal and parietal regions in the flexible control of auditory memory.
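
    The theta (4-8 Hz) power measure reported above can be illustrated with a minimal numpy sketch that sums spectral power in the band. The sampling rate and the synthetic "trial" below are assumptions for illustration only, not the study's recording parameters.

```python
import numpy as np

def band_power(eeg, fs, band=(4.0, 8.0)):
    """Summed spectral power in a frequency band (default: theta, 4-8 Hz)."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2 / eeg.size
    freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spec[mask].sum()

# Synthetic 2 s trial at 250 Hz: a 6 Hz theta oscillation embedded in
# broadband noise, versus a noise-only trial.
fs = 250
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
theta_trial = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(t.size)
noise_trial = 0.2 * rng.standard_normal(t.size)
# The theta-containing trial carries far more 4-8 Hz power:
assert band_power(theta_trial, fs) > 5 * band_power(noise_trial, fs)
```
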

  12. Category-specific responses to faces and objects in primate auditory cortex

    Directory of Open Access Journals (Sweden)

    Kari L Hoffman

    2008-03-01

    Full Text Available Auditory and visual signals often occur together, and the two sensory channels are known to influence each other to facilitate perception. The neural basis of this integration is not well understood, although other forms of multisensory influences have been shown to occur at surprisingly early stages of processing in cortex. Primary visual cortex neurons can show frequency-tuning to auditory stimuli, and auditory cortex responds selectively to certain somatosensory stimuli, supporting the possibility that complex visual signals may modulate early stages of auditory processing. To elucidate which auditory regions, if any, are responsive to complex visual stimuli, we recorded from auditory cortex and the superior temporal sulcus while presenting visual stimuli consisting of various objects, neutral faces, and facial expressions generated during vocalization. Both objects and conspecific faces elicited robust field potential responses in auditory cortex sites, but the responses varied by category: both neutral and vocalizing faces had a highly consistent negative component (N100) followed by a broader positive component (P180) whereas object responses were more variable in time and shape, but could be discriminated consistently from the responses to faces. The face response did not vary within the face category, i.e., for expressive vs. neutral face stimuli. The presence of responses for both objects and neutral faces suggests that auditory cortex receives highly informative visual input that is not restricted to those stimuli associated with auditory components. These results reveal selectivity for complex visual stimuli in a brain region conventionally described as non-visual unisensory cortex.

  13. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    Science.gov (United States)

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectible until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks renders a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency were obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative to estimating frequency selectivity in mice that is well-suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
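
    The logic of the notched-noise method can be sketched numerically: assuming an idealized rectangular auditory filter, the masker power admitted by the filter (and hence the masked threshold) falls as the spectral notch widens, until the notch exceeds the filter bandwidth. The filter half-bandwidth and notch widths below are hypothetical values for illustration, not estimates from the study.

```python
import numpy as np

def noise_in_filter(notch_half_width, filter_half_width, n0=1.0):
    """Masker power admitted by a rectangular auditory filter of the
    given half-bandwidth (Hz) when the noise has a spectral notch of
    +/- notch_half_width around the probe tone. n0 is the flat noise
    spectral density; both noise bands contribute."""
    admitted = max(filter_half_width - notch_half_width, 0.0)
    return 2 * n0 * admitted

# Hypothetical filter with a 500 Hz half-bandwidth, probed with
# progressively wider notches (masked threshold tracks this power):
true_half = 500.0
notches = np.array([0.0, 100.0, 200.0, 400.0, 800.0])
power = np.array([noise_in_filter(n, true_half) for n in notches])
# Threshold improves as the notch widens, then bottoms out once the
# notch is wider than the filter:
assert power[0] > power[1] > power[2] > power[3] > power[4] == 0.0
```

    Fitting measured thresholds against this predicted power as a function of notch width is what yields the filter-shape and bandwidth estimates; real analyses use smoothly rounded (e.g. roex) filters rather than the rectangle assumed here.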

  14. Auditory sensory processing deficits in sensory gating and mismatch negativity-like responses in the social isolation rat model of schizophrenia

    DEFF Research Database (Denmark)

    Witten, Louise; Oranje, Bob; Mørk, Arne

    2014-01-01

    Patients with schizophrenia exhibit disturbances in information processing. These disturbances can be investigated with different paradigms of auditory event related potentials (ERP), such as sensory gating in a double click paradigm (P50 suppression) and the mismatch negativity (MMN) component...... in an auditory oddball paradigm. The aim of the current study was to test if rats subjected to social isolation, which is believed to induce some changes that mimic features of schizophrenia, display alterations in sensory gating and MMN-like response. Male Lister-Hooded rats were separated into two groups; one...... group socially isolated (SI) for 8 weeks and one group housed (GH). Both groups were then tested in a double click sensory gating paradigm and an auditory oddball (MMN-like) paradigm. It was observed that the SI animals showed reduced sensory gating of the cortical N1 amplitude. Furthermore...

  15. (Amusicality in Williams syndrome: Examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Directory of Open Access Journals (Sweden)

    Miriam eLense

    2013-08-01

    Full Text Available Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.

  16. Separating acoustic deviance from novelty during the first year of life: a review of event-related potential evidence

    Science.gov (United States)

    Kushnerenko, Elena V.; Van den Bergh, Bea R. H.; Winkler, István

    2013-01-01

    Orienting to salient events in the environment is a first step in the development of attention in young infants. Electrophysiological studies have indicated that in newborns and young infants, sounds with widely distributed spectral energy, such as noise and various environmental sounds, as well as sounds widely deviating from their context elicit an event-related potential (ERP) similar to the adult P3a response. We discuss how the maturation of event-related potentials parallels the process of the development of passive auditory attention during the first year of life. Behavioral studies have indicated that the neonatal orientation to high-energy stimuli gradually changes to attending to genuine novelty and other significant events by approximately 9 months of age. In accordance with these changes, in newborns, the ERP response to large acoustic deviance is dramatically larger than that to small and moderate deviations. This ERP difference, however, rapidly decreases within the first months of life, and the differentiation of the ERP response to genuine novelty from that to spectrally rich but repeatedly presented sounds commences during the same period. The relative decrease of the response amplitudes elicited by high-energy stimuli may reflect development of an inhibitory brain network suppressing the processing of uninformative stimuli. Based on data obtained from healthy full-term and pre-term infants as well as from infants at risk for various developmental problems, we suggest that the electrophysiological indices of the processing of acoustic and contextual deviance may be indicative of the functioning of auditory attention, a crucial prerequisite of learning and language development. PMID:24046757

  17. Event-related brain potential correlates of human auditory sensory memory-trace formation.

    Science.gov (United States)

    Haenschel, Corinna; Vernon, David J; Dwivedi, Prabuddh; Gruzelier, John H; Baldeweg, Torsten

    2005-11-09

    The event-related potential (ERP) component mismatch negativity (MMN) is a neural marker of human echoic memory. MMN is elicited by deviant sounds embedded in a stream of frequent standards, reflecting the deviation from an inferred memory trace of the standard stimulus. The strength of this memory trace is thought to be proportional to the number of repetitions of the standard tone, visible as the progressive enhancement of MMN with number of repetitions (MMN memory-trace effect). However, no direct ERP correlates of the formation of echoic memory traces are currently known. This study set out to investigate changes in ERPs to different numbers of repetitions of standards, delivered in a roving-stimulus paradigm in which the frequency of the standard stimulus changed randomly between stimulus trains. Normal healthy volunteers (n = 40) were engaged in two experimental conditions: during passive listening and while actively discriminating changes in tone frequency. As predicted, MMN increased with increasing number of standards. However, this MMN memory-trace effect was caused mainly by enhancement with stimulus repetition of a slow positive wave from 50 to 250 ms poststimulus in the standard ERP, which is termed here "repetition positivity" (RP). This RP was recorded from frontocentral electrodes when participants were passively listening to or actively discriminating changes in tone frequency. RP may represent a human ERP correlate of rapid and stimulus-specific adaptation, a candidate neuronal mechanism underlying sensory memory formation in the auditory cortex.
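
    The quantities discussed here, the deviant-minus-standard difference wave and its growth with the number of standard repetitions, can be sketched with synthetic ERPs. All waveform shapes and amplitudes below are toy assumptions chosen only to mimic the qualitative memory-trace effect (a "repetition positivity" in the standard ERP that deepens the MMN), not fitted to any recorded data.

```python
import numpy as np

def mmn(deviant_erp, standard_erp):
    """Mismatch negativity as the deviant-minus-standard difference wave."""
    return deviant_erp - standard_erp

t = np.linspace(0, 0.4, 200)                       # 0-400 ms epoch
base = -0.5 * np.exp(-((t - 0.15) ** 2) / 0.002)   # N1-like deflection
deviant = base - 1.0 * np.exp(-((t - 0.18) ** 2) / 0.002)

def rep_pos(n_reps):
    """Slow positivity (50-250 ms) growing with standard repetitions."""
    return 0.1 * n_reps * np.exp(-((t - 0.15) ** 2) / 0.01)

mmn_few = mmn(deviant, base + rep_pos(2))   # short standard train
mmn_many = mmn(deviant, base + rep_pos(6))  # long standard train
# More repetitions of the standard -> larger (more negative) MMN peak:
assert mmn_many.min() < mmn_few.min()
```

    The sketch makes the paper's point concrete: the MMN "memory-trace effect" can arise entirely from repetition-driven change in the standard ERP rather than from the deviant response itself.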

  18. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music.

    Science.gov (United States)

    Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.

  19. Attentional Modulation of Auditory Steady-State Responses

    Science.gov (United States)

    Mahajan, Yatin; Davis, Chris; Kim, Jeesun

    2014-01-01

    Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex. PMID:25334021

  20. Attentional modulation of auditory steady-state responses.

    Directory of Open Access Journals (Sweden)

    Yatin Mahajan

    Full Text Available Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.

  1. Dose-dependent suppression by ethanol of transient auditory 40-Hz response.

    Science.gov (United States)

    Jääskeläinen, I P; Hirvonen, J; Saher, M; Pekkonen, E; Sillanaukee, P; Näätänen, R; Tiitinen, H

    2000-02-01

    Acute alcohol (ethanol) challenge is known to induce various cognitive disturbances, yet the neural basis of the effect is poorly known. The auditory transient evoked gamma-band (40-Hz) oscillatory responses have been suggested to be associated with various perceptual and cognitive functions in humans; however, alcohol effects on auditory 40-Hz responses have not been investigated to date. The objective of the study was to test the dose-related impact of alcohol on auditory transient evoked 40-Hz responses during a selective-attention task. Ten healthy social drinkers ingested, in four separate sessions, 0.00, 0.25, 0.50, or 0.75 g/kg of 10% (v/v) alcohol solution. The order of the sessions was randomized and a double-blind procedure was employed. During a selective attention task, 300-Hz standard and 330-Hz deviant tones were presented to the left ear, and 1000-Hz standards and 1100-Hz deviants to the right ear of the subjects (P=0.425 for each standard, P=0.075 for each deviant). The subjects attended to a designated ear, and were to detect the deviants therein while ignoring tones to the other ear. The auditory transient evoked 40-Hz responses elicited by both the attended and unattended standard tones were significantly suppressed by the 0.50 and 0.75 g/kg alcohol doses. Alcohol suppresses auditory transient evoked 40-Hz oscillations even at moderate blood alcohol concentrations. Given the putative role of gamma-band oscillations in cognition, this finding could be associated with certain alcohol-induced cognitive deficits.
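
    The dichotic oddball sequence described above (per ear, standards at P=0.425 and deviants at P=0.075) can be sketched as a weighted random draw over the four stimuli. The trial count and seed below are arbitrary choices for illustration.

```python
import random

def oddball_sequence(n_trials, seed=0):
    """Randomized dichotic oddball sequence with the probabilities
    stated above: per ear, standard P=0.425 and deviant P=0.075."""
    stimuli = [("left", 300), ("left", 330),     # standard, deviant (Hz)
               ("right", 1000), ("right", 1100)]
    weights = [0.425, 0.075, 0.425, 0.075]
    rng = random.Random(seed)
    return rng.choices(stimuli, weights=weights, k=n_trials)

seq = oddball_sequence(10000)
p_left_dev = sum(1 for stim in seq if stim == ("left", 330)) / len(seq)
# Empirical left-ear deviant rate should be near the nominal 7.5%:
assert 0.05 < p_left_dev < 0.10
```
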

  2. Magnetoencephalographic Imaging of Auditory and Somatosensory Cortical Responses in Children with Autism and Sensory Processing Dysfunction

    Directory of Open Access Journals (Sweden)

    Carly Demopoulos

    2017-05-01

    Full Text Available This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain.

  3. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    OpenAIRE

    Matusz, P.J.; Thelen, A.; Amrein, S.; Geiser, E.; Anken, J.; Murray, M.M.

    2015-01-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a ...

  4. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise.

    Science.gov (United States)

    White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina

    2015-10-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions-children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features-even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response

  5. Effects of white noise on event-related potentials in somatosensory Go/No-go paradigms.

    Science.gov (United States)

    Ohbayashi, Wakana; Kakigi, Ryusuke; Nakata, Hiroki

    2017-09-06

Exposure to auditory white noise has been shown to facilitate human cognitive function. This phenomenon is termed stochastic resonance, and a moderate amount of auditory noise has been suggested to benefit individuals in hypodopaminergic states. The present study investigated the effects of white noise on the N140 and P300 components of event-related potentials in somatosensory Go/No-go paradigms. A Go or No-go stimulus was presented to the second or fifth digit of the left hand, respectively, at the same probability. Participants performed somatosensory Go/No-go paradigms while hearing three different white noise levels (45, 55, and 65 dB conditions). The peak amplitudes of Go-P300 and No-go-P300 in ERP waveforms were significantly larger under the 55 dB condition than under the 45 and 65 dB conditions. White noise did not affect the peak latency of N140 or P300, or the peak amplitude of N140. Behavioral data for reaction time, SD of reaction time, and error rates showed no effect of white noise. This is the first event-related potential study to show that exposure to auditory white noise at 55 dB enhanced the amplitude of P300 during Go/No-go paradigms, reflecting changes in the neural activation of response execution and inhibition processing.
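The stochastic resonance effect invoked above can be illustrated with a minimal toy simulation (purely illustrative; the signal strength, threshold, and noise levels below are invented, not taken from the study): a fixed subthreshold signal is detected most reliably when an intermediate amount of noise is added.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_rate(noise_sd, signal=0.8, threshold=1.0, n_trials=2000):
    """Fraction of trials on which the (sub-threshold) signal plus
    additive Gaussian noise exceeds the detection threshold."""
    noise = rng.normal(0.0, noise_sd, n_trials)
    return np.mean(signal + noise > threshold)

# Sensitivity (hit rate minus false-alarm rate) peaks at a moderate
# noise level: too little noise never lifts the signal over threshold,
# too much noise swamps it.
for sd in (0.05, 0.4, 3.0):
    hits = detection_rate(sd)                       # signal present
    false_alarms = detection_rate(sd, signal=0.0)   # signal absent
    print(f"noise sd {sd:>4}: hits - FAs = {hits - false_alarms:.2f}")
```

The inverted-U shape in the printed sensitivities is the signature of stochastic resonance.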

  6. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Science.gov (United States)

    Lense, Miriam D.; Shivers, Carolyn M.; Dykens, Elisabeth M.

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia. PMID:23966965

  7. Musical experience, auditory perception and reading-related skills in children.

    Science.gov (United States)

    Banai, Karen; Ahissar, Merav

    2013-01-01

    The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case.

  8. Musical experience, auditory perception and reading-related skills in children.

    Directory of Open Access Journals (Sweden)

    Karen Banai

Full Text Available BACKGROUND: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. METHODOLOGY/PRINCIPAL FINDINGS: Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. CONCLUSIONS/SIGNIFICANCE: Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and

  9. Auditory-somatosensory temporal sensitivity improves when the somatosensory event is caused by voluntary body movement

    Directory of Open Access Journals (Sweden)

    Norimichi Kitagawa

    2016-12-01

Full Text Available When we actively interact with the environment, it is crucial that we perceive a precise temporal relationship between our own actions and sensory effects to guide our body movements. Thus, we hypothesized that voluntary movements improve perceptual sensitivity to the temporal disparity between auditory and movement-related somatosensory events compared to when they are delivered passively to sensory receptors. In the voluntary condition, participants voluntarily tapped a button, and a noise burst was presented at various onset asynchronies relative to the button press. The participants made either 'sound-first' or 'touch-first' responses. We found that the performance of temporal order judgment (TOJ) in the voluntary condition (as indexed by the just noticeable difference) was significantly better (M=42.5 ms ±3.8 s.e.m.) than that when their finger was passively stimulated (passive condition: M=66.8 ms ±6.3 s.e.m.). We further examined whether the performance improvement with voluntary action can be attributed to the prediction of the timing of the stimulation from sensory cues (sensory-based prediction), kinesthetic cues contained in voluntary action, and/or to the prediction of stimulation timing from the efference copy of the motor command (motor-based prediction). When the participant’s finger was moved passively to press the button (involuntary condition) and when three noise bursts were presented before the target burst at regular intervals (predictable condition), the TOJ performance was not improved from that in the passive condition. These results suggest that the improvement in sensitivity to temporal disparity between somatosensory and auditory events caused by the voluntary action cannot be attributed to sensory-based prediction and kinesthetic cues. Rather, the prediction from the efference copy of the motor command would be crucial for improving the temporal sensitivity.
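The just noticeable difference (JND) reported above is conventionally obtained by fitting a psychometric function to the proportion of 'sound-first' responses across onset asynchronies. A minimal sketch of that fit, using invented response proportions (not the study's data) and a brute-force grid search in place of a maximum-likelihood fit:

```python
import numpy as np
from math import erf, sqrt

# Invented proportions of "sound-first" responses at each audio-tactile
# onset asynchrony (ms; negative = sound presented first).
soas = np.array([-240.0, -120.0, -60.0, 0.0, 60.0, 120.0, 240.0])
p_sound_first = np.array([0.97, 0.88, 0.70, 0.52, 0.30, 0.12, 0.04])

def gauss_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Brute-force least-squares fit of a (decreasing) cumulative Gaussian.
best_err, pss, sigma = np.inf, 0.0, 1.0
for mu in np.linspace(-50, 50, 101):
    for sg in np.linspace(10, 300, 291):
        pred = np.array([1.0 - gauss_cdf(x, mu, sg) for x in soas])
        err = float(np.sum((pred - p_sound_first) ** 2))
        if err < best_err:
            best_err, pss, sigma = err, mu, sg

# The point of subjective simultaneity (PSS) is the fitted mean; the JND
# is the SOA separating the 50% and 75% points of the fitted curve.
jnd = 0.6745 * sigma
print(f"PSS = {pss:.0f} ms, JND = {jnd:.0f} ms")
```

A steeper psychometric function (smaller sigma, hence smaller JND) corresponds to the better temporal sensitivity reported for the voluntary condition.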

  10. Music training relates to the development of neural mechanisms of selective auditory attention.

    Science.gov (United States)

    Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina

    2015-04-01

Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. The human brain maintains contradictory and redundant auditory sensory predictions.

    Directory of Open Access Journals (Sweden)

    Marika Pieszek

Full Text Available Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). Particular error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

  12. Absence of direction-specific cross-modal visual-auditory adaptation in motion-onset event-related potentials.

    Science.gov (United States)

    Grzeschik, Ramona; Lewald, Jörg; Verhey, Jesko L; Hoffmann, Michael B; Getzmann, Stephan

    2016-01-01

    Adaptation to visual or auditory motion affects within-modality motion processing as reflected by visual or auditory free-field motion-onset evoked potentials (VEPs, AEPs). Here, a visual-auditory motion adaptation paradigm was used to investigate the effect of visual motion adaptation on VEPs and AEPs to leftward motion-onset test stimuli. Effects of visual adaptation to (i) scattered light flashes, and motion in the (ii) same or in the (iii) opposite direction of the test stimulus were compared. For the motion-onset VEPs, i.e. the intra-modal adaptation conditions, direction-specific adaptation was observed--the change-N2 (cN2) and change-P2 (cP2) amplitudes were significantly smaller after motion adaptation in the same than in the opposite direction. For the motion-onset AEPs, i.e. the cross-modal adaptation condition, there was an effect of motion history only in the change-P1 (cP1), and this effect was not direction-specific--cP1 was smaller after scatter than after motion adaptation to either direction. No effects were found for later components of motion-onset AEPs. While the VEP results provided clear evidence for the existence of a direction-specific effect of motion adaptation within the visual modality, the AEP findings suggested merely a motion-related, but not a direction-specific effect. In conclusion, the adaptation of veridical auditory motion detectors by visual motion is not reflected by the AEPs of the present study. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Individual Differences in Auditory Sentence Comprehension in Children: An Exploratory Event-Related Functional Magnetic Resonance Imaging Investigation

    Science.gov (United States)

    Yeatman, Jason D.; Ben-Shachar, Michal; Glover, Gary H.; Feldman, Heidi M.

    2010-01-01

    The purpose of this study was to explore changes in activation of the cortical network that serves auditory sentence comprehension in children in response to increasing demands of complex sentences. A further goal is to study how individual differences in children's receptive language abilities are associated with such changes in cortical…

  14. Spatial auditory attention is modulated by tactile priming.

    Science.gov (United States)

    Menning, Hans; Ackermann, Hermann; Hertrich, Ingo; Mathiak, Klaus

    2005-07-01

Previous studies have shown that cross-modal processing affects perception at a variety of neuronal levels. In this study, event-related brain responses were recorded via whole-head magnetoencephalography (MEG). Spatial auditory attention was directed via tactile pre-cues (primes) to one of four locations in the peripersonal space (left and right hand versus face). Auditory stimuli were white noise bursts, convolved with head-related transfer functions, which ensured spatial perception of the four locations. Tactile primes (200-300 ms prior to acoustic onset) were applied randomly to one of these locations. Attentional load was controlled by three different visual distraction tasks. The auditory P50m (about 50 ms after stimulus onset) showed a significant "proximity" effect (larger responses to face stimulation as well as a "contralaterality" effect between side of stimulation and hemisphere). The tactile primes essentially reduced both the P50m and N100m components. However, facial tactile pre-stimulation yielded an enhanced ipsilateral N100m. These results show that earlier responses are mainly governed by exogenous stimulus properties whereas cross-sensory interaction is spatially selective at a later (endogenous) processing stage.

  15. EEG Channel Selection Using Particle Swarm Optimization for the Classification of Auditory Event-Related Potentials

    Directory of Open Access Journals (Sweden)

    Alejandro Gonzalez

    2014-01-01

Full Text Available Brain-machine interfaces (BMI) rely on the accurate classification of event-related potentials (ERPs) and their performance greatly depends on the appropriate selection of classifier parameters and features from dense-array electroencephalography (EEG) signals. Moreover, in order to achieve a portable and more compact BMI for practical applications, it is also desirable to use a system capable of accurate classification using information from as few EEG channels as possible. In the present work, we propose a method for classifying P300 ERPs using a combination of Fisher Discriminant Analysis (FDA) and a multiobjective hybrid real-binary Particle Swarm Optimization (MHPSO) algorithm. Specifically, the algorithm searches for the set of EEG channels and classifier parameters that simultaneously maximize the classification accuracy and minimize the number of used channels. The performance of the method is assessed through offline analyses on datasets of auditory ERPs from sound discrimination experiments. The proposed method achieved a higher classification accuracy than that achieved by traditional methods while also using fewer channels. It was also found that the number of channels used for classification can be significantly reduced without greatly compromising the classification accuracy.
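A heavily simplified sketch of the channel-selection idea (not the authors' MHPSO: this collapses the two objectives into one penalty-weighted fitness, uses a generic stochastic binary PSO, a bare-bones Fisher discriminant, and synthetic data in place of real ERPs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "ERP features": 16 channels, only the first 4 carry class
# information (e.g., target vs. non-target trials).
n_ch, n_trials = 16, 400
X = rng.normal(size=(n_trials, n_ch))
y = rng.integers(0, 2, n_trials)
X[y == 1, :4] += 1.0   # informative channels

def fda_accuracy(mask):
    """Training accuracy of a Fisher discriminant on selected channels."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask]
    m0, m1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    Sw = np.atleast_2d(np.cov(Xs[y == 0].T) + np.cov(Xs[y == 1].T))
    w = np.linalg.solve(Sw, m1 - m0)           # discriminant direction
    proj = Xs @ w
    thr = (proj[y == 0].mean() + proj[y == 1].mean()) / 2
    return float(np.mean((proj > thr) == (y == 1)))

# Binary PSO: each particle holds per-channel selection probabilities;
# fitness trades accuracy against the number of channels used.
n_particles, n_iter, penalty = 20, 40, 0.01
pos = rng.random((n_particles, n_ch))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.full(n_particles, -np.inf)
gbest, gbest_fit = None, -np.inf

for _ in range(n_iter):
    for i in range(n_particles):
        mask = rng.random(n_ch) < pos[i]       # stochastic binarization
        fit = fda_accuracy(mask) - penalty * mask.sum()
        if fit > pbest_fit[i]:
            pbest_fit[i], pbest[i] = fit, pos[i].copy()
        if fit > gbest_fit:
            gbest_fit, gbest = fit, mask.copy()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) \
        + 1.5 * r2 * (gbest.astype(float) - pos)
    pos = np.clip(pos + vel, 0, 1)

print("selected channels:", np.flatnonzero(gbest), "fitness:", round(gbest_fit, 3))
```

With the per-channel penalty, the best solutions found tend to keep the informative channels and drop the noise-only ones, mirroring the paper's goal of fewer channels without a large accuracy loss.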

  16. Blind Separation of Event-Related Brain Responses into Independent Components

    National Research Council Canada - National Science Library

    Makeig, Scott

    1996-01-01

    .... We report here a method for the blind separation of event-related brain responses into spatially stationary and temporally independent subcomponents using an Independent Component Analysis algorithm...
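The separation idea can be sketched with a generic fixed-point ICA (a FastICA-style iteration; the original report's specific algorithm may differ, and the two toy sources below are invented): two statistically independent waveforms are linearly mixed into two "channels" and then unmixed without knowledge of the mixing matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent non-Gaussian sources, linearly mixed into two
# observed "channels" (a toy stand-in for multichannel EEG).
t = np.linspace(0, 1, 2000)
S = np.c_[np.sign(np.sin(2 * np.pi * 7 * t)),   # 7 Hz square wave
          np.sin(2 * np.pi * 3 * t) ** 3]       # distorted 3 Hz sine
A = np.array([[0.8, 0.4], [0.3, 0.9]])          # unknown mixing matrix
X = S @ A.T

# Whiten the observations (zero mean, identity covariance).
Xc = X - X.mean(0)
d, E = np.linalg.eigh(np.cov(Xc.T))
Z = Xc @ E @ np.diag(d ** -0.5) @ E.T

# Symmetric fixed-point iteration with tanh nonlinearity.
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(Z @ W.T)
    W1 = (G.T @ Z) / len(Z) - np.diag((1 - G ** 2).mean(0)) @ W
    d1, E1 = np.linalg.eigh(W1 @ W1.T)
    W = E1 @ np.diag(d1 ** -0.5) @ E1.T @ W1    # decorrelate rows of W

recovered = Z @ W.T   # columns match the sources up to sign and order
```

Applied to ERP data, the same logic yields spatially stationary, temporally independent subcomponents; the ordering and polarity of recovered components are inherently arbitrary, as here.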

  17. Only low frequency event-related EEG activity is compromised in multiple sclerosis: insights from an independent component clustering analysis.

    Directory of Open Access Journals (Sweden)

    Hanni Kiiski

Full Text Available Cognitive impairment (CI), often examined with neuropsychological tests such as the Paced Auditory Serial Addition Test (PASAT), affects approximately 65% of multiple sclerosis (MS) patients. The P3b event-related potential (ERP), evoked when an infrequent target stimulus is presented, indexes cognitive function and is typically compared across subjects' scalp electroencephalography (EEG) data. However, the clustering of independent components (ICs) is superior to scalp-based EEG methods because it can accommodate the spatiotemporal overlap inherent in scalp EEG data. Event-related spectral perturbations (ERSPs; event-related mean power spectral changes) and inter-trial coherence (ITCs; event-related consistency of spectral phase) reveal a more comprehensive overview of EEG activity. Ninety-five subjects (56 MS patients, 39 controls) completed visual and auditory two-stimulus P3b event-related potential tasks and the PASAT. MS patients were also divided into CI and non-CI groups (n = 18 in each) based on PASAT scores. Data were recorded from 128-scalp EEG channels and 4 IC clusters in the visual, and 5 IC clusters in the auditory, modality were identified. In general, MS patients had significantly reduced ERSP theta power versus controls, and a similar pattern was observed for CI vs. non-CI MS patients. The ITC measures were also significantly different in the theta band for some clusters. The finding that MS patients had reduced P3b task-related theta power in both modalities is a reflection of compromised connectivity, likely due to demyelination, that may have disrupted early processes essential to P3b generation, such as orientating and signal detection. However, for posterior sources, MS patients had a greater decrease in alpha power, normally associated with enhanced cognitive function, which may reflect a compensatory mechanism in response to the compromised early cognitive processing.
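The two spectral measures named above have simple definitions: ERSP is the trial-averaged power change relative to a pre-stimulus baseline, and ITC is the magnitude of the trial-averaged unit-length complex spectral estimate (1 = perfect phase locking across trials, 0 = random phase). A toy single-frequency sketch on synthetic trials (the sampling rate, frequency, and trial structure are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

fs, n_trials = 250, 60
t = np.arange(-0.2, 0.8, 1 / fs)   # trial time in seconds
f0 = 6.0                           # theta-band target frequency, Hz

# Synthetic trials: a phase-locked theta burst after "stimulus onset"
# (t = 0) embedded in background noise.
burst = np.where((t > 0.1) & (t < 0.4), np.sin(2 * np.pi * f0 * t), 0.0)
trials = burst + 0.8 * rng.normal(size=(n_trials, t.size))

# Complex Morlet wavelet at f0 (5 cycles).
cycles = 5
sd = cycles / (2 * np.pi * f0)
wt = np.arange(-3 * sd, 3 * sd, 1 / fs)
wavelet = np.exp(2j * np.pi * f0 * wt) * np.exp(-wt ** 2 / (2 * sd ** 2))

# Per-trial complex time course at f0.
tf = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])

power = np.abs(tf) ** 2
baseline = power[:, t < 0].mean()
ersp = 10 * np.log10(power.mean(0) / baseline)   # dB change vs. baseline
itc = np.abs((tf / np.abs(tf)).mean(0))          # inter-trial coherence

print("peak ERSP (dB):", round(float(ersp.max()), 1),
      "peak ITC:", round(float(itc.max()), 2))
```

Both measures rise sharply during the phase-locked burst; a group difference in theta ERSP or ITC, as reported for the MS patients, would appear as a reduction of exactly these post-stimulus peaks.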

  18. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Directory of Open Access Journals (Sweden)

    Meytal Wilf

Full Text Available Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  19. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Science.gov (United States)

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  20. Cognitive event-related potentials in comatose and post-comatose states.

    Science.gov (United States)

    Vanhaudenhuyse, Audrey; Laureys, Steven; Perrin, Fabien

    2008-01-01

We review the utility of cognitive event-related potentials (ERPs) in comatose, vegetative, or minimally conscious patients. Auditory cognitive ERPs are useful to investigate residual cognitive functions, such as echoic memory (MMN), acoustical and semantic discrimination (P300), and incongruent language detection (N400). While early ERPs (such as the absence of cortical responses on somatosensory-evoked potentials) predict poor outcome, cognitive ERPs (MMN and P300) are indicative of recovery of consciousness. In coma-survivors, cognitive potentials are more frequently obtained when using stimuli that are more ecologically valid or have an emotional content (such as the patient's own name) than when using classical sine tones.

  1. Neurofeedback-Based Enhancement of Single Trial Auditory Evoked Potentials: Feasibility in Healthy Subjects.

    Science.gov (United States)

    Rieger, Kathryn; Rarra, Marie-Helene; Moor, Nicolas; Diaz Hernandez, Laura; Baenninger, Anja; Razavi, Nadja; Dierks, Thomas; Hubl, Daniela; Koenig, Thomas

    2018-03-01

Previous studies showed a global reduction of the event-related potential component N100 in patients with schizophrenia, a phenomenon that is even more pronounced during auditory verbal hallucinations. This reduction assumingly results from dysfunctional activation of the primary auditory cortex by inner speech, which reduces its responsiveness to external stimuli. With this study, we tested the feasibility of enhancing the responsiveness of the primary auditory cortex to external stimuli with an upregulation of the event-related potential component N100 in healthy control subjects. A total of 15 healthy subjects performed 8 double-sessions of EEG-neurofeedback training over 2 weeks. The results of the used linear mixed effect model showed a significant active learning effect within sessions (t = 5.99, P < .001) against an unspecific habituation effect that lowered the N100 amplitude over time. Across sessions, a significant increase in the passive condition (t = 2.42, P = .03), termed the carry-over effect, was observed. Given that the carry-over effect is one of the ultimate aims of neurofeedback, it seems reasonable to apply this neurofeedback training protocol to influence the N100 amplitude in patients with schizophrenia. This intervention could provide an alternative treatment option for auditory verbal hallucinations in these patients.

  2. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  3. Emergence of auditory-visual relations from a visual-visual baseline with auditory-specific consequences in individuals with autism.

    Science.gov (United States)

    Varella, André A B; de Souza, Deisy G

    2014-07-01

    Empirical studies have demonstrated that class-specific contingencies may engender stimulus-reinforcer relations. In these studies, crossmodal relations emerged when crossmodal relations comprised the baseline, and intramodal relations emerged when intramodal relations were taught during baseline. This study investigated whether auditory-visual relations (crossmodal) would emerge after participants learned a visual-visual baseline (intramodal) with auditory stimuli presented as specific consequences. Four individuals with autism learned AB and CD relations with class-specific reinforcers. When A1 and C1 were presented as samples, the selections of B1 and D1, respectively, were followed by an edible (R1) and a sound (S1). Selections of B2 and D2 under the control of A2 and C2, respectively, were followed by R2 and S2. Probe trials tested for visual-visual AC, CA, AD, DA, BC, CB, BD, and DB emergent relations and auditory-visual SA, SB, SC, and SD emergent relations. All of the participants demonstrated the emergence of all auditory-visual relations, and three of four participants showed emergence of all visual-visual relations. Thus, the emergence of auditory-visual relations from specific auditory consequences suggests that these relations do not depend on crossmodal baseline training. The procedure has great potential for applied technology to generate auditory-visual discriminations and stimulus classes in the context of behavior-analytic interventions for autism. © Society for the Experimental Analysis of Behavior.

  4. Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.

    Science.gov (United States)

    Nees, Michael A; Helbein, Benji; Porter, Anna

    2016-05-01

    Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events (a component of Level 1 situation awareness) using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.

  5. Gender differences in binaural speech-evoked auditory brainstem response: are they clinically significant?

    Science.gov (United States)

    Jalaei, Bahram; Azmi, Mohd Hafiz Afifi Mohd; Zakaria, Mohd Normani

    2018-05-17

    Binaurally evoked auditory potentials have good diagnostic value when testing subjects with central auditory deficits. The literature on the speech-evoked auditory brainstem response evoked by binaural stimulation is in fact limited. Gender disparities in speech-evoked auditory brainstem response results have been consistently noted, but the magnitude of the gender difference has not been reported. The present study aimed to compare the magnitude of the gender difference in speech-evoked auditory brainstem response results between monaural and binaural stimulation. A total of 34 healthy Asian adults aged 19-30 years participated in this comparative study. Eighteen of them were females (mean age = 23.6±2.3 years) and the remaining sixteen were males (mean age = 22.0±2.3 years). For each subject, the speech-evoked auditory brainstem response was recorded with the synthesized syllable /da/ presented monaurally and binaurally. While latencies were not affected (p>0.05), the binaural stimulation produced statistically higher speech-evoked auditory brainstem response amplitudes than the monaural stimulation. With large effect sizes (greater than 0.80), substantive gender differences were noted in most of the speech-evoked auditory brainstem response peaks for both stimulation modes. The magnitude of the gender difference between the two stimulation modes revealed some distinct patterns. Based on these clinically significant results, gender-specific normative data are highly recommended when using the speech-evoked auditory brainstem response for clinical and future applications. The preliminary normative data provided in the present study can serve as the reference for future studies of this test among Asian adults. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  6. Discrimination of timbre in early auditory responses of the human brain.

    Directory of Open Access Journals (Sweden)

    Jaeho Seol

    Full Text Available BACKGROUND: The issue of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli differing in timbre. We investigated the auditory response and sensory gating as well, using magnetoencephalography (MEG). METHODOLOGY/PRINCIPAL FINDINGS: Thirty-five healthy subjects without hearing deficit participated in the experiments. Two tones, either the same or different in timbre, were presented as a pair with an interval of 500 ms in a conditioning (S1)-testing (S2) paradigm. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result supports the view that timbre, at least as manipulated by phasing and clipping, is discriminated in early auditory processing. The effect of S1 on the second response in a pair occurred in the M100 of the left hemisphere, whereas only in the right hemisphere did both the M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  7. A template-free approach for determining the latency of single events of auditory evoked M100

    Energy Technology Data Exchange (ETDEWEB)

    Burghoff, M [Physikalisch-Technische Bundesanstalt (PTB), Berlin (Germany); Link, A [Physikalisch-Technische Bundesanstalt (PTB), Berlin (Germany); Salajegheh, A [Cognitive Neuroscience of Language Laboratory, University of Maryland College Park, MD (United States); Elster, C [Physikalisch-Technische Bundesanstalt (PTB), Berlin (Germany); Poeppel, D [Cognitive Neuroscience of Language Laboratory, University of Maryland College Park, MD (United States); Trahms, L [Physikalisch-Technische Bundesanstalt (PTB), Berlin (Germany)

    2005-02-07

    The phase of the complex output of a narrow band Gaussian filter is taken to define the latency of the auditory evoked response M100 recorded by magnetoencephalography. It is demonstrated that this definition is consistent with the conventional peak latency. Moreover, it provides a tool for reducing the number of averages needed for a reliable estimation of the latency. Single-event latencies obtained by this procedure can be used to improve the signal quality of the conventional average by latency adjusted averaging. (note)
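    The latency definition above can be sketched in a few lines: band-pass the trial with a complex Gaussian kernel and read the latency off the instantaneous phase near the envelope peak. This is a minimal pure-Python illustration under assumed parameters (1 kHz sampling, a 10 Hz centre frequency, and a synthetic Gaussian burst standing in for a real M100 epoch), not the authors' implementation:

```python
import cmath
import math

FS = 1000.0   # sampling rate in Hz (assumed)
F0 = 10.0     # filter centre frequency in Hz (illustrative)

def gaussian_kernel(f0, sigma_k, fs):
    """Complex kernel of a narrow-band Gaussian filter centred on f0."""
    n = int(3 * sigma_k * fs)
    return [math.exp(-((m / fs) ** 2) / (2 * sigma_k ** 2)) *
            cmath.exp(-1j * 2 * math.pi * f0 * m / fs)
            for m in range(-n, n + 1)]

def narrowband_filter(signal, kernel):
    """Correlate the signal with the complex kernel (zero-padded edges)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0j
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += signal[j] * w
        out.append(acc)
    return out

def phase_latency(signal, fs=FS, f0=F0):
    """Latency = the upward zero crossing of the filter-output phase
    nearest the envelope maximum (consistent with peak latency)."""
    y = narrowband_filter(signal, gaussian_kernel(f0, 0.05, fs))
    i = max(range(len(y)), key=lambda k: abs(y[k]))
    if cmath.phase(y[i]) > 0:
        while i > 0 and cmath.phase(y[i]) > 0:
            i -= 1
    else:
        while i + 1 < len(y) and cmath.phase(y[i + 1]) <= 0:
            i += 1
    p0, p1 = cmath.phase(y[i]), cmath.phase(y[i + 1])
    frac = -p0 / (p1 - p0) if p1 != p0 else 0.0
    return (i + frac) / fs

# synthetic "single event": a Gaussian-windowed 10 Hz burst centred at 100 ms
trial = [math.exp(-((i / FS - 0.1) ** 2) / (2 * 0.03 ** 2)) *
         math.cos(2 * math.pi * F0 * (i / FS - 0.1)) for i in range(300)]
latency = phase_latency(trial)   # close to 0.1 s
```

    On the synthetic burst, the phase-derived latency agrees with the peak latency of the envelope, which is the consistency property the note reports; latency-adjusted averaging would then realign single trials by these estimates before averaging.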

  8. Attention-dependent allocation of auditory processing resources as measured by mismatch negativity.

    Science.gov (United States)

    Dittmann-Balcar, A; Thienel, R; Schall, U

    1999-12-16

    Mismatch negativity (MMN) is a pre-attentive event-related potential measure of echoic memory. However, recent studies suggest attention-related modulation of MMN. This study investigated duration-elicited MMN in healthy subjects (n = 12) who performed a visual discrimination task and, subsequently, an auditory discrimination task in a series of conditions of increasing difficulty. MMN amplitude was found to be maximal at centro-frontal electrode sites, without hemispheric differences. Comparison of the two attend conditions (visual vs. auditory) revealed larger MMN amplitudes at Fz in the visual task, without differences across task difficulty. However, significantly smaller MMN in the most demanding auditory condition supports the notion of a limited processing capacity whose resources are modulated by attention in response to task requirements.

  9. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions, and the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  10. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  11. Comparison of auditory and visual oddball fMRI in schizophrenia.

    Science.gov (United States)

    Collier, Azurii K; Wolf, Daniel H; Valdez, Jeffrey N; Turetsky, Bruce I; Elliott, Mark A; Gur, Raquel E; Gur, Ruben C

    2014-09-01

    Individuals with schizophrenia often suffer from attentional deficits, both in focusing on task-relevant targets and in inhibiting responses to distractors. Schizophrenia also has a differential impact on attention depending on modality: auditory or visual. However, it remains unclear how abnormal activation of attentional circuitry differs between auditory and visual modalities, as these two modalities have not been directly compared in the same individuals with schizophrenia. We utilized event-related functional magnetic resonance imaging (fMRI) to compare patterns of brain activation during an auditory and visual oddball task in order to identify modality-specific attentional impairment. Healthy controls (n=22) and patients with schizophrenia (n=20) completed auditory and visual oddball tasks in separate sessions. For responses to targets, the auditory modality yielded greater activation than the visual modality (A-V) in auditory cortex, insula, and parietal operculum, but visual activation was greater than auditory (V-A) in visual cortex. For responses to novels, A-V differences were found in auditory cortex, insula, and supramarginal gyrus; and V-A differences in the visual cortex, inferior temporal gyrus, and superior parietal lobule. Group differences in modality-specific activation were found only for novel stimuli; controls showed larger A-V differences than patients in prefrontal cortex and the putamen. Furthermore, for patients, greater severity of negative symptoms was associated with greater divergence of A-V novel activation in the visual cortex. Our results demonstrate that patients have more pronounced activation abnormalities in auditory compared to visual attention, and link modality specific abnormalities to negative symptom severity. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Large-scale synchronized activity during vocal deviance detection in the zebra finch auditory forebrain.

    Science.gov (United States)

    Beckers, Gabriël J L; Gahr, Manfred

    2012-08-01

    Auditory systems bias responses to sounds that are unexpected on the basis of recent stimulus history, a phenomenon that has been widely studied using sequences of unmodulated tones (mismatch negativity; stimulus-specific adaptation). Such a paradigm, however, does not directly reflect problems that neural systems normally solve for adaptive behavior. We recorded multiunit responses in the caudomedial auditory forebrain of anesthetized zebra finches (Taeniopygia guttata) at 32 sites simultaneously, to contact calls that recur probabilistically at a rate that is used in communication. Neurons in secondary, but not primary, auditory areas respond preferentially to calls when they are unexpected (deviant) compared with the same calls when they are expected (standard). This response bias is predominantly due to sites more often not responding to standard events than to deviant events. When two call stimuli alternate between standard and deviant roles, most sites exhibit a response bias to deviant events of both stimuli. This suggests that biases are not based on a use-dependent decrease in response strength but involve a more complex mechanism that is sensitive to auditory deviance per se. Furthermore, between many secondary sites, responses are tightly synchronized, a phenomenon that is driven by internal neuronal interactions rather than by the timing of stimulus acoustic features. We hypothesize that this deviance-sensitive, internally synchronized network of neurons is involved in the involuntary capturing of attention by unexpected and behaviorally potentially relevant events in natural auditory scenes.

  13. Modeling of Auditory Neuron Response Thresholds with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Frederic Venail

    2015-01-01

    Full Text Available The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position in the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (Nucleus device). The electrical response, measured using the auto-NRT (neural response telemetry) algorithm, was analyzed using multivariate regression with cubic splines in order to take into account the variation of electrode insertion depth amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = −0.11 ± 0.02, P<0.01), the scalar placement of the electrodes (β = −8.50 ± 1.97, P<0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). The distribution of NRT residues according to CNF could provide a proxy of auditory neuron functioning in implanted cochleas.
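    To illustrate the shape of such a model, the sketch below fits thresholds by ordinary least squares on a design with squared impedance, scalar placement, and a one-knot cubic truncated-power spline in normalised insertion depth. All data here are synthetic, and the knot, predictor ranges, and most coefficients are illustrative stand-ins (only the two β values echo those reported above); this is not the study's fitting code:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def basis(z, scalar, d, knot=0.5):
    """Design row: intercept, squared impedance, scalar placement, and a
    cubic truncated-power spline in normalised insertion depth d."""
    return [1.0, z * z, scalar, d, d ** 2, d ** 3, max(0.0, d - knot) ** 3]

TRUE_BETA = [100.0, -0.11, -8.5, 5.0, -3.0, 2.0, 4.0]  # illustrative values

rows, y = [], []
for i in range(200):
    z = 1.0 + 9.0 * (i % 50) / 49.0   # electrode impedance (synthetic scale)
    scalar = float(i % 2)             # scala placement indicator (synthetic)
    d = (i % 22 + 1) / 22.0           # normalised insertion depth (synthetic)
    x = basis(z, scalar, d)
    rows.append(x)
    y.append(sum(b * v for b, v in zip(TRUE_BETA, x)))  # noiseless sketch

# ordinary least squares via the normal equations X'X beta = X'y
p = len(rows[0])
XtX = [[sum(r[a] * r[b] for r in rows) for b in range(p)] for a in range(p)]
Xty = [sum(r[a] * yi for r, yi in zip(rows, y)) for a in range(p)]
beta = solve(XtX, Xty)
```

    With noiseless synthetic data the fit recovers the generating coefficients, which is only meant to show how spline terms for insertion depth sit alongside the impedance and placement predictors in one design matrix.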

  14. Auditory perception and attention as reflected by the brain event-related potentials in children with Asperger syndrome.

    Science.gov (United States)

    Lepistö, T; Silokallio, S; Nieminen-von Wendt, T; Alku, P; Näätänen, R; Kujala, T

    2006-10-01

    Language development is delayed and deviant in individuals with autism, but proceeds quite normally in those with Asperger syndrome (AS). We investigated auditory-discrimination and orienting in children with AS using an event-related potential (ERP) paradigm that was previously applied to children with autism. ERPs were measured to pitch, duration, and phonetic changes in vowels and to corresponding changes in non-speech sounds. Active sound discrimination was evaluated with a sound-identification task. The mismatch negativity (MMN), indexing sound-discrimination accuracy, showed right-hemisphere dominance in the AS group, but not in the controls. Furthermore, the children with AS had diminished MMN-amplitudes and decreased hit rates for duration changes. In contrast, their MMN to speech pitch changes was parietally enhanced. The P3a, reflecting involuntary orienting to changes, was diminished in the children with AS for speech pitch and phoneme changes, but not for the corresponding non-speech changes. The children with AS differ from controls with respect to their sound-discrimination and orienting abilities. The results of the children with AS are relatively similar to those earlier obtained from children with autism using the same paradigm, although these clinical groups differ markedly in their language development.

  15. Dyslexia risk gene relates to representation of sound in the auditory brainstem.

    Science.gov (United States)

    Neef, Nicole E; Müller, Bent; Liebig, Johanna; Schaadt, Gesa; Grigutsch, Maren; Gunter, Thomas C; Wilcke, Arndt; Kirsten, Holger; Skeide, Michael A; Kraft, Indra; Kraus, Nina; Emmrich, Frank; Brauer, Jens; Boltze, Johannes; Friederici, Angela D

    2017-04-01

    Dyslexia is a reading disorder with strong associations with KIAA0319 and DCDC2. Both genes play a functional role in the spike-time precision of neurons. Strikingly, poor readers show imprecise encoding of the fast transients of speech in the auditory brainstem. Whether dyslexia risk genes are related to the quality of sound encoding in the auditory brainstem remains to be investigated. Here, we quantified the response consistency of speech-evoked brainstem responses to the acoustically presented syllable [da] in 159 genotyped, literate and preliterate children. When controlling for age, sex, familial risk and intelligence, partial correlation analyses associated a higher dyslexia risk loading with KIAA0319 with noisier responses. In contrast, a higher risk loading with DCDC2 was associated with a trend towards more stable responses. These results suggest that unstable representation of sound, and thus reduced neural discrimination ability of stop consonants, occurred in genotypes carrying a higher number of KIAA0319 risk alleles. The current data provide the first evidence that the dyslexia-associated gene KIAA0319 can alter brainstem responses and impair phoneme processing in the auditory brainstem. This brain-gene relationship provides insight into the complex relationships between phenotype and genotype, thereby improving the understanding of dyslexia as a complex multifactorial condition. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Effects of inter- and intramodal selective attention to non-spatial visual stimuli: An event-related potential analysis.

    NARCIS (Netherlands)

    de Ruiter, M.B.; Kok, A.; van der Schoot, M.

    1998-01-01

    Event-related potentials (ERPs) were recorded to trains of rapidly presented auditory and visual stimuli. ERPs in conditions in which Ss attended to different features of visual stimuli were compared with ERPs to the same type of stimuli when Ss attended to different features of auditory stimuli,

  17. Prepulse Inhibition of Auditory Cortical Responses in the Caudolateral Superior Temporal Gyrus in Macaca mulatta.

    Science.gov (United States)

    Chen, Zuyue; Parkkonen, Lauri; Wei, Jingkuan; Dong, Jin-Run; Ma, Yuanye; Carlson, Synnöve

    2018-04-01

    Prepulse inhibition (PPI) refers to a decreased response to a startling stimulus when another, weaker stimulus precedes it. Most PPI studies have focused on the physiological startle reflex and fewer have reported the PPI of cortical responses. We recorded local field potentials (LFPs) in four monkeys and investigated whether the PPI of auditory cortical responses (alpha, beta, and gamma oscillations and evoked potentials) can be demonstrated in the caudolateral belt of the superior temporal gyrus (STGcb). We also investigated whether the presence of a conspecific, which draws attention away from the auditory stimuli, affects the PPI of auditory cortical responses. The PPI paradigm consisted of Pulse-only and Prepulse + Pulse trials that were presented randomly while the monkey was alone (ALONE) and while another monkey was present in the same room (ACCOMP). The LFPs to the Pulse were significantly suppressed by the Prepulse, thus demonstrating PPI of cortical responses in the STGcb. The PPI-related inhibition of the N1 amplitude of the evoked responses and of the cortical oscillations to the Pulse was not affected by the presence of a conspecific. In contrast, gamma oscillations and the amplitude of the N1 response to Pulse-only were suppressed in the ACCOMP condition compared to the ALONE condition. These findings demonstrate PPI in the monkey STGcb and suggest that the PPI of auditory cortical responses in the monkey STGcb is a pre-attentive inhibitory process that is independent of attentional modulation.

  18. Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults.

    Directory of Open Access Journals (Sweden)

    Erich S Tusch

    Full Text Available The inhibitory deficit hypothesis of cognitive aging posits that older adults' inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be (1) observed under an auditory-ignore, but not an auditory-attend, condition; (2) attenuated in individuals with high executive capacity (EC); and (3) augmented by increasing the cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not the auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study's findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts.

  19. Neural correlates of auditory recognition memory in primate lateral prefrontal cortex.

    Science.gov (United States)

    Plakke, B; Ng, C-W; Poremba, A

    2013-08-06

    The neural underpinnings of working and recognition memory have traditionally been studied in the visual domain and these studies pinpoint the lateral prefrontal cortex (lPFC) as a primary region for visual memory processing (Miller et al., 1996; Ranganath et al., 2004; Kennerley and Wallis, 2009). Herein, we utilize single-unit recordings for the same region in monkeys (Macaca mulatta) but investigate a second modality examining auditory working and recognition memory during delayed matching-to-sample (DMS) performance. A large portion of neurons in the dorsal and ventral banks of the principal sulcus (area 46, 46/9) show DMS event-related activity to one or more of the following task events: auditory cues, memory delay, decision wait time, response, and/or reward portions. Approximately 50% of the neurons show evidence of auditory-evoked activity during the task and population activity demonstrated encoding of recognition memory in the form of match enhancement. However, neither robust nor sustained delay activity was observed. The neuronal responses during the auditory DMS task are similar in many respects to those found within the visual working memory domain, which supports the hypothesis that the lPFC, particularly area 46, functionally represents key pieces of information for recognition memory inclusive of decision-making, but regardless of modality. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. Intracranial auditory detection and discrimination potentials as substrates of echoic memory in children.

    Science.gov (United States)

    Liasis, A; Towell, A; Boyd, S

    1999-03-01

    In children, intracranial responses to auditory detection and discrimination processes have not been reported. We, therefore, recorded intracranial event-related potentials (ERPs) to both standard and deviant tones and/or syllables in 4 children undergoing pre-surgical evaluation for epilepsy. ERPs to detection (mean latency = 63 ms) and discrimination (mean latency = 334 ms) were highly localized to areas surrounding the Sylvian fissure (SF). These potentials reflect activation of different neuronal populations and are suggested to contribute to the scalp recorded auditory N1 and mismatch negativity (MMN).

  1. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  2. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements...

  3. Newborn hearing screening with transient evoked otoacoustic emissions and automatic auditory brainstem response

    Directory of Open Access Journals (Sweden)

    Renata Mota Mamede de Carvallo

    2008-09-01

    Full Text Available Objective: The aim of the present investigation was to assess Transient Evoked Otoacoustic Emissions and Automatic Auditory Brainstem Response tests applied together in regular nurseries and Newborn Intensive Care Units (NICU), as well as to describe and compare the results obtained in both groups. Methods: We tested 150 newborns from regular nurseries and 70 from the NICU. Results: The newborn hearing screening using Transient Evoked Otoacoustic Emissions and Automatic Auditory Brainstem Response tests could be applied to all babies. The “pass” result for the group of babies from the nursery was 94.7% using Transient Evoked Otoacoustic Emissions and 96% using Automatic Auditory Brainstem Response. The newborn intensive care unit group obtained 87.1% on Transient Evoked Otoacoustic Emissions and 80% on the Automatic Auditory Brainstem Response, and there was no statistical difference between the procedures when the groups were evaluated individually. However, comparing the groups, Transient Evoked Otoacoustic Emissions were present in 94.7% of the nursery babies and in 87.1% of the group from the newborn intensive care unit; for the Automatic Auditory Brainstem Response, we found 96% and 80%, respectively. Conclusions: Transient Evoked Otoacoustic Emissions and Automatic Auditory Brainstem Response had similar “pass” and “fail” results when the procedures were applied to neonates from the regular nursery, and the combined tests were more precise in detecting hearing impairment in the newborn intensive care unit babies.

  4. Response properties of the refractory auditory nerve fiber.

    Science.gov (United States)

    Miller, C A; Abbas, P J; Robinson, B K

    2001-09-01

    The refractory characteristics of auditory nerve fibers limit their ability to accurately encode temporal information. Therefore, they are relevant to the design of cochlear prostheses. It is also possible that the refractory property could be exploited by prosthetic devices to improve information transfer, as refractoriness may enhance the nerve's stochastic properties. Furthermore, refractory data are needed for the development of accurate computational models of auditory nerve fibers. We applied a two-pulse forward-masking paradigm to a feline model of the human auditory nerve to assess refractory properties of single fibers. Each fiber was driven to refractoriness by a single (masker) current pulse delivered intracochlearly. Properties of firing efficiency, latency, jitter, spike amplitude, and relative spread (a measure of dynamic range and stochasticity) were examined by exciting fibers with a second (probe) pulse and systematically varying the masker-probe interval (MPI). Responses to monophasic cathodic current pulses were analyzed. We estimated the mean absolute refractory period to be about 330 μs and the mean recovery time constant to be about 410 μs. A significant proportion of fibers (13 of 34) responded to the probe pulse with MPIs as short as 500 μs. Spike amplitude decreased with decreasing MPI, a finding relevant to the development of computational nerve-fiber models, interpretation of gross evoked potentials, and models of more central neural processing. A small mean decrement in spike jitter was noted at small MPI values. Some trends (such as spike latency vs. MPI) varied across fibers, suggesting that sites of excitation varied across fibers. Relative spread was found to increase with decreasing MPI values, providing direct evidence that stochastic properties of fibers are altered under conditions of refractoriness.
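    The two reported constants drop naturally into the standard single-exponential model of post-spike threshold recovery. The sketch below is an illustration of that textbook model with the study's mean values plugged in; it is not the authors' estimation procedure:

```python
import math

T_ABS = 330e-6  # mean absolute refractory period (s), as reported above
TAU = 410e-6    # mean recovery time constant (s), as reported above

def threshold_ratio(mpi):
    """Probe threshold relative to the fully recovered threshold at a
    masker-probe interval `mpi` (s): infinite during the absolute
    refractory period, then decaying exponentially back toward 1."""
    if mpi <= T_ABS:
        return float("inf")
    return 1.0 / (1.0 - math.exp(-(mpi - T_ABS) / TAU))

# At the 500-microsecond interval where some fibers still responded,
# this model predicts roughly a threefold threshold elevation.
elevation_500us = threshold_ratio(500e-6)
```

    Under this model, a fiber can follow a probe at 500 μs only if the probe current clears a threshold a few times the resting value, which is consistent with only a subset of fibers (13 of 34) responding at that interval.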

  5. Maturation of the auditory t-complex brain response across adolescence.

    Science.gov (United States)

    Mahajan, Yatin; McArthur, Genevieve

    2013-02-01

    Adolescence is a time of great change in the brain in terms of structure and function. It is possible to track the development of neural function across adolescence using auditory event-related potentials (ERPs). This study tested if the brain's functional processing of sound changed across adolescence. We measured passive auditory t-complex peaks to pure tones and consonant-vowel (CV) syllables in 90 children and adolescents aged 10-18 years, as well as 10 adults. Across adolescence, Na amplitude increased to tones and speech at the right, but not left, temporal site. Ta amplitude decreased at the right temporal site for tones, and at both sites for speech. The Tb remained constant at both sites. The Na and Ta appeared to mature later in the right than left hemisphere. The t-complex peaks Na and Tb exhibited left lateralization and Ta showed right lateralization. Thus, the functional processing of sound continued to develop across adolescence and into adulthood. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  6. Quantifying stimulus-response rehabilitation protocols by auditory feedback in Parkinson's disease gait pattern

    Science.gov (United States)

    Pineda, Gustavo; Atehortúa, Angélica; Iregui, Marcela; García-Arteaga, Juan D.; Romero, Eduardo

    2017-11-01

    External auditory cues stimulate motor-related areas of the brain, activating motor pathways parallel to the basal ganglia circuits and providing a temporal pattern for gait. In effect, patients may re-learn motor skills via compensatory neuroplasticity mechanisms. However, long-term functional gains depend on the nature of the pathology, follow-up is usually limited, and reinforcement by healthcare professionals is crucial. To cope with these challenges, several research efforts and device implementations provide auditory or visual stimulation to improve the Parkinsonian gait pattern, inside and outside clinical scenarios. The current work presents a semi-automated strategy for spatio-temporal feature extraction to study the relation between auditory temporal stimulation and the spatio-temporal gait response. A protocol for auditory stimulation was built to evaluate how well the strategy integrates into clinical practice. The method was evaluated in a cross-sectional measurement with an exploratory group of people with Parkinson's disease (n = 12 in stages 1, 2, and 3) and control subjects (n = 6). The results showed a strong linear relation between auditory stimulation and cadence response in control subjects (R = 0.98 ± 0.008) and in PD subjects in stage 2 (R = 0.95 ± 0.03) and stage 3 (R = 0.89 ± 0.05). Normalized step length showed a variable response between low and high gait velocities (R = 0.2-0.97). The correlation between normalized mean velocity and the stimulus was strong for PD stage 2 (R > 0.96), PD stage 3 (R > 0.84), and controls (R > 0.91) under all experimental conditions. Among participants, the largest variation from baseline was found in PD subjects in stage 3 (53.61 ± 39.2 steps/min, 0.12 ± 0.06 in step length, and 0.33 ± 0.16 in mean velocity); in this group these values were higher than their own baseline. These variations reflect the direct effect of metronome frequency on cadence and velocity. The variation of step length involves different regulation strategies.
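    The R values reported are Pearson correlations between the stimulation rate and the gait response. A minimal sketch of that computation; the `metronome` and `cadence` arrays are hypothetical illustrative data, not measurements from the study:

```python
import numpy as np

# Hypothetical data for one participant: metronome rates (beats/min)
# and the cadence measured at each rate (steps/min).
metronome = np.array([80, 90, 100, 110, 120], dtype=float)
cadence = np.array([82, 91, 99, 112, 119], dtype=float)

# Pearson correlation between stimulus rate and gait response.
r = np.corrcoef(metronome, cadence)[0, 1]
print(f"R = {r:.3f}")  # close to 1 for a strong linear stimulus-response relation
```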

  7. Auditory white noise reduces age-related fluctuations in balance.

    Science.gov (United States)

    Ross, J M; Will, O J; McGann, Z; Balasubramaniam, R

    2016-09-06

    Fall prevention technologies have the potential to improve the lives of older adults. Because of the multisensory nature of human balance control, sensory therapies, including some involving tactile and auditory noise, are being explored as ways to reduce the increased balance variability that accompanies typical age-related sensory decline. Auditory white noise has previously been shown to reduce postural sway variability in healthy young adults. In the present experiment, we examined this treatment in young adults and typically aging older adults. We measured postural sway of healthy young adults and adults over the age of 65 years during silence and auditory white noise, with and without vision. Our results show reduced postural sway variability in young and older adults with auditory noise, even in the absence of vision. We show that vision and noise can reduce sway variability for both feedback-based and exploratory balance processes. In addition, we show that auditory noise shifts nonlinear patterns of sway in older adults toward those more typical of young adults, without interfering with the typical random-walk behavior of sway. Our results suggest that auditory noise might be valuable for therapeutic and rehabilitative purposes in older adults with typical age-related balance variability. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Event-Related Potential Patterns Associated with Hyperarousal in Gulf War Illness Syndrome Groups

    Science.gov (United States)

    Tillman, Gail D.; Calley, Clifford S.; Green, Timothy A.; Buhl, Virginia I.; Biggs, Melanie M.; Spence, Jeffrey S.; Briggs, Richard W.; Haley, Robert W.; Hart, John; Kraut, Michael A.

    2012-01-01

    An exaggerated response to emotional stimuli is one of several symptoms widely reported by veterans of the 1991 Persian Gulf War. Many have attributed these symptoms to post-war stress; others have attributed them to deployment-related exposures and associated damage to cholinergic, dopaminergic, and white matter systems. We collected event-related potential (ERP) data from 20 veterans meeting Haley criteria for Gulf War Syndromes 1–3 and from 8 matched Gulf War veteran controls, who were deployed but not symptomatic, while they performed an auditory three-condition oddball task with gunshot and lion-roar sounds as the distractor stimuli. Reports of hyperarousal from the ill veterans were significantly greater than those from the control veterans, and different ERP profiles emerged to account for their hyperarousability. Veterans with Syndromes 2 and 3, who have previously shown brainstem abnormalities, showed significantly stronger auditory P1 amplitudes, purported to indicate compromised cholinergic inhibitory gating in the reticular activating system. Veterans with Syndromes 1 and 2, who have previously shown basal ganglia dysfunction, showed a significantly weaker P3a response to distractor stimuli, purported to indicate dysfunction of the dopaminergic contribution to their ability to inhibit distraction by irrelevant stimuli. All three syndrome groups showed an attenuated P3b to target stimuli, which could be secondary to both cholinergic and dopaminergic contributions or to disruption of white matter integrity. PMID:22691951

  9. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.

  10. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    Science.gov (United States)

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITD) and inter-aural intensity differences (IID) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization scores in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significant negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Amelioration of Auditory Response by DA9801 in Diabetic Mouse

    Directory of Open Access Journals (Sweden)

    Yeong Ro Lee

    2015-01-01

    Full Text Available Diabetes mellitus (DM) is a metabolic disease that involves disorders such as diabetic retinopathy, diabetic neuropathy, and diabetic hearing loss. Recently, neurotrophin has become a treatment target and an attractive alternative for recovering auditory function altered by DM. The aim of this study was to evaluate the effect of DA9801, a mixture of Dioscorea nipponica and Dioscorea japonica extracts, on the auditory damage produced in an STZ-induced diabetic model, and to provide evidence of the mechanisms involved in these protective effects. We found a potential application of DA9801 against hearing impairment in the STZ-induced diabetic model, demonstrated by a reduction of the DM-induced deterioration of ABR thresholds in response to clicks and normalization of wave I–IV latencies and Pa latencies in the AMLR. We also show evidence that these effects might be elicited by NGF induction through Nr3c1 and Akt. These results therefore suggest that the neuroprotective effects of DA9801 on the auditory damage produced by DM may depend on an NGF increase resulting from Nr3c1 acting via Akt.

  12. Present and past: Can writing abilities in school children be associated with their auditory discrimination capacities in infancy?

    Science.gov (United States)

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Friederici, Angela D

    2015-12-01

    Literacy acquisition is highly associated with auditory processing abilities, such as auditory discrimination. The event-related potential mismatch response (MMR) is an indicator of cortical auditory discrimination abilities and has been found to be reduced in individuals with reading and writing impairments, as well as in infants at risk for these impairments. The goal of the present study was to analyze the relationship between auditory speech discrimination in infancy and writing abilities at school age within subjects, and to determine when auditory speech discrimination differences relevant for later writing abilities start to develop. We analyzed the MMR registered in response to natural syllables in German children with and without writing problems at two points during development: at school age, and in infancy at the ages of 1 month and 5 months. We observed MMR-related auditory discrimination differences between infants with and without later writing problems, starting to develop at age 5 months, an age when infants begin to establish language-specific phoneme representations. At school age, these children with and without writing problems also showed auditory discrimination differences, reflected in the MMR, confirming a relationship between writing and auditory speech processing skills. Thus, writing problems at school age are at least partly grounded in auditory discrimination problems that develop during the first months of life. Copyright © 2015 Elsevier Ltd. All rights reserved.
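    The MMR discussed above is conventionally derived as a deviant-minus-standard difference wave computed from trial-averaged epochs. A sketch on synthetic data; epoch counts, the effect window, and the effect size are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 300            # epochs of 300 samples each

# Synthetic single-trial epochs: deviant trials carry an extra deflection
# around samples 120-180 (a stand-in for the discrimination response).
standard = rng.normal(0, 1.0, (n_trials, n_samples))
deviant = rng.normal(0, 1.0, (n_trials, n_samples))
deviant[:, 120:180] += 1.5

# Average across trials, then subtract: MMR = deviant ERP - standard ERP.
mmr = deviant.mean(axis=0) - standard.mean(axis=0)
peak = mmr[120:180].mean()
print(f"mean MMR amplitude in the effect window: {peak:.2f}")
```

    A reduced MMR, as reported for at-risk infants, would correspond to a smaller difference-wave amplitude in the effect window.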

  13. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising.

    Science.gov (United States)

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications.
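    The two spectral components described (a low-frequency component at the modulation frequency and a pulsatile component at the stimulation rate) can be reproduced in a toy simulation of the artifact-response mixture. The frequencies, amplitudes, and noise level below are illustrative assumptions, not values from the paper:

```python
import numpy as np

fs = 10000                       # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
f_mod, f_stim = 40, 900          # modulation / stimulation rates (illustrative)

assr = 1.0 * np.sin(2 * np.pi * f_mod * t)               # phase-locked ASSR
artifact = 3.0 * (np.sin(2 * np.pi * f_stim * t) > 0.5)  # crude pulsatile artifact
rng = np.random.default_rng(1)
mixture = assr + artifact + 0.1 * rng.normal(size=t.size)

# The spectrum of the mixture is dominated by two peaks: one at the
# modulation frequency (ASSR) and one at the stimulation rate (artifact).
spectrum = np.abs(np.fft.rfft(mixture - mixture.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
top_two = sorted(freqs[np.argsort(spectrum)[-2:]])
print(top_two)
```

    Separating these two components from real multichannel recordings, rather than merely detecting them, is what the ICA comparison in the paper addresses.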

  14. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising.

    Directory of Open Access Journals (Sweden)

    Faten Mina

    Full Text Available Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications.

  15. Hearing Shapes: Event-related Potentials Reveal the Time Course of Auditory-Visual Sensory Substitution.

    Science.gov (United States)

    Graulty, Christian; Papaioannou, Orestis; Bauer, Phoebe; Pitts, Michael A; Canseco-Gonzalez, Enriqueta

    2018-04-01

    In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150-210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420-480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
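    A simplified, hypothetical version of an image-to-sound mapping in the spirit of the Meijer algorithm (columns scanned over time, row height mapped to pitch, brightness to loudness) illustrates why such conversions are intuitive. This is not the published algorithm's exact parameterization; the function name and all parameters are made up for illustration:

```python
import numpy as np

def image_to_soundscape(image, fs=8000, col_dur=0.05,
                        f_low=500.0, f_high=3000.0):
    """Render a 2-D brightness array (values 0..1) as audio.

    Columns are scanned left to right over time; each row drives one
    sine oscillator whose frequency rises with row height and whose
    amplitude is the pixel brightness.
    """
    n_rows, n_cols = image.shape
    freqs = np.linspace(f_high, f_low, n_rows)   # top row = highest pitch
    t = np.arange(int(fs * col_dur)) / fs
    cols = []
    for c in range(n_cols):
        tones = image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        cols.append(tones.sum(axis=0))
    return np.concatenate(cols)

# A rising diagonal "shape": the soundscape sweeps from low to high pitch.
img = np.eye(8)[::-1]
audio = image_to_soundscape(img)
print(audio.shape)   # 8 columns x 400 samples each
```

    After brief training, listeners can learn to hear such a sweep as a rising line, which is the kind of cross-sensory mapping the ERP effects above track over time.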

  16. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  17. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations.

  18. Development of a Method to Compensate for Signal Quality Variations in Repeated Auditory Event-Related Potential Recordings

    Science.gov (United States)

    Paukkunen, Antti K. O.; Leminen, Miika M.; Sepponen, Raimo

    2010-01-01

    Reliable measurements are mandatory in clinically relevant auditory event-related potential (AERP)-based tools and applications. The comparability of results deteriorates when the residual measurement error varies between recordings. We study a method that adapts the length of the recording session to the concurrent quality of the recorded data. In this way, the sufficiency of the trials can be better guaranteed, which enables control of the remaining measurement error. The suggested method is based on monitoring the signal-to-noise ratio (SNR) and remaining measurement error, which are compared to predefined threshold values. The SNR test is well defined, but the criterion for the measurement-error test still requires further empirical testing in practice. According to the results, the reproducibility of average AERPs in repeated experiments is improved in comparison to a case where the number of recorded trials is constant. The test-retest reliability is not significantly changed on average, but the between-subject variation in its value is reduced by 33–35%. The optimization of the number of trials also prevents excessive recordings, which might be of practical interest especially in the clinical context. The efficiency of the method may be further increased by implementing online tools that improve data consistency. PMID:20407635
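    The stopping rule described (record until the concurrent data quality passes a threshold) can be sketched with the common plus-minus-average noise estimate. The SNR definition, warm-up size, and thresholds below are illustrative assumptions, not the authors' exact criteria:

```python
import numpy as np

def erp_snr(trials):
    """SNR estimate for the running ERP average.

    Signal power: variance over time of the ordinary average.
    Noise power: variance of the plus-minus average (odd minus even
    trials), in which the time-locked response cancels out.
    """
    avg = trials.mean(axis=0)
    n = len(trials) - len(trials) % 2            # use an even trial count
    pm = (trials[0:n:2] - trials[1:n:2]).mean(axis=0) / 2.0
    return avg.var() / pm.var()

def record_until(target_snr, max_trials, draw_trial):
    """Add trials until the SNR estimate crosses target_snr (or max_trials)."""
    trials = [draw_trial() for _ in range(8)]    # small warm-up block
    while len(trials) < max_trials:
        if erp_snr(np.array(trials)) >= target_snr:
            break
        trials.append(draw_trial())
    return np.array(trials)

# Synthetic paradigm: a fixed ERP waveform buried in trial-to-trial noise.
rng = np.random.default_rng(2)
erp = np.sin(np.linspace(0, np.pi, 200))
draw = lambda: erp + rng.normal(0, 2.0, erp.size)
recorded = record_until(target_snr=5.0, max_trials=500, draw_trial=draw)
print(len(recorded))
```

    Because noisier subjects need more trials to reach the same threshold, such a rule equalizes residual error across recordings instead of fixing the trial count.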

  19. Late auditory event-related evoked potential (P300) in Down's syndrome patients

    Directory of Open Access Journals (Sweden)

    Carla Patrícia Hernandez Alves Ribeiro César

    2010-04-01

    Full Text Available Down's syndrome is caused by trisomy of chromosome 21 and is associated with central auditory processing deficits, learning disability and, probably, early-onset Alzheimer's disease. AIM: To evaluate the latencies and amplitudes of the late auditory event-related potential (P300) and their changes in young adults with Down's syndrome. MATERIALS AND METHODS: Prospective case study. P300 latencies and amplitudes were evaluated in 17 individuals with Down's syndrome and 34 healthy individuals. RESULTS: P300 latencies (N1, P2, N2 and P3) were prolonged and the N2-P3 amplitude was reduced in individuals with Down's syndrome compared to the control group. CONCLUSION: In young adults with Down's syndrome, the N1, P2, N2 and P3 latencies of the late auditory event-related potential (P300) were prolonged and the N2-P3 amplitude was significantly reduced, suggesting impaired integration of the auditory association area with cortical and subcortical areas of the central nervous system.

  20. Simultaneity and Temporal Order Judgments Are Coded Differently and Change With Age: An Event-Related Potential Study

    Directory of Open Access Journals (Sweden)

    Aysha Basharat

    2018-04-01

    Full Text Available Multisensory integration is required for a number of daily living tasks where the inability to accurately identify simultaneity and temporality of multisensory events results in errors in judgment leading to poor decision-making and dangerous behavior. Previously, our lab discovered that older adults exhibited impaired timing of audiovisual events, particularly when making temporal order judgments (TOJs). Simultaneity judgments (SJs), however, were preserved across the lifespan. Here, we investigate the difference between the TOJ and SJ tasks in younger and older adults to assess neural processing differences between these two tasks and across the lifespan. Event-related potentials (ERPs) were studied to determine between-task and between-age differences. Results revealed task-specific differences in perceiving simultaneity and temporal order, suggesting that each task may be subserved via different neural mechanisms. Here, auditory N1 and visual P1 ERP amplitudes confirmed that unisensory processing of audiovisual stimuli did not differ between the two tasks within both younger and older groups, indicating that performance differences between tasks arise either from multisensory integration or higher-level decision-making. Compared to younger adults, older adults showed a sustained higher auditory N1 ERP amplitude response across SOAs, suggestive of broader response properties from an extended temporal binding window. Our work provides compelling evidence that different neural mechanisms subserve the SJ and TOJ tasks and that simultaneity and temporal order perception are coded differently and change with age.

  1. Musical metaphors: evidence for a spatial grounding of non-literal sentences describing auditory events.

    Science.gov (United States)

    Wolter, Sibylla; Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara

    2015-03-01

    This study investigated whether the spatial terms high and low, when used in sentence contexts implying a non-literal interpretation, trigger similar spatial associations as would have been expected from the literal meaning of the words. In three experiments, participants read sentences describing either a high or a low auditory event (e.g., The soprano sings a high aria vs. The pianist plays a low note). In all experiments, participants were asked to judge (yes/no) whether the sentences were meaningful by means of up/down (Experiments 1 and 2) or left/right (Experiment 3) key-press responses. Contrary to previous studies reporting that metaphorical language understanding differs from literal language understanding with regard to simulation effects, the results show compatibility effects between sentence-implied pitch height and response location. The results are in line with grounded models of language comprehension, which propose that sensorimotor experiences are elicited when processing literal as well as non-literal sentences. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher-order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective for conspecific songs relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  3. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses.

    Science.gov (United States)

    Molloy, Katharine; Griffiths, Timothy D; Chait, Maria; Lavie, Nilli

    2015-12-09

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying "inattentional deafness"--the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼ 100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 "awareness" response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory

  4. A new method for detecting interactions between the senses in event-related potentials

    DEFF Research Database (Denmark)

    Gondan, Matthias; Röder, B.

    2006-01-01

    Event-related potentials (ERPs) can be used in multisensory research to determine the point in time when different senses start to interact, for example, the auditory and the visual system. For this purpose, the ERP to bimodal stimuli (AV) is often compared to the sum of the ERPs to auditory (A) and visual (V) stimuli: AV - (A + V). If the result is non-zero, this is interpreted as an indicator for multisensory interactions. Using this method, several studies have demonstrated auditory-visual interactions as early as 50 ms after stimulus onset. The subtraction requires that A, V, and AV do not contain common activity: This activity would be subtracted twice from one ERP and would, therefore, contaminate the result. In the present study, ERPs to unimodal, bimodal, and trimodal auditory, visual, and tactile stimuli (T) were recorded. We demonstrate that (T + TAV) - (TA + TV) is equivalent to AV...
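    The logic of the proposed contrast can be checked with toy numbers: any activity common to all conditions is subtracted twice in AV - (A + V), but cancels in (T + TAV) - (TA + TV). A minimal sketch with hypothetical amplitudes, not data from the study:

```python
# Toy single-time-point amplitudes (all hypothetical, in microvolts).
# Each recorded ERP is modality-specific activity plus common activity C
# present in every condition (e.g., anticipatory potentials).
C = 2.0                    # common activity
a, v, t = 1.0, 1.5, 0.8    # unisensory components
av_int = 0.4               # genuine audiovisual interaction

A   = a + C
V   = v + C
T   = t + C
AV  = a + v + av_int + C
TA  = t + a + C
TV  = t + v + C
TAV = t + a + v + av_int + C

classic  = AV - (A + V)           # C subtracted twice: biased by -C
proposed = (T + TAV) - (TA + TV)  # C cancels: isolates the AV interaction

print(round(classic, 2), round(proposed, 2))  # → -1.6 0.4
```

    With these numbers, the classic subtraction returns av_int - C (a spurious negativity), while the proposed contrast recovers the interaction term exactly.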

  5. Changes in event-related potential functional networks predict traumatic brain injury in piglets.

    Science.gov (United States)

    Atlan, Lorre S; Lan, Ingrid S; Smith, Colin; Margulies, Susan S

    2018-06-01

    Traumatic brain injury is a leading cause of cognitive and behavioral deficits in children in the US each year. None of the current diagnostic tools, such as quantitative cognitive and balance tests, has been validated to identify mild traumatic brain injury in infants, adults and animals. In this preliminary study, we report a novel, quantitative tool that has the potential to quickly and reliably diagnose traumatic brain injury and to track the state of the brain during recovery across multiple ages and species. Using 32 scalp electrodes, we recorded involuntary auditory event-related potentials from 22 awake four-week-old piglets one day before and one, four, and seven days after two different injury types (diffuse and focal) or sham. From these recordings, we generated event-related potential functional networks and assessed whether the patterns of the observed changes in these networks could distinguish brain-injured piglets from non-injured ones. Piglet brains exhibited significant changes after injury, as evaluated by five network metrics. The injury prediction algorithm developed from our analysis of the changes in the event-related potential functional networks ultimately produced a tool with 82% predictive accuracy. This novel approach is the first application of auditory event-related potential functional networks to the prediction of traumatic brain injury. The resulting tool is a robust, objective and predictive method that offers promise for detecting mild traumatic brain injury, in particular because collecting event-related potential data is noninvasive and inexpensive. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
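    The record does not specify how the functional networks or the five metrics were computed; one common approach is to correlate ERP waveforms across channels, threshold the correlations into edges, and summarize the resulting graph. A hedged sketch of that generic pipeline, with edge density as one illustrative metric and all values hypothetical:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length waveforms."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (sx * sy)

def erp_network_density(erps, threshold=0.8):
    """Link two channels when their ERP waveforms correlate strongly,
    then report edge density (one simple graph metric among many)."""
    n = len(erps)
    edges = sum(1 for i in range(n) for j in range(i + 1, n)
                if abs(pearson(erps[i], erps[j])) >= threshold)
    return edges / (n * (n - 1) / 2)

# Three toy "channels": two nearly identical waveforms, one unrelated.
ch1 = [0.0, 1.0, 3.0, 1.0, 0.0, -1.0]
ch2 = [0.1, 1.1, 2.9, 1.0, 0.1, -0.9]
ch3 = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
print(round(erp_network_density([ch1, ch2, ch3]), 3))  # → 0.333
```

    Injury-related changes in such graph metrics across recording sessions are the kind of features a classifier like the one described could be trained on.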

  6. Linear and nonlinear auditory response properties of interneurons in a high-order avian vocal motor nucleus during wakefulness.

    Science.gov (United States)

    Raksin, Jonathan N; Glaze, Christopher M; Smith, Sarah; Schmidt, Marc F

    2012-04-01

    Motor-related forebrain areas in higher vertebrates also show responses to passively presented sensory stimuli. However, sensory tuning properties in these areas, especially during wakefulness, and their relation to perception are poorly understood. In the avian song system, HVC (proper name) is a vocal-motor structure with auditory responses well defined under anesthesia but poorly characterized during wakefulness. We used a large set of stimuli, including the bird's own song (BOS) and many conspecific songs (CON), to characterize auditory tuning properties in putative interneurons (HVC(IN)) during wakefulness. Our findings suggest that HVC contains a diversity of responses that vary in overall excitability to auditory stimuli, as well as in the bias of spike rate increases to BOS over CON. We used statistical tests to classify cells in order to further probe auditory responses, yielding one-third of neurons that were either unresponsive or suppressed and two-thirds with excitatory responses to one or more stimuli. A subset of excitatory neurons was tuned exclusively to BOS and showed very low linearity as measured by spectrotemporal receptive field (STRF) analysis. The remaining excitatory neurons responded well to CON stimuli, although many cells still expressed a bias toward BOS. These findings suggest the concurrent presence of a nonlinear and a linear component to responses in HVC, even within the same neuron. These characteristics are consistent with perceptual deficits in distinguishing BOS from CON stimuli following lesions of HVC and other song nuclei, and suggest mirror neuron-like qualities in which "self" (here BOS) is used as a referent to judge "other" (here CON).

  7. Is the auditory evoked P2 response a biomarker of learning?

    Directory of Open Access Journals (Sweden)

    Kelly Tremblay

    2014-02-01

    Full Text Available Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography and magnetoencephalography have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on the P1-N1-P2 auditory evoked responses (AEPs), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine whether the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language, so that coinciding changes in evoked neural activity could be characterized. To differentiate the possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared them to those of participants, matched in time and stimulus exposure, who did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. Moreover, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre

  8. Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.

    Directory of Open Access Journals (Sweden)

    Francisco Jose Alvarez

    Full Text Available Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study was to examine the effect of perinatal asphyxia on the auditory pathway by recording auditory brainstem responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1-3 day-old piglets by clamping both carotid arteries with vascular occluders for 30 minutes and lowering the fraction of inspired oxygen. We compared the auditory brainstem responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during the 6 h after the HI injury. Auditory brainstem responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.

  9. Response actions influence the categorization of directions in auditory space

    Directory of Open Access Journals (Sweden)

    Marcella de Castro Campos Velten

    2015-08-01

    Full Text Available Spatial region concepts such as front, back, left and right reflect our typical interaction with space, and the corresponding surrounding regions have different statuses in memory. We examined the representation of spatial directions in auditory space, specifically the extent to which natural response actions, such as orientation movements towards a sound source, affect the categorization of egocentric auditory space. While standing in the middle of a circle of 16 loudspeakers, participants were presented with acoustic stimuli from the loudspeakers in randomized order and verbally described their directions using the concept labels front, back, left, right, front-right, front-left, back-right and back-left. Response actions varied across three blocked conditions: (1) facing front, (2) turning the head and upper body to face the stimulus, and (3) turning the head and upper body plus pointing with the hand and outstretched arm towards the stimulus. In addition to a protocol of the verbal utterances, motion capture and video recording generated a detailed corpus for subsequent analysis of the participants' behavior. Chi-square tests revealed an effect of response condition for directions within the left and right sides. We conclude that movement-based response actions influence the representation of auditory space, especially within the lateral regions.

  10. Low-frequency versus high-frequency synchronisation in chirp-evoked auditory brainstem responses

    DEFF Research Database (Denmark)

    Rønne, Filip Munch; Gøtsche-Rasmussen, Kristian

    2011-01-01

    This study investigates the frequency-specific contribution to the auditory brainstem response (ABR) of chirp stimuli. Rising-frequency chirps were designed to compensate for the cochlear traveling-wave delay, and lead to larger wave-V amplitudes than click stimuli as more auditory nerve fibr...
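    A delay-compensating chirp can be sketched from a power-law model of cochlear traveling-wave delay, tau(f) = K * f^(-D): each frequency is presented early by its modelled delay so that apical (low-frequency) and basal (high-frequency) regions respond near-synchronously. The constants below are illustrative assumptions, not the study's actual chirp parameters:

```python
import math

# Hypothetical power-law model of cochlear traveling-wave delay:
# low frequencies reach their place of excitation later, tau(f) = K * f**(-D).
K, D = 0.1, 0.5             # illustrative constants, not the study's values
FS = 16000                  # sample rate (Hz)
F_LO, F_HI = 100.0, 4000.0  # chirp frequency range (Hz)

def delay(f):
    """Modelled traveling-wave delay (s) for frequency f (Hz)."""
    return K * f ** (-D)

# The chirp lasts as long as the delay difference between its end frequencies.
dur = delay(F_LO) - delay(F_HI)

def inst_freq(t):
    """Instantaneous frequency at time t: the frequency whose remaining
    modelled delay equals the time left in the chirp (inverts tau)."""
    return ((dur - t + delay(F_HI)) / K) ** (-1.0 / D)

# Synthesize by accumulating instantaneous frequency into phase.
n = int(dur * FS)
phase, chirp = 0.0, []
for i in range(n):
    phase += 2 * math.pi * inst_freq(i / FS) / FS
    chirp.append(math.sin(phase))

print(round(inst_freq(0.0)), round(inst_freq(dur)), len(chirp))
```

    The sweep starts at F_LO and ends at F_HI by construction; with a realistic delay model the same recipe yields the kind of wave-V-enhancing stimulus the abstract describes.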

  11. Differential Recruitment of Auditory Cortices in the Consolidation of Recent Auditory Fearful Memories.

    Science.gov (United States)

    Cambiaghi, Marco; Grosso, Anna; Renna, Annamaria; Sacchetti, Benedetto

    2016-08-17

    Memories of frightening events require a protracted consolidation process. Sensory cortex, such as the auditory cortex, is involved in the formation of fearful memories with a more complex sensory stimulus pattern. It remains controversial, however, whether the auditory cortex is also required for fearful memories related to simple sensory stimuli. In the present study, we found that, 1 d after training, the temporary inactivation of either the most anterior region of the auditory cortex, including the primary (Te1) cortex, or the most posterior region, which included the secondary (Te2) component, did not affect the retention of recent memories, which is consistent with the current literature. However, at this time point, the inactivation of the entire auditory cortex completely prevented the formation of new memories. Amnesia was site-specific, was not due to impaired perception or processing of the auditory stimuli, and was strictly related to interference with memory consolidation processes. Strikingly, at a later time interval, 4 d after training, blocking the posterior part (encompassing Te2) alone impaired memory retention, whereas the inactivation of the anterior part (encompassing Te1) left memory unaffected. Together, these data show that the auditory cortex is necessary for the consolidation of auditory fearful memories related to simple tones in rats. Moreover, these results suggest that, at early time intervals, memory information is processed in a distributed network composed of both the anterior and the posterior auditory cortical regions, whereas, at late time intervals, memory processing is concentrated in the most posterior part, containing the Te2 region. Memories of threatening experiences undergo a prolonged process of "consolidation" to be maintained for a long time. The dynamics of fearful memory consolidation are poorly understood. Here, we show that 1 d after learning, memory is processed in a distributed network composed of both primary Te1 and

  12. Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance

    Science.gov (United States)

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2013-01-01

    Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…

  13. Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.

    Science.gov (United States)

    Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C

    2015-11-04

    and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed an impaired mismatch negativity response to emotionally relevant frequency-modulated tones, along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of the variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia. Copyright © 2015 the authors.

  14. Visual Temporal Acuity Is Related to Auditory Speech Perception Abilities in Cochlear Implant Users.

    Science.gov (United States)

    Jahn, Kelly N; Stevenson, Ryan A; Wallace, Mark T

    Despite significant improvements in speech perception abilities following cochlear implantation, many prelingually deafened cochlear implant (CI) recipients continue to rely heavily on visual information to develop speech and language. Increased reliance on visual cues for understanding spoken language could lead to the development of unique audiovisual integration and visual-only processing abilities in these individuals. Brain imaging studies have demonstrated that good CI performers, as indexed by auditory-only speech perception abilities, have different patterns of visual cortex activation in response to visual and auditory stimuli as compared with poor CI performers. However, no studies have examined whether speech perception performance is related to any type of visual processing abilities following cochlear implantation. The purpose of the present study was to provide a preliminary examination of the relationship between clinical, auditory-only speech perception tests, and visual temporal acuity in prelingually deafened adult CI users. It was hypothesized that prelingually deafened CI users, who exhibit better (i.e., more acute) visual temporal processing abilities would demonstrate better auditory-only speech perception performance than those with poorer visual temporal acuity. Ten prelingually deafened adult CI users were recruited for this study. Participants completed a visual temporal order judgment task to quantify visual temporal acuity. To assess auditory-only speech perception abilities, participants completed the consonant-nucleus-consonant word recognition test and the AzBio sentence recognition test. Results were analyzed using two-tailed partial Pearson correlations, Spearman's rho correlations, and independent samples t tests. Visual temporal acuity was significantly correlated with auditory-only word and sentence recognition abilities. In addition, proficient CI users, as assessed via auditory-only speech perception performance, demonstrated

  15. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen Kaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  16. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched normal-hearing (NH) listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Is the auditory sensory memory sensitive to visual information?

    Science.gov (United States)

    Besle, Julien; Fort, Alexandra; Giard, Marie-Hélène

    2005-10-01

    The mismatch negativity (MMN) component of auditory event-related brain potentials can be used as a probe to study the representation of sounds in auditory sensory memory (ASM). Yet it has been shown that an auditory MMN can also be elicited by an illusory auditory deviance induced by visual changes. This suggests that some visual information may be encoded in ASM and is accessible to the auditory MMN process. It is not known, however, whether visual information affects ASM representation for any audiovisual event or whether this phenomenon is limited to specific domains in which strong audiovisual illusions occur. To highlight this issue, we have compared the topographies of MMNs elicited by non-speech audiovisual stimuli deviating from audiovisual standards on the visual, the auditory, or both dimensions. Contrary to what occurs with audiovisual illusions, each unimodal deviant elicited sensory-specific MMNs, and the MMN to audiovisual deviants included both sensory components. The visual MMN was, however, different from a genuine visual MMN obtained in a visual-only control oddball paradigm, suggesting that auditory and visual information interacts before the MMN process occurs. Furthermore, the MMN to audiovisual deviants was significantly different from the sum of the two sensory-specific MMNs, showing that the processes of visual and auditory change detection are not completely independent.

  18. Functional significance of the electrocorticographic auditory responses in the premotor cortex

    Directory of Open Access Journals (Sweden)

    Kazuyo Tanji

    2015-03-01

    Full Text Available Beyond the well-known motor activity in the precentral gyrus, functional magnetic resonance imaging (fMRI) studies have found that the ventral part of the precentral gyrus is activated in response to linguistic auditory stimuli. It has been proposed that the premotor cortex in the precentral gyrus is responsible for the comprehension of speech, but the precise function of this area is still debated because patients with frontal lesions that include the precentral gyrus do not exhibit disturbances in speech comprehension. We report on a patient who underwent resection of a tumor in the precentral gyrus, with electrocorticographic recordings made while she performed a verb generation task during an awake craniotomy. Consistent with previous fMRI studies, high-gamma band auditory activity was observed in the precentral gyrus. Due to the location of the tumor, the patient underwent resection of the auditory-responsive precentral area, which resulted in the post-operative expression of a characteristic articulatory disturbance known as apraxia of speech (AOS). The language function of the patient was otherwise preserved and she exhibited intact comprehension of both spoken and written language. The present findings demonstrate that a lesion restricted to the ventral precentral gyrus is sufficient for the expression of AOS, and suggest that the auditory-responsive area plays an important role in the execution of fluent speech rather than the comprehension of speech. These findings also confirm that the function of the premotor area is predominantly motor in nature, and that its sensory responses are more consistent with the ‘sensory theory of speech production’, in which it was proposed that sensory representations are used to guide motor-articulatory processes.

  19. Auditory stimulus timing influences perceived duration of co-occurring visual stimuli

    Directory of Open Access Journals (Sweden)

    Vincenzo Romei

    2011-09-01

    Full Text Available There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of the flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.
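    The sensitivity index d' used above is the difference of z-transformed hit and false-alarm rates. A minimal sketch with hypothetical rates, not the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for congruent vs. incongruent audiovisual trials.
congruent = d_prime(0.85, 0.20)    # more hits, fewer false alarms
incongruent = d_prime(0.60, 0.40)  # near-chance discrimination
print(round(congruent, 2), round(incongruent, 2))
```

    A larger d' for the congruent condition reflects a genuine sensitivity change, whereas a pure response bias would shift both hit and false-alarm rates together and leave d' unchanged.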

  20. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    Directory of Open Access Journals (Sweden)

    Françoise Lecaignard

    2015-09-01

    Full Text Available Deviant stimuli, violating regularities in a sensory environment, elicit the Mismatch Negativity (MMN), largely described in the event-related potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process in which prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulation by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule, and therefore departs from previous studies that investigated violations of regularities at different time scales. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing in which predictive coding enables implicit extraction of environmental regularities.
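    The prediction that mismatch responses shrink as deviants become predictable can be caricatured with a simple error-driven learner that tracks deviant probability conditioned on the number of tones since the last deviant. This is an illustrative toy model, not the authors' analysis:

```python
import random
from collections import defaultdict

def deviant_errors(sequence, lr=0.3):
    """Track p(deviant | tones since last deviant) with a delta rule and
    return the prediction error registered at each deviant (MMN-like)."""
    p = defaultdict(lambda: 0.1)      # one belief per gap length
    gap, errors = 0, []
    for tone in sequence:             # tone: 1 = deviant, 0 = standard
        err = (1.0 - p[gap]) if tone else p[gap]
        p[gap] += lr * (tone - p[gap])
        if tone:
            errors.append(err)
            gap = 0
        else:
            gap += 1
    return errors

# Same deviant rate, different temporal structure.
predictable = [0, 0, 1] * 20          # deviant always after two standards
random.seed(1)
irregular = [0] * 40 + [1] * 20
random.shuffle(irregular)             # deviants at unpredictable positions

late_pred = sum(deviant_errors(predictable)[-5:]) / 5
late_irr = sum(deviant_errors(irregular)[-5:]) / 5
print(late_pred < late_irr)           # regular timing -> smaller late error
```

    With a regular structure the learner's belief at the critical gap converges toward 1 and the late prediction error nearly vanishes, mirroring the reported MMN decrease with predictability.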

  1. Hippocampal P3-like auditory event-related potentials are disrupted in a rat model of cholinergic degeneration in Alzheimer's disease: reversal by donepezil treatment.

    Science.gov (United States)

    Laursen, Bettina; Mørk, Arne; Kristiansen, Uffe; Bastlund, Jesper Frank

    2014-01-01

    P300 (P3) event-related potentials (ERPs) have been suggested to be an endogenous marker of cognitive function and auditory oddball paradigms are frequently used to evaluate P3 ERPs in clinical settings. Deficits in P3 amplitude and latency reflect some of the neurological dysfunctions related to several psychiatric and neurological diseases, e.g., Alzheimer's disease (AD). However, only a very limited number of rodent studies have addressed the back-translational validity of the P3-like ERPs as suitable markers of cognition. Thus, the potential of rodent P3-like ERPs to predict pro-cognitive effects in humans remains to be fully validated. The current study characterizes P3-like ERPs in the 192-IgG-SAP (SAP) rat model of the cholinergic degeneration associated with AD. Following training in a combined auditory oddball and lever-press setup, rats were subjected to bilateral intracerebroventricular infusion of 1.25 μg SAP or PBS (sham lesion) and recording electrodes were implanted in hippocampal CA1. Relative to sham-lesioned rats, SAP-lesioned rats had significantly reduced amplitude of P3-like ERPs. P3 amplitude was significantly increased in SAP-treated rats following pre-treatment with 1 mg/kg donepezil. Infusion of SAP reduced the hippocampal choline acetyltransferase activity by 75%. Behaviorally defined cognitive performance was comparable between treatment groups. The present study suggests that AD-like deficits in P3-like ERPs may be mimicked by the basal forebrain cholinergic degeneration induced by SAP. SAP-lesioned rats may constitute a suitable model to test the efficacy of pro-cognitive substances in an applied experimental setup.

  2. Neural responses to complex auditory rhythms: the role of attending

    Directory of Open Access Journals (Sweden)

    Heather L Chapin

    2010-12-01

    Full Text Available The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus.

  3. Auditory laterality in a nocturnal, fossorial marsupial (Lasiorhinus latifrons) in response to bilateral stimuli.

    Science.gov (United States)

    Descovich, K A; Reints Bok, T E; Lisle, A T; Phillips, C J C

    2013-01-01

    Behavioural lateralisation is evident across most animal taxa, although few marsupial and no fossorial species have been studied. Twelve wombats (Lasiorhinus latifrons) were bilaterally presented with eight sounds from different contexts (threat, neutral, food) to test for auditory laterality. Head turns were recorded prior to and immediately following sound presentation. Behaviour was recorded for 150 seconds after presentation. Although sound differentiation was evident from the amount of exploration, vigilance, and grooming performed after different sound types, this did not result in different patterns of head-turn direction. Similarly, left-right proportions of head turns, walking events, and food approaches in the post-sound period were comparable across sound types. A comparison of head turns performed before and after sound showed a significant change in turn direction (χ²(1) = 10.65, p = .001) from a left preference during the pre-sound period (mean 58% left head turns, CI 49-66%) to a right preference in the post-sound period (mean 43% left head turns, CI 40-45%). This provides evidence of a right auditory bias in response to the presentation of the sound. This study therefore demonstrates that laterality is evident in southern hairy-nosed wombats in response to a sound stimulus, although side biases were not altered by sounds of varying context.

  4. Transient and steady-state auditory gamma-band responses in first-degree relatives of people with autism spectrum disorder

    Directory of Open Access Journals (Sweden)

    Rojas Donald C

    2011-07-01

    Full Text Available Abstract Background Stimulus-related γ-band oscillations, which may be related to perceptual binding, are reduced in people with autism spectrum disorders (ASD). The purpose of this study was to examine auditory transient and steady-state γ-band findings in first-degree relatives of people with ASD to assess the potential familiality of these findings in ASD. Methods Magnetoencephalography (MEG) recordings were obtained in 21 parents who had a child with an autism spectrum disorder (pASD) and 20 healthy adult control subjects (HC). Gamma-band phase-locking factor (PLF) and evoked and induced power to 32, 40 and 48 Hz amplitude-modulated sounds were measured for transient and steady-state responses. Participants were also tested on a number of behavioral and cognitive assessments related to the broad autism phenotype (BAP). Results Reliable group differences were seen primarily for steady-state responses. In the left hemisphere, pASD subjects exhibited lower phase-locked steady-state power in all three conditions. Total γ-band power, including the non-phase-locked component, was also reduced in the pASD group. In addition, pASD subjects had significantly lower PLF than the HC group. Correlations were seen between MEG measures and BAP measures. Conclusions The reduction in steady-state γ-band responses in the pASD group is consistent with previous results for children with ASD. Steady-state responses may be more sensitive than transient responses to phase-locking errors in ASD. Together with the lower PLF and phase-locked power in first-degree relatives, correlations between γ-band measures and behavioral measures relevant to the BAP highlight the potential of γ-band deficits as a new autism endophenotype.

  5. Diminished auditory sensory gating during active auditory verbal hallucinations.

    Science.gov (United States)

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the peak amplitude of the event-related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired-click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). A mixed-model ANOVA revealed an overall effect of AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link the onset of AVH with a failure of an empirically defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
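The S2/S1 gating ratio defined above is straightforward to compute. A minimal sketch in Python; the peak amplitudes are hypothetical values, not data from the study:

```python
# Sketch of the S2/S1 sensory gating ratio described above.
# Peak amplitudes (in microvolts) are hypothetical illustration values.

def gating_ratio(s1_peak, s2_peak):
    """ERP peak to the second click (S2) over the peak to the first (S1).
    Higher ratios indicate weaker suppression of S2, i.e. poorer gating."""
    if s1_peak == 0:
        raise ValueError("S1 peak amplitude must be non-zero")
    return s2_peak / s1_peak

# Typical pattern: intact gating suppresses S2 strongly; impaired gating does not.
healthy = gating_ratio(s1_peak=4.0, s2_peak=1.0)   # 0.25
impaired = gating_ratio(s1_peak=4.0, s2_peak=3.2)  # 0.8
print(healthy, impaired)
```

In the study above, such ratios were computed separately per component (P50, N100, P200) and per state (AVH-on, AVH-off).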

  6. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a visual signal paired with auditory stimuli of different frequencies.
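ERP effects in latency windows such as 190-210 ms are often summarized as the mean amplitude over the window. A minimal sketch in Python, with a hypothetical sampling rate and toy waveform:

```python
# Sketch: mean ERP amplitude within a post-stimulus latency window.
# Sampling rate and waveform values are hypothetical, not from the study.

def window_mean(erp, fs, t0_ms, t1_ms):
    """Mean amplitude of `erp` between t0_ms and t1_ms after stimulus onset.
    Onset is assumed to be at sample index 0; fs is in Hz."""
    i0 = int(round(t0_ms / 1000 * fs))
    i1 = int(round(t1_ms / 1000 * fs))
    window = erp[i0:i1]
    return sum(window) / len(window)

fs = 1000  # 1 kHz sampling: one sample per millisecond
erp = [0.0] * 190 + [2.0] * 20 + [0.0] * 90  # 300 ms of toy data
m = window_mean(erp, fs, 190, 210)  # the 190-210 ms window used above
print(m)
```

Integration effects like those reported above are then tested by comparing such window means between audiovisual and summed unimodal conditions.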

  7. Localized brain activation related to the strength of auditory learning in a parrot.

    Directory of Open Access Journals (Sweden)

    Hiroko Eda-Fujiwara

    Full Text Available Parrots and songbirds learn their vocalizations from a conspecific tutor, much like human infants acquire spoken language. Parrots can learn human words and it has been suggested that they can use them to communicate with humans. The caudomedial pallium in the parrot brain is homologous with that of songbirds, and analogous to the human auditory association cortex, involved in speech processing. Here we investigated neuronal activation, measured as expression of the protein product of the immediate early gene ZENK, in relation to auditory learning in the budgerigar (Melopsittacus undulatus), a parrot. Budgerigar males successfully learned to discriminate two Japanese words spoken by another male conspecific. Re-exposure to the two discriminanda led to increased neuronal activation in the caudomedial pallium, but not in the hippocampus, compared to untrained birds that were exposed to the same words, or were not exposed to words. Neuronal activation in the caudomedial pallium of the experimental birds was correlated significantly and positively with the percentage of correct responses in the discrimination task. These results suggest that in a parrot, the caudomedial pallium is involved in auditory learning. Thus, in parrots, songbirds and humans, analogous brain regions may contain the neural substrate for auditory learning and memory.

  8. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions are known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially the posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimulus onset. Correspondingly, differences between responses to auditory stimuli alone and to combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields beyond approximately 110 ms after stimulus onset. These data indicate that visual influences are most salient in fields P and DCB, where they manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Hostile attribution biases for relationally provocative situations and event-related potentials.

    Science.gov (United States)

    Godleski, Stephanie A; Ostrov, Jamie M; Houston, Rebecca J; Schlienz, Nicolas J

    2010-04-01

    This exploratory study investigates how hostile attribution biases for relationally provocative situations may be related to neurocognitive processing using the P300 event-related potential. Participants were 112 (45 women) emerging adults enrolled in a large, public university in upstate New York. Participants completed self-report measures of relational aggression and hostile attribution biases and performed an auditory perseveration task to elicit the P300. Hostile attribution biases for relational provocation situations were associated with a larger P300 amplitude above and beyond the role of hostile attribution biases for instrumental situations, relational aggression, and gender. Larger P300 amplitude is interpreted to reflect greater allocation of cognitive resources or enhanced "attending" to salient stimuli. Implications for methodological approaches to studying aggression and hostile attribution biases and for theory are discussed, as well as implications for the fields of developmental psychology and psychopathology. Copyright 2010 Elsevier B.V. All rights reserved.

  10. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens

    2012-01-01

    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the responses of human listeners. In a number of tests, the memory model predicts listeners' responses better than traditional dissonance measures.

  11. Distraction task rather than focal attention modulates gamma activity associated with auditory steady-state responses (ASSRs)

    DEFF Research Database (Denmark)

    Griskova-Bulanova, Inga; Ruksenas, Osvaldas; Dapsys, Kastytis

    2011-01-01

    To explore the modulation of the auditory steady-state response (ASSR) by experimental tasks differing in attentional focus and arousal level.

  12. Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention

    Directory of Open Access Journals (Sweden)

    Hsin-I Liao

    2016-02-01

    Full Text Available A unique sound that deviates from a repetitive background sound induces signature neural responses, such as the mismatch negativity and novelty P3 response in electroencephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residual attention on to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention.

  13. Human Pupillary Dilation Response to Deviant Auditory Stimuli: Effects of Stimulus Properties and Voluntary Attention.

    Science.gov (United States)

    Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto

    2016-01-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as the mismatch negativity and novelty P3 response in electroencephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residual attention on to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention.
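Pupillary responses such as these are commonly quantified after subtracting a pre-stimulus baseline. A minimal sketch in Python; the sampling rate, baseline window, and trace values are illustrative, not from the study:

```python
# Sketch: baseline-corrected pupillary dilation response (PDR).
# Sampling rate, baseline window, and sample values are hypothetical.

def baseline_correct(pupil, fs, baseline_s=1.0):
    """Subtract the mean pre-stimulus pupil diameter from every sample.
    `pupil` holds diameter samples; stimulus onset is at index fs*baseline_s."""
    n_base = int(fs * baseline_s)
    base = sum(pupil[:n_base]) / n_base
    return [p - base for p in pupil]

fs = 4  # hypothetical 4 Hz sampling, kept small for brevity
trace = [3.0, 3.0, 3.0, 3.0,   # 1-s pre-stimulus baseline
         3.1, 3.3, 3.6, 3.5]   # dilation after oddball onset
pdr = baseline_correct(trace, fs)
peak_dilation = max(pdr)
print(peak_dilation)
```

Comparing such peak (or mean) dilations across oddball types and attention conditions mirrors the analysis logic described above.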

  14. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking

    Directory of Open Access Journals (Sweden)

    An-Chieh Chang

    2016-09-01

    Full Text Available Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.
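If each sequence's tone parameter is modeled as Gaussian, the information divergence DKL mentioned above has a closed form. A minimal sketch in Python (the Gaussian model and the example values are illustrative assumptions, not the article's exact formulation):

```python
import math

def dkl_gaussian(mu1, sd1, mu2, sd2):
    """KL divergence D(N(mu1, sd1^2) || N(mu2, sd2^2)) in nats."""
    return (math.log(sd2 / sd1)
            + (sd1 ** 2 + (mu1 - mu2) ** 2) / (2 * sd2 ** 2)
            - 0.5)

# Identical tone-frequency distributions diverge by zero; the divergence
# grows as the mean separation increases relative to the spread.
print(dkl_gaussian(1000, 50, 1000, 50))  # 0.0
print(dkl_gaussian(1100, 50, 1000, 50))  # 2.0
```

Under the article's prediction, both streaming and masking performance would be plotted as a common function of such a divergence when the sequence statistics are symmetric.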

  15. Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates.

    Science.gov (United States)

    Liu, Ying; Fan, Hao; Li, Jingting; Jones, Jeffery A; Liu, Peng; Zhang, Baofeng; Liu, Hanjun

    2018-01-01

    When people hear unexpected perturbations in auditory feedback, they produce rapid compensatory adjustments of their vocal behavior. Recent evidence has shown enhanced vocal compensations and cortical event-related potentials (ERPs) in response to attended pitch feedback perturbations, suggesting that this reflex-like behavior is influenced by selective attention. Less is known, however, about auditory-motor integration for voice control during divided attention. The present cross-modal study investigated the behavioral and ERP correlates of auditory feedback control of vocal pitch production during divided attention. During the production of sustained vowels, 32 young adults were instructed to attend simultaneously to both the pitch feedback perturbations they heard and the flashing red lights they saw. The presentation rate of the visual stimuli was varied to produce a low, intermediate, and high attentional load. The behavioral results showed that the low-load condition elicited significantly smaller vocal compensations for pitch perturbations than the intermediate-load and high-load conditions. The cortical processing of vocal pitch feedback was likewise modulated as a function of divided attention. When compared to the low-load and intermediate-load conditions, the high-load condition elicited significantly larger N1 responses and smaller P2 responses to pitch perturbations. These findings provide the first neurobehavioral evidence that divided attention can modulate auditory feedback control of vocal pitch production.

  16. Is GABA neurotransmission enhanced in auditory thalamus relative to inferior colliculus?

    Science.gov (United States)

    Cai, Rui; Kalappa, Bopanna I.; Brozoski, Thomas J.; Ling, Lynne L.

    2013-01-01

    Gamma-aminobutyric acid (GABA) is the major inhibitory neurotransmitter in the central auditory system. Sensory thalamic structures show high levels of non-desensitizing extrasynaptic GABAA receptors (GABAARs) and a reduction in the redundancy of coded information. The present study compared the inhibitory potency of GABA acting at GABAARs between the inferior colliculus (IC) and the medial geniculate body (MGB) using quantitative in vivo, in vitro, and ex vivo experimental approaches. In vivo single-unit studies compared the ability of half-maximal inhibitory concentrations of GABA to inhibit sound-evoked temporal responses, and found that GABA was two to three times more potent in MGB than in IC. Complementary measurements of GABA levels suggested a trend towards higher GABA concentrations in MGB than in IC. Collectively, these studies suggest that, per unit GABA, high-affinity extrasynaptic and synaptic GABAARs confer a significant inhibitory GABAAR advantage to MGB neurons relative to IC neurons. This increased GABA sensitivity likely underpins the vital filtering role of the auditory thalamus. PMID:24155003

  17. P300 as a measure of processing capacity in auditory and visual domains in specific language impairment.

    Science.gov (United States)

    Evans, Julia L; Selinger, Craig; Pollak, Seth D

    2011-05-10

    This study examined the electrophysiological correlates of auditory and visual working memory in children with Specific Language Impairment (SLI). Children with SLI and age-matched controls (11;9-14;10) completed visual and auditory working memory tasks while event-related potentials (ERPs) were recorded. In the auditory condition, children with SLI performed similarly to controls when the memory load was kept low (1-back memory load). As expected, when demands on auditory working memory were higher, children with SLI showed decreases in accuracy and attenuated P3b responses. However, children with SLI also evinced difficulties in the visual working memory tasks. In both the low (1-back) and high (2-back) memory load conditions, P3b amplitude was significantly lower for the SLI group than for the age-matched controls. These data suggest a domain-general working memory deficit in SLI that is manifested across auditory and visual modalities. Copyright © 2010 Elsevier B.V. All rights reserved.

  18. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany Plakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  19. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  20. Post-spike hyperpolarization participates in the formation of auditory behavior-related response patterns of inferior collicular neurons in Hipposideros pratti.

    Science.gov (United States)

    Li, Y-L; Fu, Z-Y; Yang, M-J; Wang, J; Peng, K; Yang, L-J; Tang, J; Chen, Q-C

    2015-03-19

    To probe the mechanism underlying the auditory behavior-related response patterns of inferior collicular neurons to constant frequency-frequency modulation (CF-FM) stimulus in Hipposideros pratti, we studied the role of post-spike hyperpolarization (PSH) in the formation of response patterns. Neurons obtained by in vivo extracellular (N=145) and intracellular (N=171) recordings could be consistently classified into single-on (SO) and double-on (DO) neurons. Using intracellular recording, we found that both SO and DO neurons have a PSH with different durations. Statistical analysis showed that most SO neurons had a longer PSH duration than DO neurons (p<0.01). These data suggested that the PSH directly participated in the formation of SO and DO neurons, and the PSH elicited by the CF component was the main synaptic mechanism underlying the SO and DO response patterns. The possible biological significance of these findings relevant to bat echolocation is discussed. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. Neuroscience illuminating the influence of auditory or phonological intervention on language-related deficits

    Directory of Open Access Journals (Sweden)

    Sari Ylinen

    2015-02-01

    Full Text Available Remediation programs for language-related learning deficits are urgently needed to enable equal opportunities in education. To meet this need, different training and intervention programs have been developed. Here we review, from an educational perspective, studies that have explored the neural basis of behavioral changes induced by auditory or phonological training in dyslexia, specific language impairment (SLI), and language-learning impairment (LLI). Training has been shown to induce plastic changes in deficient neural networks. In dyslexia, these include, most consistently, increased or normalized activation of previously hypoactive inferior frontal and occipito-temporal areas. In SLI and LLI, studies have shown the strengthening of previously weak auditory brain responses as a result of training. The combination of behavioral and brain measures of remedial gains has the potential to increase understanding of the causes of language-related deficits, which may help to target remedial interventions more accurately to the core problem.

  2. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates.

    Science.gov (United States)

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-07-20

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys.

  3. Comparison on driving fatigue related hemodynamics activated by auditory and visual stimulus

    Science.gov (United States)

    Deng, Zishan; Gao, Yuan; Li, Ting

    2018-02-01

    As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving require new techniques. Since functional near-infrared spectroscopy (fNIRS) can detect cerebral hemodynamic responses, it is a promising candidate for fatigue-level detection. Here, we performed three different kinds of experiments on a driver and recorded his cerebral hemodynamic responses during long hours of driving, using our fNIRS-based device. Each experiment lasted 7 hours, and one of three specific tests, probing the driver's response to sounds, traffic-light scenes, or direction-sign scenes, was administered every hour. The results showed that, in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and visual stimuli from traffic-light scenes more readily than those from direction signs. We also found that fatigue-related hemodynamics increased fastest for auditory stimuli, then for traffic-light scenes, and slowest for direction-sign scenes. Our study compared auditory, visual-color, and visual-character stimuli in their propensity to cause driving fatigue, which is meaningful for driving safety management.
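    fNIRS recovers hemodynamic responses of this kind by converting optical-density changes at two wavelengths into oxy-/deoxyhemoglobin concentration changes via the modified Beer-Lambert law. A minimal sketch, assuming representative extinction coefficients and probe geometry (none of these values come from the study):

```python
import numpy as np

# Molar extinction coefficients in 1/(mM*cm) at 760 and 850 nm,
# rows = wavelengths, columns = [HbO, HbR]. Representative values only.
E = np.array([[1.4866, 3.8437],   # 760 nm
              [2.5264, 1.7986]])  # 850 nm
d = 3.0    # source-detector separation in cm (assumed)
dpf = 6.0  # differential pathlength factor (assumed)

def mbll(delta_od):
    """Solve delta_OD = E @ [dHbO, dHbR] * d * dpf for the
    concentration changes (modified Beer-Lambert law)."""
    return np.linalg.solve(E * d * dpf, delta_od)

# Less absorption at 760 nm, more at 850 nm -> HbO rises, HbR falls,
# the typical signature of a functional activation
dhbo, dhbr = mbll(np.array([-0.01, 0.02]))
print(dhbo > 0 and dhbr < 0)  # True
```

    The same two-wavelength inversion extends to per-sample time series by applying it column-wise to the recorded optical-density traces.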

  4. The Auditory Enhancement Effect is Not Reflected in the 80-Hz Auditory Steady-State Response

    OpenAIRE

    Carcagno, Samuele; Plack, Christopher J.; Portron, Arthur; Semal, Catherine; Demany, Laurent

    2014-01-01

    The perceptual salience of a target tone presented in a multitone background is increased by the presentation of a precursor sound consisting of the multitone background alone. It has been proposed that this “enhancement” phenomenon results from an effective amplification of the neural response to the target tone. In this study, we tested this hypothesis in humans, by comparing the auditory steady-state response (ASSR) to a target tone that was enhanced by a precursor sound with the ASSR to a...

  5. Neural Hyperactivity of the Central Auditory System in Response to Peripheral Damage

    Directory of Open Access Journals (Sweden)

    Yi Zhao

    2016-01-01

    Full Text Available It is increasingly appreciated that cochlear pathology is accompanied by adaptive responses in the central auditory system. The causes of cochlear pathology vary widely and seem to share few commonalities. Nevertheless, despite intricate internal neuroplasticity and diverse external symptoms, several classical injury models make it feasible to localize central responses to different peripheral cochlear lesions. In these models, hair cell damage may lead to considerable hyperactivity in the central auditory pathways, mediated by a reduction in inhibition, which may underlie some clinical symptoms associated with hearing loss, such as tinnitus. Homeostatic plasticity, the most widely discussed mechanism in recent years, is the most likely driver of this elevated central activity following cochlear damage.

  6. Test-retest reliability of the 40 Hz EEG auditory steady-state response.

    Directory of Open Access Journals (Sweden)

    Kristina L McFadden

    Full Text Available Auditory evoked steady-state responses are increasingly being used as a marker of brain function and dysfunction in various neuropsychiatric disorders, but research investigating the test-retest reliability of this response is lacking. The purpose of this study was to assess the consistency of the auditory steady-state response (ASSR) across sessions. Furthermore, the current study aimed to investigate how the reliability of the ASSR is impacted by stimulus parameters and analysis method employed. The consistency of this response across two sessions spaced approximately 1 week apart was measured in nineteen healthy adults using electroencephalography (EEG). The ASSR was entrained by both 40 Hz amplitude-modulated white noise and click train stimuli. Correlations between sessions were assessed with two separate analytical techniques: (a) a channel-level analysis across the whole-head array and (b) signal-space projection from auditory dipoles. Overall, the ASSR was significantly correlated between sessions 1 and 2 (p<0.05, multiple comparison corrected), suggesting adequate test-retest reliability of this response. The current study also suggests that measures of inter-trial phase coherence may be more reliable between sessions than measures of evoked power. Results were similar between the two analysis methods, but reliability varied depending on the presented stimulus, with click train stimuli producing more consistent responses than white noise stimuli.
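    The two dependent measures compared here, inter-trial phase coherence (ITPC) and evoked power, can both be derived from each trial's complex Fourier coefficient at the 40 Hz stimulation frequency. A numpy sketch on simulated trials (sampling rate, trial count, and SNR are illustrative assumptions):

```python
import numpy as np

def assr_measures(trials, fs, freq):
    """ITPC and evoked power at one frequency.
    trials: array of shape (n_trials, n_samples)."""
    n = trials.shape[1]
    t = np.arange(n) / fs
    basis = np.exp(-2j * np.pi * freq * t)
    coeffs = trials @ basis / n                       # per-trial Fourier coefficient
    itpc = np.abs(np.mean(coeffs / np.abs(coeffs)))   # phase consistency, 0..1
    evoked_power = np.abs(coeffs.mean()) ** 2         # power of the averaged response
    return itpc, evoked_power

# Simulated 40 Hz steady-state response: phase-locked sinusoid in noise
rng = np.random.default_rng(0)
fs, freq = 1000, 40.0
t = np.arange(1000) / fs
trials = 0.5 * np.sin(2 * np.pi * freq * t) + rng.normal(0, 1, (100, 1000))
itpc, power = assr_measures(trials, fs, freq)
print(itpc > 0.9)  # strongly phase-locked -> ITPC near 1
```

    ITPC discards per-trial amplitude and keeps only phase consistency, which is one reason it can behave more stably across sessions than evoked power.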

  7. Multisensory stimuli improve relative localisation judgments compared to unisensory auditory or visual stimuli

    OpenAIRE

    Bizley, Jennifer; Wood, Katherine; Freeman, Laura

    2018-01-01

    Observers performed a relative localisation task in which they reported whether the second of two sequentially presented signals occurred to the left or right of the first. Stimuli were detectability-matched auditory, visual, or auditory-visual signals and the goal was to compare changes in performance with eccentricity across modalities. Visual performance was superior to auditory at the midline, but inferior in the periphery, while auditory-visual performance exceeded both at all locations....

  8. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  9. Benzodiazepine temazepam suppresses the transient auditory 40-Hz response amplitude in humans.

    Science.gov (United States)

    Jääskeläinen, I P; Hirvonen, J; Saher, M; Pekkonen, E; Sillanaukee, P; Näätänen, R; Tiitinen, H

    1999-06-18

    To discern the role of the GABA(A) receptors in the generation and attentive modulation of the transient auditory 40-Hz response, the effects of the benzodiazepine temazepam (10 mg) were studied in 10 healthy social drinkers, using a double-blind placebo-controlled design. Standard tones of 300 Hz and rare deviant tones of 330 Hz were presented to the left ear, and 1000 Hz standards and 1100 Hz deviants to the right ear. Subjects attended to a designated ear and were to detect deviants therein while ignoring tones to the other ear. Temazepam significantly suppressed the amplitude of the 40-Hz response, the effect being equal for attended and non-attended tone responses. This suggests involvement of GABA(A) receptors in the generation of the transient auditory 40-Hz response, but not in its attentive modulation.

  10. Effects of an NMDA antagonist on the auditory mismatch negativity response to transcranial direct current stimulation.

    Science.gov (United States)

    Impey, Danielle; de la Salle, Sara; Baddeley, Ashley; Knott, Verner

    2017-05-01

    Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a weak constant current to alter cortical excitability and activity temporarily. tDCS-induced increases in neuronal excitability and performance improvements have been observed following anodal stimulation of brain regions associated with visual and motor functions, but relatively little research has been conducted with respect to auditory processing. Recently, pilot study results indicate that anodal tDCS can increase auditory deviance detection, whereas cathodal tDCS decreases auditory processing, as measured by a brain-based event-related potential (ERP), mismatch negativity (MMN). As evidence has shown that tDCS lasting effects may be dependent on N-methyl-D-aspartate (NMDA) receptor activity, the current study investigated the use of dextromethorphan (DMO), an NMDA antagonist, to assess possible modulation of tDCS's effects on both MMN and working memory performance. The study, conducted in 12 healthy volunteers, involved four laboratory test sessions within a randomised, placebo and sham-controlled crossover design that compared pre- and post-anodal tDCS over the auditory cortex (2 mA for 20 minutes to excite cortical activity temporarily and locally) and sham stimulation (i.e. device is turned off) during both DMO (50 mL) and placebo administration. Anodal tDCS increased MMN amplitudes with placebo administration. Significant increases were not seen with sham stimulation or with anodal stimulation during DMO administration. With sham stimulation (i.e. no stimulation), DMO decreased MMN amplitudes. Findings from this study contribute to the understanding of underlying neurobiological mechanisms mediating tDCS sensory and memory improvements.

  11. No meditation-related changes in the auditory N1 during first-time meditation.

    Science.gov (United States)

    Barnes, L J; McArthur, G M; Biedermann, B A; de Lissa, P; Polito, V; Badcock, N A

    2018-05-01

    Recent studies link meditation expertise with enhanced low-level attention, measured through auditory event-related potentials (ERPs). In this study, we tested the reliability and validity of a recent finding that the N1 ERP in first-time meditators is smaller during meditation than non-meditation - an effect not present in long-term meditators. In the first experiment, we replicated the finding in first-time meditators. In two subsequent experiments, we discovered that this finding was not due to stimulus-related instructions, but was explained by an effect of the order of conditions. Extended exposure to the same tones has been linked with N1 decrement in other studies, and may explain N1 decrement across our two conditions. We give examples of existing meditation and ERP studies that may include similar condition order effects. The role of condition order among first-time meditators in this study indicates the importance of counterbalancing meditation and non-meditation conditions in meditation studies that use event-related potentials. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Evidence for a neurophysiologic auditory deficit in children with benign epilepsy with centro-temporal spikes.

    Science.gov (United States)

    Liasis, A; Bamiou, D E; Boyd, S; Towell, A

    2006-07-01

    Benign focal epilepsy in childhood with centro-temporal spikes (BECTS) is one of the most common forms of epilepsy. Recent studies have questioned the benign nature of BECTS, as they have revealed neuropsychological deficits in many domains including language. The aim of this study was to investigate whether the epileptic discharges during the night have long-term effects on auditory processing, as reflected in electrophysiological measures during the day, which could underlie the language deficits. To address this question, we recorded baseline electroencephalograms (EEG), sleep EEG, and auditory event-related potentials in 12 children with BECTS and in age- and gender-matched controls. Of the children with BECTS, 5 had unilateral and 3 had bilateral spikes. In the 5 patients with unilateral spikes present during sleep, an asymmetry of the auditory event-related component (P85-120) was observed contralateral to the side of epileptiform activity, in contrast to the normal symmetrical vertex distribution noted in all controls and in the 3 children with bilateral spikes. In all patients, the peak-to-peak amplitude of this event-related potential component was significantly greater than in controls. Analysis of subtraction waveforms (deviant - standard) revealed no evidence of a mismatch negativity component in any of the children with BECTS. We propose that the abnormality of the P85-120 and the absence of mismatch negativity during wake recordings in this group may arise from the long-term effects of spikes occurring during sleep, resulting in disruption of the evolution and maintenance of echoic memory traces. These results may indicate that patients with BECTS have abnormal processing of auditory information at a sensory level ipsilateral to the hemisphere evoking spikes during sleep.

  13. Neural correlates of auditory recognition memory in the primate dorsal temporal pole

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany

    2013-01-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324

  15. Social interaction with a tutor modulates responsiveness of specific auditory neurons in juvenile zebra finches.

    Science.gov (United States)

    Yanagihara, Shin; Yazaki-Sugiyama, Yoko

    2018-04-12

    Behavioral states of animals, such as observing the behavior of a conspecific, modify signal perception and/or sensations that influence state-dependent higher cognitive behavior, such as learning. Recent studies have shown that neuronal responsiveness to sensory signals is modified when animals are engaged in social interactions with others or in locomotor activities. However, how these changes produce state-dependent differences in higher cognitive function is still largely unknown. Zebra finches, which have served as the premier songbird model, learn to sing from early auditory experiences with tutors. They can also learn from playback of recorded songs; however, learning is greatly improved when song models are provided through social communication with tutors (Eales, 1989; Chen et al., 2016). Recently, we found a subset of neurons in the higher-level auditory cortex of juvenile zebra finches that exhibit highly selective auditory responses to the tutor song after song learning, suggesting an auditory memory trace of the tutor song (Yanagihara and Yazaki-Sugiyama, 2016). Here we show that auditory responses of these selective neurons became greater when juveniles were paired with their tutors, while responses of non-selective neurons did not change. These results suggest that social interaction modulates cortical activity and might function in state-dependent song learning. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Higher dietary diversity is related to better visual and auditory sustained attention.

    Science.gov (United States)

    Shiraseb, Farideh; Siassi, Fereydoun; Qorbani, Mostafa; Sotoudeh, Gity; Rostami, Reza; Narmaki, Elham; Yavari, Parvaneh; Aghasi, Mohadeseh; Shaibu, Osman Mohammed

    2016-04-01

    Attention is a complex cognitive function that is necessary for learning, for following social norms of behaviour and for effective performance of responsibilities and duties. It is especially important in sensitive occupations requiring sustained attention. Improvement of dietary diversity (DD) is recognised as an important factor in health promotion, but its association with sustained attention is unknown. The aim of this study was to determine the association between auditory and visual sustained attention and DD. A cross-sectional study was carried out on 400 women aged 20-50 years who attended sports clubs at Tehran Municipality. Sustained attention was evaluated on the basis of the Integrated Visual and Auditory Continuous Performance Test using Integrated Visual and Auditory software. A single 24-h dietary recall questionnaire was used for DD assessment. Dietary diversity scores (DDS) were determined using the FAO guidelines. The mean visual and auditory sustained attention scores were 40·2 (sd 35·2) and 42·5 (sd 38), respectively. The mean DDS was 4·7 (sd 1·5). After adjusting for age, education years, physical activity, energy intake and BMI, mean visual and auditory sustained attention showed a significant increase as the quartiles of DDS increased (P=0·001). In addition, the mean subscales of attention, including auditory consistency and vigilance, visual persistence, visual and auditory focus, speed, comprehension and full attention, increased significantly with increasing DDS. In conclusion, higher dietary diversity was related to better visual and auditory sustained attention.
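    An FAO-style dietary diversity score is, at its core, a count of the distinct food groups represented in the single 24-h recall. A toy sketch (the group catalogue below is a simplified stand-in for illustration, not the FAO list used in the study):

```python
# Hypothetical, abbreviated food-group catalogue for illustration only
FOOD_GROUPS = {
    "cereals": {"bread", "rice", "pasta"},
    "vegetables": {"spinach", "carrot", "tomato"},
    "fruits": {"apple", "orange"},
    "meat_fish": {"chicken", "fish", "beef"},
    "eggs": {"egg"},
    "legumes_nuts": {"lentils", "beans", "walnut"},
    "dairy": {"milk", "yogurt", "cheese"},
    "oils_fats": {"olive oil", "butter"},
    "sweets": {"sugar", "honey"},
}

def dietary_diversity_score(recall_items):
    """Number of food groups with at least one item in the 24-h recall."""
    items = {item.lower() for item in recall_items}
    return sum(bool(group & items) for group in FOOD_GROUPS.values())

# bread/rice (cereals), milk (dairy), apple (fruits), chicken (meat_fish)
print(dietary_diversity_score(["bread", "milk", "apple", "chicken", "rice"]))  # 4
```

    With per-participant scores in hand, the quartile analysis reported above reduces to binning the DDS values and comparing adjusted mean attention scores across bins.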

  17. Auditory prediction during speaking and listening.

    Science.gov (United States)

    Sato, Marc; Shiller, Douglas M

    2018-02-02

    In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually-cued conditions, reduced N1/P2 amplitude was observed during speaking vs. listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not during active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng Rong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect located in the vicinity of the left auditory cortex. For the right auditory cortex, a task-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta band (12-20 Hz) DMS-specific enhanced functional interaction between the sources in left auditory cortex and those in left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.
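    Functional interaction between two reconstructed sources, as reported here for the beta band, is commonly quantified with magnitude-squared coherence. A sketch on synthetic time series that share a 12-20 Hz component (all signal parameters and the two source labels are hypothetical):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs, n = 250, 15000                      # 60 s at 250 Hz
shared = rng.normal(size=n)

# Restrict the shared drive to the beta band (12-20 Hz) with an FFT mask
f = np.fft.rfftfreq(n, 1 / fs)
spec = np.fft.rfft(shared)
spec[~((f >= 12) & (f <= 20))] = 0
beta = np.fft.irfft(spec, n)

x = beta + 0.5 * rng.normal(size=n)     # e.g. a left auditory cortex source
y = beta + 0.5 * rng.normal(size=n)     # e.g. a left inferior frontal source

freqs, coh = coherence(x, y, fs=fs, nperseg=512)
beta_coh = coh[(freqs >= 12) & (freqs <= 20)].mean()
outside_coh = coh[(freqs >= 30) & (freqs <= 40)].mean()
print(beta_coh > outside_coh)  # True: coupling is confined to the beta band
```

    In MEG practice the inputs would be beamformer source time courses rather than synthetic signals, and a task contrast (DMS vs. control) would be computed on the band-averaged coherence.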

  19. Music-induced positive mood broadens the scope of auditory attention.

    Science.gov (United States)

    Putkinen, Vesa; Makkonen, Tommi; Eerola, Tuomas

    2017-07-01

    Previous studies indicate that positive mood broadens the scope of visual attention, which can manifest as heightened distractibility. We used event-related potentials (ERP) to investigate whether music-induced positive mood has comparable effects on selective attention in the auditory domain. Subjects listened to experimenter-selected happy, neutral or sad instrumental music and afterwards participated in a dichotic listening task. Distractor sounds in the unattended channel elicited responses related to early sound encoding (N1/MMN) and bottom-up attention capture (P3a) while target sounds in the attended channel elicited a response related to top-down-controlled processing of task-relevant stimuli (P3b). For the subjects in a happy mood, the N1/MMN responses to the distractor sounds were enlarged while the P3b elicited by the target sounds was diminished. Behaviorally, these subjects tended to show heightened error rates on target trials following the distractor sounds. Thus, the ERP and behavioral results indicate that the subjects in a happy mood allocated their attentional resources more diffusely across the attended and the to-be-ignored channels. Therefore, the current study extends previous research on the effects of mood on visual attention and indicates that even unfamiliar instrumental music can broaden the scope of auditory attention via its effects on mood. © The Author (2017). Published by Oxford University Press.

  20. Speech-evoked auditory brainstem responses in children with hearing loss.

    Science.gov (United States)

    Koravand, Amineh; Al Osman, Rida; Rivest, Véronique; Poulin, Catherine

    2017-08-01

    The main objective of the present study was to investigate subcortical auditory processing in children with sensorineural hearing loss. Auditory brainstem responses (ABRs) were recorded using click and speech /da/ stimuli. Twenty-five children, aged 6-14 years, participated in the study: 13 with normal hearing acuity and 12 with sensorineural hearing loss. No significant differences were observed for the click-evoked ABRs between the normal hearing and hearing-impaired groups. For the speech-evoked ABRs, no significant differences were found between the two groups for the latencies of the following responses: onset (V and A), transition (C), one of the steady-state waves (F), and offset (O). However, the latency of the steady-state waves (D and E) was significantly longer for the hearing-impaired than for the normal hearing group. Furthermore, the amplitude of the offset wave O and of the envelope frequency response (EFR) of the speech-evoked ABRs was significantly larger for the hearing-impaired than for the normal hearing group. Results obtained from the speech-evoked ABRs suggest that children with a mild to moderately-severe sensorineural hearing loss have a specific pattern of subcortical auditory processing. Our results show differences in the speech-evoked ABRs of normal hearing compared to hearing-impaired children. These results add to the body of literature on how children with hearing loss process speech at the brainstem level. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Effects of Auditory Stimuli on Visual Velocity Perception

    Directory of Open Access Journals (Sweden)

    Michiaki Shibata

    2011-10-01

    Full Text Available We investigated the effects of auditory stimuli on the perceived velocity of a moving visual stimulus. Previous studies have reported that the duration of visual events is perceived as being longer for events filled with auditory stimuli than for events not filled with auditory stimuli, i.e., the so-called “filled-duration illusion.” In this study, we have shown that auditory stimuli also affect the perceived velocity of a moving visual stimulus. In Experiment 1, a moving comparison stimulus (4.2-5.8 deg/s) was presented together with filled (or unfilled) white-noise bursts or with no sound. The standard stimulus was a moving visual stimulus (5 deg/s) presented before or after the comparison stimulus. The participants had to judge which stimulus was moving faster. The results showed that the perceived velocity in the auditory-filled condition was lower than that in the auditory-unfilled and no-sound conditions. In Experiment 2, we investigated the effects of auditory stimuli on velocity adaptation. The results showed that the effects of velocity adaptation in the auditory-filled condition were weaker than those in the no-sound condition. These results indicate that auditory stimuli tend to decrease the perceived velocity of a moving visual stimulus.

  2. Development of Brainstem-Evoked Responses in Congenital Auditory Deprivation

    Directory of Open Access Journals (Sweden)

    J. Tillein

    2012-01-01

    Full Text Available To compare the development of the auditory system in hearing and completely acoustically deprived animals, naive congenitally deaf white cats (CDCs) and hearing controls (HCs) were investigated at different developmental stages from birth to adulthood. The CDCs had no hearing experience before the acute experiment. In both groups of animals, responses to cochlear implant stimulation were acutely assessed. Electrically evoked auditory brainstem responses (E-ABRs) were recorded with monopolar stimulation at different current levels. CDCs demonstrated extensive development of E-ABRs, from the first signs of responses at postnatal (p.n.) day 3, through the appearance of all waves of the brainstem response at day 8 p.n., to mature responses around day 90 p.n. Wave I of the E-ABRs could not be distinguished from the artifact in the majority of CDCs, whereas in HCs it was clearly separated from the stimulus artifact. Waves II, III, and IV demonstrated higher thresholds in CDCs, whereas this difference was not found for wave V. Amplitudes of wave III were significantly higher in HCs, whereas wave V amplitudes were significantly higher in CDCs. No differences in latencies were observed between the animal groups. These data demonstrate significant postnatal subcortical development in the absence of hearing, as well as divergent effects of deafness on early waves II-IV and wave V of the E-ABR.

  3. Non-Monotonic Relation Between Noise Exposure Severity and Neuronal Hyperactivity in the Auditory Midbrain

    Directory of Open Access Journals (Sweden)

    Lara Li Hesse

    2016-08-01

    Full Text Available The occurrence of tinnitus can be linked to hearing loss in the majority of cases, but there is nevertheless a large degree of unexplained heterogeneity in the relation between hearing loss and tinnitus. Part of the problem might be that hearing loss is usually quantified in terms of increased hearing thresholds, which only provides limited information about the underlying cochlear damage. Moreover, noise exposure that does not cause hearing threshold loss can still lead to hidden hearing loss (HHL), i.e. functional deafferentation of auditory nerve fibres (ANFs) through loss of synaptic ribbons in inner hair cells. Whilst it is known that increased hearing thresholds can trigger increases in spontaneous neural activity in the central auditory system, i.e. a putative neural correlate of tinnitus, the central effects of HHL have not yet been investigated. Here, we exposed mice to octave-band noise at 100 and 105 dB SPL, to generate HHL and permanent increases of hearing thresholds, respectively. Deafferentation of ANFs was confirmed through measurement of auditory brainstem responses and cochlear immunohistochemistry. Acute extracellular recordings from the auditory midbrain (inferior colliculus) demonstrated increases in spontaneous neuronal activity (a putative neural correlate of tinnitus) in both groups. Surprisingly, the increase in spontaneous activity was most pronounced in the mice with HHL, suggesting that the relation between hearing loss and neuronal hyperactivity might be more complex than currently understood. Our computational model indicated that these differences in neuronal hyperactivity could arise from different degrees of deafferentation of low-threshold ANFs in the two exposure groups. Our results demonstrate that HHL is sufficient to induce changes in central auditory processing, and they also indicate a non-monotonic relationship between cochlear damage and neuronal hyperactivity, suggesting an explanation for why tinnitus might arise even in the absence of an overt increase in hearing thresholds.

  4. Neural Correlates of Automatic and Controlled Auditory Processing in Schizophrenia

    Science.gov (United States)

    Morey, Rajendra A.; Mitchell, Teresa V.; Inan, Seniha; Lieberman, Jeffrey A.; Belger, Aysenil

    2009-01-01

    Individuals with schizophrenia demonstrate impairments in selective attention and sensory processing. The authors assessed differences in brain function between 26 participants with schizophrenia and 17 comparison subjects engaged in automatic (unattended) and controlled (attended) auditory information processing using event-related functional MRI. Lower regional neural activation during automatic auditory processing in the schizophrenia group was not confined to just the temporal lobe, but also extended to prefrontal regions. Controlled auditory processing was associated with a distributed frontotemporal and subcortical dysfunction. Differences in activation between these two modes of auditory information processing were more pronounced in the comparison group than in the patient group. PMID:19196926

  5. Automatic detection of lexical change: an auditory event-related potential study.

    Science.gov (United States)

    Muller-Gass, Alexandra; Roye, Anja; Kirmse, Ursula; Saupe, Katja; Jacobsen, Thomas; Schröger, Erich

    2007-10-29

    We investigated the detection of rare task-irrelevant changes in the lexical status of speech stimuli. Participants performed a nonlinguistic task on word and pseudoword stimuli that occurred, in separate conditions, rarely or frequently. Task performance was poorer for pseudowords than for words, suggesting unintentional lexical analysis. Furthermore, rare word and pseudoword changes had a similar effect on the event-related potentials, starting as early as 165 ms. This is the first demonstration of the automatic detection of a change in lexical status that is not based on a co-occurring acoustic change. We propose that, following lexical analysis of the incoming stimuli, a mental representation of the lexical regularity is formed and used as a template against which lexical change can be detected.

  6. Neural effects of cognitive control load on auditory selective attention.

    Science.gov (United States)

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R; Mangalathu, Jain; Desai, Anjali

    2014-08-01

    Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130–210 ms, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Feature conjunctions and auditory sensory memory.

    Science.gov (United States)

    Sussman, E; Gomes, H; Nousak, J M; Ritter, W; Vaughan, H G

    1998-05-18

    This study sought to obtain additional evidence that transient auditory memory stores information about conjunctions of features on an automatic basis. The mismatch negativity of event-related potentials was employed because its operations are based on information that is stored in transient auditory memory. The mismatch negativity was found to be elicited by a tone that differed from standard tones in a combination of its perceived location and frequency. The result lends further support to the hypothesis that the system upon which the mismatch negativity relies processes stimuli in a holistic manner. Copyright 1998 Elsevier Science B.V.

  8. Auditory evoked responses to binaural beat illusion: stimulus generation and the derivation of the Binaural Interaction Component (BIC).

    Science.gov (United States)

    Ozdamar, Ozcan; Bohorquez, Jorge; Mihajloski, Todor; Yavuz, Erdem; Lachowska, Magdalena

    2011-01-01

    Electrophysiological indices of the auditory binaural beat illusion are studied using late-latency evoked responses. Binaural beats are generated by continuous monaural FM tones with slightly different ascending and descending frequencies, lasting about 25 ms and presented at 1-s intervals. Frequency changes are carefully adjusted to avoid creating any abrupt waveform changes. Binaural Interaction Component (BIC) analysis is used to separate the neural responses attributable to binaural involvement. The results show that transient auditory evoked responses can be obtained from the auditory illusion of binaural beats.
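    The derivation step can be illustrated with a minimal sketch (a generic form of the BIC computation, not the authors' code; sign conventions vary between labs). The BIC is commonly taken as the binaural response minus the sum of the two monaural responses, computed sample by sample on the averaged waveforms:

```python
# Illustrative sketch (not the authors' code): deriving the Binaural
# Interaction Component (BIC) from three averaged evoked-response waveforms.
# Convention assumed here: BIC = binaural - (left monaural + right monaural).

def derive_bic(left, right, binaural):
    """Sample-wise BIC from three equal-length averaged waveforms (in µV)."""
    if not (len(left) == len(right) == len(binaural)):
        raise ValueError("waveforms must share one time base")
    return [b - (l + r) for l, r, b in zip(left, right, binaural)]

# Toy example: a residual appears only where binaural != left + right.
left = [0.0, 1.0, 2.0, 1.0]
right = [0.0, 1.0, 2.0, 1.0]
binaural = [0.0, 2.0, 3.0, 2.0]
print(derive_bic(left, right, binaural))  # [0.0, 0.0, -1.0, 0.0]
```

    A non-zero BIC at some latency indicates neural activity that cannot be explained by the superposition of the two monaural responses, i.e. genuine binaural interaction.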

  9. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    Science.gov (United States)

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  10. Long-term evolution of brainstem electrical evoked responses to sound after restricted ablation of the auditory cortex.

    Directory of Open Access Journals (Sweden)

    Verónica Lamas

    INTRODUCTION: This study aimed to assess the top-down control of sound processing in the auditory brainstem of rats. Short-latency evoked responses were analyzed after unilateral or bilateral ablation of the auditory cortex. This experimental paradigm was also used to analyze the long-term evolution of post-lesion plasticity in the auditory system and its ability to self-repair. METHOD: Auditory cortex lesions were performed in rats by stereotactically guided fine-needle aspiration of the cerebrocortical surface. Auditory brainstem responses (ABRs) were recorded at post-surgery days (PSD) 1, 7, 15 and 30. Recordings were performed under closed-field conditions, using click trains at different sound intensity levels, followed by statistical analysis of threshold values and ABR amplitude and latency variables. Subsequently, brains were sectioned and immunostained for GAD and parvalbumin to assess the location and extent of the lesions accurately. RESULTS: Alterations in ABR variables depended on the type of lesion and the post-surgery time of the ABR recordings. Bilateral ablations caused a statistically significant increase in thresholds at PSD 1 and 7 and a decrease in wave amplitudes at PSD 1 that recovered by PSD 7. No effects on latency were noted at PSD 1 and 7, whilst recordings at PSD 15 and 30 showed statistically significant decreases in latency. Conversely, unilateral ablations had no effect on auditory thresholds or latencies, while wave amplitudes decreased only at PSD 1, and strictly in the ipsilateral ear. CONCLUSION: Post-lesion plasticity in the auditory system acts over two time periods: a short-term period of decreased sound sensitivity (until PSD 7), most likely resulting from axonal degeneration, and a long-term period (from PSD 7 onwards) with changes in latency responses and recovery of threshold and amplitude values. The cerebral cortex may have a net positive gain on the auditory pathway's response to sound.

  11. Human event-related brain potentials to auditory periodic noise stimuli.

    Science.gov (United States)

    Kaernbach, C; Schröger, E; Gunter, T C

    1998-02-06

    Periodic noise is perceived as different from ordinary non-repeating noise due to the involvement of echoic memory. Since this stimulus does not contain simple physical cues (such as onsets or spectral shape) that might obscure sensory memory interpretations, it is a valuable tool to study sensory memory functions. We demonstrated for the first time that the processing of periodic noise can be tapped by event-related brain potentials (ERPs). Human subjects received repeating segments of noise embedded in non-repeating noise. They were instructed to detect the periodicity inherent to the stimulation. We observed a central negativity, time-locked to the periodic segment, that correlated with the subjects' behavioral performance in periodicity detection. It is argued that the ERP result indicates an enhancement of sensory-specific processing.

  12. Neural network approach in multichannel auditory event-related potential analysis.

    Science.gov (United States)

    Wu, F Y; Slater, J D; Ramsay, R E

    1994-04-01

    Even though there are presently no clearly defined criteria for the assessment of P300 event-related potential (ERP) abnormality, statistical analysis strongly indicates that such criteria exist for classifying control subjects and patients with diseases resulting in neuropsychological impairment, such as multiple sclerosis (MS). We have demonstrated the feasibility of artificial neural network (ANN) methods in classifying ERP waveforms measured at a single channel (Cz) from control subjects and MS patients. In this paper, we report the results of multichannel ERP analysis and a modified network analysis methodology to enhance automation of the classification rule extraction process. The proposed methodology significantly reduces the work of statistical analysis. It also helps to standardize the criteria of P300 ERP assessment and facilitates computer-aided analysis of neuropsychological functions.
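    As a generic illustration of the classification idea (not the authors' network, features, or data), a single-layer perceptron, the simplest artificial neural network, can be trained on per-subject ERP features such as P300 amplitude and latency. All feature values below are hypothetical:

```python
# Illustrative sketch, not the authors' method: a single-layer perceptron
# trained on hypothetical P300 features to separate patients from controls.

def train_perceptron(X, y, lr=0.1, max_epochs=5000):
    """Perceptron learning rule; converges on linearly separable data."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            if pred != yi:
                mistakes += 1
                w = [wj + lr * (yi - pred) * xj for wj, xj in zip(w, xi)]
                b += lr * (yi - pred)
        if mistakes == 0:  # stop once the training set is classified perfectly
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Hypothetical features per subject: [P300 amplitude (µV), P300 latency (s)]
# at Cz; label 1 = patient (smaller, later P300), 0 = control.
X = [[12.0, 0.30], [11.0, 0.32], [5.0, 0.42], [4.0, 0.45]]
y = [0, 0, 1, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # [0, 0, 1, 1] once training converges
```

    Multichannel analysis would simply extend the feature vector with amplitude/latency measurements from additional electrodes; the learned weights then hint at which channels drive the classification rule.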

  13. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  14. Examining age-related differences in auditory attention control using a task-switching procedure.

    Science.gov (United States)

    Lawo, Vera; Koch, Iring

    2014-03-01

    Using a novel task-switching variant of dichotic selective listening, we examined age-related differences in the ability to intentionally switch auditory attention between 2 speakers defined by their sex. In our task, young (M age = 23.2 years) and older adults (M age = 66.6 years) performed a numerical size categorization on spoken number words. The task-relevant speaker was indicated by a cue prior to auditory stimulus onset. The cuing interval was either short or long and varied randomly trial by trial. We found clear performance costs with instructed attention switches. These auditory attention switch costs decreased with prolonged cue-stimulus interval. Older adults were generally much slower (but not more error prone) than young adults, but switching-related effects did not differ across age groups. These data suggest that the ability to intentionally switch auditory attention in a selective listening task is not compromised in healthy aging. We discuss the role of modality-specific factors in age-related differences.

  15. Effect of task-related continuous auditory feedback during learning of tracking motion exercises

    Directory of Open Access Journals (Sweden)

    Rosati Giulio

    2012-10-01

    Background: This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is largely underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could, in turn, improve performance and motor learning. Methods: We carried out three experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can improve performance to a greater extent than error-related audio feedback or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information onto a different sensory channel (the visual channel) yielded effects comparable to those obtained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Results: Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel
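    As a concrete illustration of what continuous auditory feedback can mean in practice, here is a minimal, hypothetical mapping (not the paper's implementation) from a tracked kinematic variable to pitch:

```python
# Hypothetical sketch, not the paper's implementation: continuous auditory
# feedback realised as a linear mapping from a kinematic variable (e.g. the
# target's position, or the tracking error) onto pitch, clipped to a range.

def to_pitch(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Map value in [vmin, vmax] linearly onto a frequency in Hz."""
    t = (value - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)  # clip out-of-range values to the endpoints
    return fmin + t * (fmax - fmin)

# Mid-range input lands mid-range in frequency; out-of-range input is clipped.
print(to_pitch(0.5, 0.0, 1.0))  # 550.0
print(to_pitch(2.0, 0.0, 1.0))  # 880.0
```

    Task-related feedback would feed the target's state into such a mapping, while error-related feedback would feed in the instantaneous tracking error instead; the paper's comparison hinges on exactly this choice of input variable.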

  16. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals, and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and in cortical myelin, as previously documented in the aged group. Most of these changes reversed after the rats were returned to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  17. Detection of stimulus deviance within primate primary auditory cortex: intracortical mechanisms of mismatch negativity (MMN) generation.

    Science.gov (United States)

    Javitt, D C; Steinschneider, M; Schroeder, C E; Vaughan, H G; Arezzo, J C

    1994-12-26

    Mismatch negativity (MMN) is a cognitive, auditory event-related potential (AEP) that reflects preattentive detection of stimulus deviance and indexes the operation of the auditory sensory ('echoic') memory system. MMN is elicited most commonly in an auditory oddball paradigm in which a sequence of repetitive standard stimuli is interrupted infrequently and unexpectedly by a physically deviant 'oddball' stimulus. Electro- and magnetoencephalographic dipole mapping studies have localized the generators of MMN to supratemporal auditory cortex in the vicinity of Heschl's gyrus, but have not determined the degree to which MMN reflects activation within primary auditory cortex (AI) itself. The present study, using movable multichannel electrodes inserted acutely into the superior temporal plane, demonstrates a significant contribution of AI to scalp-recorded MMN in the monkey, as reflected by greater response of AI to loud or soft clicks presented as deviants than to the same stimuli presented as repetitive standards. The MMN-like activity was localized primarily to supragranular laminae within AI. Thus, standard and deviant stimuli elicited similar degrees of initial, thalamocortical excitation. In contrast, responses within supragranular cortex were significantly larger to deviant stimuli than to standards. No MMN-like activity was detected in a limited number of passes that penetrated anterior and medial to AI. AI plays a well-established role in the decoding of the acoustic properties of individual stimuli. The present study demonstrates that primary auditory cortex also plays an important role in processing the relationships between stimuli, and thus participates in cognitive, as well as purely sensory, processing of auditory information.
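    At the waveform level, MMN-like activity is conventionally quantified with a difference wave: average the epochs for each stimulus type, then subtract the standard average from the deviant average. A minimal sketch of this standard derivation (toy values, not the study's data):

```python
# Sketch of the standard MMN difference-wave derivation (generic method,
# assumed here; not the study's intracortical analysis): average epochs by
# stimulus type, then subtract standard from deviant. The negative-going
# deflection in the difference wave is the MMN.

def average(epochs):
    """Point-wise average of a list of equal-length epochs."""
    n = len(epochs)
    return [sum(vals) / n for vals in zip(*epochs)]

def mmn_difference_wave(standard_epochs, deviant_epochs):
    std = average(standard_epochs)
    dev = average(deviant_epochs)
    return [d - s for d, s in zip(dev, std)]

# Toy epochs (µV samples): deviants carry an extra negativity mid-epoch.
standards = [[0, 1, 2, 1, 0], [0, 1, 2, 1, 0]]
deviants = [[0, 1, 0, 1, 0], [0, 1, 0, 1, 0]]
print(mmn_difference_wave(standards, deviants))  # [0.0, 0.0, -2.0, 0.0, 0.0]
```

    Because identical physical stimuli are compared in their roles as standard versus deviant (as in the loud/soft click design above), any residual in the difference wave reflects stimulus context rather than acoustics.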

  18. Modification of sudden onset auditory ERP by involuntary attention to visual stimuli.

    Science.gov (United States)

    Oray, Serkan; Lu, Zhong-Lin; Dawson, Michael E

    2002-03-01

    To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by sudden onset of events in the auditory modality. Relatively loud auditory white noise bursts were presented to subjects at random and long inter-trial intervals. The noise bursts were either presented alone, or paired with a visual stimulus with a visual-to-auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions, auditory alone, visual alone, and paired visual/auditory, were randomly inter-mixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout every experimental session. Compared to ERPs in the auditory-alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onset can thus be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.

  19. Crossmodal effects of Guqin and piano music on selective attention: an event-related potential study.

    Science.gov (United States)

    Zhu, Weina; Zhang, Junjun; Ding, Xiaojun; Zhou, Changle; Ma, Yuanye; Xu, Dan

    2009-11-27

    To compare the effects of music from different cultural environments (Guqin: Chinese music; piano: Western music) on crossmodal selective attention, behavioral and event-related potential (ERP) data in a standard two-stimulus visual oddball task were recorded from Chinese subjects in three conditions: silence, Guqin music, or piano music background. Visual task data were then compared with auditory task data collected previously. In contrast with the results of the auditory task, the early (N1) and late (P300) stages exhibited no differences between Guqin and piano backgrounds during the visual task. Taking our previous study and this study together, we conclude that although culturally familiar music influenced selective attention in both the early and late stages, these effects appeared only within a sensory modality (auditory) and not across sensory modalities (visual). Thus, the musical-cultural factor is more evident in intramodal than in crossmodal selective attention.

  20. Age-related hearing loss: Aquaporin 4 gene expression changes in the mouse cochlea and auditory midbrain

    Science.gov (United States)

    Christensen, Nathan; D'Souza, Mary; Zhu, Xiaoxia; Frisina, Robert D.

    2009-01-01

    Presbycusis, or age-related hearing loss, is the number one communication disorder and one of the top three chronic medical conditions of the aged population. Aquaporins, particularly aquaporin 4 (Aqp4), are membrane proteins with important roles in water and ion flux across cell membranes, including the cells of the inner ear and the brain pathways used for hearing. To more fully understand the biological bases of presbycusis, 39 CBA mice, a well-studied animal model of presbycusis, underwent non-invasive hearing testing as a function of sound frequency (auditory brainstem response (ABR) thresholds and distortion-product otoacoustic emission (DPOAE) magnitudes) and were clustered into four groups based on age and hearing ability. Aqp4 gene expression, as determined by GeneChip microarray analysis and quantitative real-time PCR, was compared to the young adult control group in the three older groups: middle-aged with good hearing, old with mild presbycusis, and old with severe presbycusis. Linear regression and ANOVA showed statistically significant changes in Aqp4 gene expression with ABR and DPOAE hearing status in the cochlea and the auditory midbrain (inferior colliculus). Down-regulation was seen in the cochlea, and an initial down-, then up-regulation was discovered for inferior colliculus Aqp4 expression. It is theorized that these changes in Aqp4 gene expression represent an age-related disruption of ion flux in the cochlear fluids that maintain the ionic gradients underlying sound transduction in cochlear hair cells, which is necessary for hearing. With regard to central auditory processing at the level of the auditory midbrain, aquaporin gene expression changes may affect neurotransmitter cycling involving supporting cells, thus impairing complex sound neural processing with age. PMID:19070604
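    The quantitative real-time PCR comparisons mentioned above are typically expressed as fold changes via the standard 2^-ΔΔCt method, in which each group's target-gene Ct is first normalized to a reference gene. A hedged sketch with hypothetical Ct values (not the study's data):

```python
# Sketch of the standard 2^-ΔΔCt relative-expression calculation commonly
# used with quantitative real-time PCR. All Ct values below are hypothetical,
# not taken from the study.

def fold_change(ct_gene_sample, ct_ref_sample, ct_gene_control, ct_ref_control):
    """Relative expression of a gene in a sample group vs a control group."""
    delta_sample = ct_gene_sample - ct_ref_sample      # normalize to reference gene
    delta_control = ct_gene_control - ct_ref_control
    return 2.0 ** -(delta_sample - delta_control)       # 2^-(ddCt)

# Aqp4 Ct one cycle higher (relative to the reference gene) in aged cochlea
# than in young controls -> roughly two-fold down-regulation.
print(fold_change(26.0, 18.0, 25.0, 18.0))  # 0.5
```

    A fold change below 1 corresponds to down-regulation (as reported for the cochlea), above 1 to up-regulation.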

  1. Newborn hearing screening with transient evoked otoacoustic emissions and automatic auditory brainstem response

    OpenAIRE

    Renata Mota Mamede de Carvallo; Carla Gentile Matas; Isabela de Souza Jardim

    2008-01-01

    Objective: The aim of the present investigation was to evaluate Transient Evoked Otoacoustic Emissions and Automatic Auditory Brainstem Response tests applied together in regular nurseries and Newborn Intensive Care Units (NICU), and to describe and compare the results obtained in both groups. Methods: We tested 150 newborns from regular nurseries and 70 from the NICU. Results: The newborn hearing screening results using Transient Evoked Otoacoustic Emissions and Automatic Auditory Brainstem...

  2. Visual form predictions facilitate auditory processing at the N1.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on prediction rather than on multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.

  3. Brain stem auditory evoked responses in chronic alcoholics.

    OpenAIRE

    Chan, Y W; McLeod, J G; Tuck, R R; Feary, P A

    1985-01-01

    Brain stem auditory evoked responses (BAERs) were performed on 25 alcoholic patients with Wernicke-Korsakoff syndrome, 56 alcoholic patients without Wernicke-Korsakoff syndrome, 24 of whom had cerebellar ataxia, and 37 control subjects. Abnormal BAERs were found in 48% of patients with Wernicke-Korsakoff syndrome, in 25% of alcoholic patients without Wernicke-Korsakoff syndrome but with cerebellar ataxia, and in 13% of alcoholic patients without Wernicke-Korsakoff syndrome or ataxia. The mean...

  4. Comparison of Auditory Brainstem Response in Noise Induced Tinnitus and Non-Tinnitus Control Subjects

    Directory of Open Access Journals (Sweden)

    Ghassem Mohammadkhani

    2009-12-01

    Background and Aim: Tinnitus is an unpleasant sound that can cause behavioral disorders. According to the evidence, the origin of tinnitus lies not only in the peripheral but also in the central auditory system, so evaluation of central auditory system function is necessary. In this study, auditory brainstem responses (ABRs) were compared between noise-induced tinnitus and non-tinnitus control subjects. Materials and Methods: This cross-sectional, descriptive and analytic study was conducted on 60 cases in two groups: 30 subjects with noise-induced tinnitus and 30 non-tinnitus controls. ABRs were recorded ipsilaterally and contralaterally, and their latencies and amplitudes were analyzed. Results: Mean interpeak latencies of III-V (p = 0.022) and I-V (p = 0.033) in the ipsilateral electrode array, and mean absolute latencies of waves IV (p = 0.015) and V (p = 0.048) in the contralateral electrode array, were significantly increased in the noise-induced tinnitus group relative to the control group. Conclusion: It can be concluded that there is an increase in neural transmission time in the brainstem, and there are signs of involvement of the medial nuclei of the olivary complex in addition to the lateral lemniscus.

  5. Functional magnetic resonance imaging measure of automatic and controlled auditory processing

    OpenAIRE

    Mitchell, Teresa V.; Morey, Rajendra A.; Inan, Seniha; Belger, Aysenil

    2005-01-01

    Activity within fronto-striato-temporal regions during processing of unattended auditory deviant tones and an auditory target detection task was investigated using event-related functional magnetic resonance imaging. Activation within the middle frontal gyrus, inferior frontal gyrus, anterior cingulate gyrus, superior temporal gyrus, thalamus, and basal ganglia were analyzed for differences in activity patterns between the two stimulus conditions. Unattended deviant tones elicited robust acti...

  6. Increased reaction time variability in attention-deficit hyperactivity disorder as a response-related phenomenon: evidence from single-trial event-related potentials.

    Science.gov (United States)

    Saville, Christopher W N; Feige, Bernd; Kluckert, Christian; Bender, Stephan; Biscaldi, Monica; Berger, Andrea; Fleischhaker, Christian; Henighausen, Klaus; Klein, Christoph

    2015-07-01

    Increased intra-subject variability (ISV) in reaction times (RTs) is a promising endophenotype for attention-deficit hyperactivity disorder (ADHD) and among the most robust hallmarks of the disorder. ISV has been assumed to represent an attentional deficit, either reflecting lapses in attention or increased neural noise. Here, we use an innovative single-trial event-related potential approach to assess whether the increased ISV associated with ADHD is indeed attributable to attention, or whether it is related to response-related processing. We measured electroencephalographic responses to working memory oddball tasks in patients with ADHD (N = 20, aged 11.3 ± 1.1) and healthy controls (N = 25, aged 11.7 ± 1.1), and analysed these data with a recently developed method of single-trial event-related potential analysis. Estimates of component latency variability were computed for the stimulus-locked and response-locked forms of the P3b and the lateralised readiness potential (LRP). ADHD patients showed significantly increased behavioural ISV. This increased ISV was paralleled by an increase in variability in response-locked event-related potential latencies, while variability in stimulus-locked latencies was equivalent between groups. This result held across the P3b and LRP. Latency of all components predicted RTs on a single-trial basis, confirming that all were relevant for speed of processing. These data suggest that the increased ISV found in ADHD could be associated with response-end, rather than stimulus-end processes, in contrast to prevailing conceptions about the endophenotype. This mental chronometric approach may also be useful for exploring whether the existing lack of specificity of ISV to particular psychiatric conditions can be improved upon. © 2014 Association for Child and Adolescent Mental Health.
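    The core logic of the stimulus-locked versus response-locked comparison can be sketched in a few lines (a toy illustration, not the authors' estimation method, which must first recover per-trial component latencies from noisy single-trial EEG):

```python
# Toy sketch of the locked-variability comparison (not the authors' pipeline):
# given per-trial component latencies measured from stimulus onset, the
# response-locked latency is obtained by subtracting the trial's RT. Comparing
# the two standard deviations shows which event the component is tied to.
import statistics

def locked_variability(stim_locked_latencies, rts):
    """Return (stimulus-locked SD, response-locked SD) of latencies, in ms."""
    resp_locked = [lat - rt for lat, rt in zip(stim_locked_latencies, rts)]
    return (statistics.pstdev(stim_locked_latencies),
            statistics.pstdev(resp_locked))

# Toy trials: the component occurs a fixed 50 ms before each response, so
# response-locked variability collapses to zero while the stimulus-locked
# variability simply tracks the spread of the RTs.
rts = [400.0, 450.0, 500.0, 550.0]
latencies = [rt - 50.0 for rt in rts]
stim_sd, resp_sd = locked_variability(latencies, rts)
print(stim_sd, resp_sd)
```

    In the study's logic, a group difference appearing only in the response-locked SDs points to response-end rather than stimulus-end processes.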

  7. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention.

    Science.gov (United States)

    Forte, Antonio Elia; Etard, Octave; Reichenbach, Tobias

    2017-10-10

    Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.

  8. The Effects of Eye-Closure and “Ear-Closure” on Recall of Visual and Auditory Aspects of a Criminal Event

    Directory of Open Access Journals (Sweden)

    Annelies Vredeveldt

    2012-05-01

    Full Text Available Previous research has shown that closing the eyes can facilitate recall of semantic and episodic information. Here, two experiments are presented which investigate the theoretical underpinnings of the eye-closure effect and its auditory equivalent, the “ear-closure” effect. In Experiment 1, participants viewed a violent videotaped event and were subsequently interviewed about the event with eyes open or eyes closed. Eye-closure was found to have modality-general benefits on coarse-grain correct responses, but modality-specific effects on fine-grain correct recall and incorrect recall (increasing the former and decreasing the latter). In Experiment 2, participants viewed the same event and were subsequently interviewed about it, either in quiet conditions or while hearing irrelevant speech. Contrary to expectations, irrelevant speech did not significantly impair recall performance. This null finding might be explained by the absence of social interaction during the interview in Experiment 2. In conclusion, eye-closure seems to involve both general and modality-specific processes. The practical implications of the findings are discussed.

  9. Common coding of auditory and visual spatial information in working memory.

    Science.gov (United States)

    Lehnert, Günther; Zimmer, Hubert D

    2008-09-16

    We compared spatial short-term memory for visual and auditory stimuli in an event-related slow potentials study. Subjects encoded object locations of either four or six sequentially presented auditory or visual stimuli and maintained them during a retention period of 6 s. Slow potentials recorded during encoding were modulated by the modality of the stimuli. Stimulus related activity was stronger for auditory items at frontal and for visual items at posterior sites. At frontal electrodes, negative potentials incrementally increased with the sequential presentation of visual items, whereas a strong transient component occurred during encoding of each auditory item without the cumulative increment. During maintenance, frontal slow potentials were affected by modality and memory load according to task difficulty. In contrast, at posterior recording sites, slow potential activity was only modulated by memory load independent of modality. We interpret the frontal effects as correlates of different encoding strategies and the posterior effects as a correlate of common coding of visual and auditory object locations.

  10. Top-down modulation of the auditory steady-state response in a task-switch paradigm

    Directory of Open Access Journals (Sweden)

    Nadia Müller

    2009-02-01

    Full Text Available Auditory selective attention is an important mechanism for top-down selection of the vast amount of auditory information our perceptual system is exposed to. In the present study, the impact of attention on auditory steady-state responses - previously shown to be generated in primary auditory regions - was investigated. This issue is still a matter of debate and recent findings point to a complex pattern of attentional effects on the aSSR. The present study aimed at shedding light on the involvement of ipsilateral and contralateral activations to the attended sound, taking into account hemispheric differences and a possible dependency on modulation frequency. To this end, a dichotic listening experiment was designed using amplitude-modulated tones presented to the left and right ear simultaneously. Participants had to detect target tones in a cued ear while their brain activity was assessed using MEG. Attention was found to modulate the aSSR, but only in the left hemisphere and only for 20 Hz responses: contralateral activations were enhanced while ipsilateral activations were reduced. Thus, our findings support and extend recent findings, showing that auditory attention can influence the aSSR, but only under specific circumstances and in a complex pattern regarding the different effects for ipsilateral and contralateral activations.
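    The auditory steady-state response analysed in the record above is conventionally quantified by the amplitude and phase of the recorded signal at the stimulus modulation frequency (e.g., 20 Hz). A minimal sketch of that extraction via the FFT, on simulated data; the function name and parameters are illustrative, not taken from the study:

```python
import numpy as np

def assr_amplitude(epoch, fs, mod_freq):
    """Amplitude and phase of the steady-state response at the modulation frequency.

    epoch: 1-D averaged MEG/EEG epoch (steady-state portion only); fs in Hz.
    The epoch should span an integer number of modulation cycles so that
    mod_freq falls exactly on an FFT bin.
    """
    spectrum = np.fft.rfft(epoch) / epoch.size
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    k = np.argmin(np.abs(freqs - mod_freq))
    return 2 * np.abs(spectrum[k]), np.angle(spectrum[k])

# Demo: a 20 Hz steady-state response of amplitude 0.5 buried in noise.
rng = np.random.default_rng(2)
fs = 600
t = np.arange(0, 2, 1 / fs)                 # 2 s = 40 full cycles at 20 Hz
epoch = 0.5 * np.cos(2 * np.pi * 20 * t) + 0.2 * rng.standard_normal(t.size)
amp, phase = assr_amplitude(epoch, fs, 20.0)
print(round(amp, 2))   # ≈ 0.5
```

    Attention effects like those reported in the record would then appear as condition differences in this amplitude, computed per hemisphere and per modulation frequency.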

  11. Independent component analysis for cochlear implant artifacts attenuation from electrically evoked auditory steady-state response measurements

    Science.gov (United States)

    Deprez, Hanne; Gransier, Robin; Hofmann, Michael; van Wieringen, Astrid; Wouters, Jan; Moonen, Marc

    2018-02-01

    Objective. Electrically evoked auditory steady-state responses (EASSRs) are potentially useful for objective cochlear implant (CI) fitting and follow-up of auditory maturation in infants and children with a CI. EASSRs are recorded in the electroencephalogram (EEG) in response to electrical stimulation with continuous pulse trains, and are distorted by significant CI artifacts related to this electrical stimulation. The aim of this study is to evaluate a CI artifacts attenuation method based on independent component analysis (ICA) for three EASSR datasets. Approach. ICA has often been used to remove CI artifacts from the EEG to record transient auditory responses, such as cortical evoked auditory potentials. Independent components (ICs) corresponding to CI artifacts are then often manually identified. In this study, an ICA based CI artifacts attenuation method was developed and evaluated for EASSR measurements with varying CI artifacts and EASSR characteristics. Artifactual ICs were automatically identified based on their spectrum. Main results. For 40 Hz amplitude modulation (AM) stimulation at comfort level, in high SNR recordings, ICA succeeded in removing CI artifacts from all recording channels, without distorting the EASSR. For lower SNR recordings, with 40 Hz AM stimulation at lower levels, or 90 Hz AM stimulation, ICA either distorted the EASSR or could not remove all CI artifacts in most subjects, except for two of the seven subjects tested with low level 40 Hz AM stimulation. Noise levels were reduced after ICA was applied, and up to 29 ICs were rejected, suggesting poor ICA separation quality. Significance. We hypothesize that ICA is capable of separating CI artifacts and EASSR when the contralateral hemisphere is EASSR-dominated. For small EASSRs or large CI artifact amplitudes, ICA separation quality is insufficient to ensure complete CI artifacts attenuation without EASSR distortion.
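    The pipeline in the record above — decompose the EEG into independent components, flag artifactual ICs automatically from their spectra, and reconstruct without them — can be sketched as follows. This is a simplified illustration using scikit-learn's FastICA on simulated data; the spectral-ratio criterion, thresholds, and names are assumptions, not the authors' implementation (real CI artifacts are broadband pulse trains rather than a single spectral line):

```python
import numpy as np
from sklearn.decomposition import FastICA

def attenuate_artifact_ics(eeg, fs, artifact_freq, ratio_thresh=10.0):
    """Zero out ICs whose spectrum is dominated by a stimulation-related frequency.

    eeg: (n_channels, n_samples); fs: sampling rate in Hz.
    artifact_freq: frequency (Hz) where artifact power concentrates.
    Returns the reconstructed EEG and the boolean mask of rejected ICs.
    """
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(eeg.T)                  # (n_samples, n_components)
    freqs = np.fft.rfftfreq(sources.shape[0], 1 / fs)
    spectra = np.abs(np.fft.rfft(sources, axis=0)) ** 2
    target = np.argmin(np.abs(freqs - artifact_freq))
    # Flag ICs where power at the artifact frequency dwarfs the median power.
    bad = spectra[target] / np.median(spectra, axis=0) > ratio_thresh
    sources[:, bad] = 0.0                               # remove artifactual sources
    return ica.inverse_transform(sources).T, bad

# Demo: a 4-channel mixture of brain-like noise plus a strong 90 Hz artifact.
rng = np.random.default_rng(1)
fs, n = 1000, 4000
t = np.arange(n) / fs
brain = rng.standard_normal((3, n))
artifact = 20 * np.sin(2 * np.pi * 90 * t)
mixing = rng.standard_normal((4, 4))
eeg = mixing @ np.vstack([brain, artifact])
cleaned, bad = attenuate_artifact_ics(eeg, fs, 90.0)
print(bad.sum())  # number of ICs flagged as artifact
```

    The residual-EASSR distortion the authors report corresponds to the case where the neural response and the artifact are not cleanly separated into distinct ICs, so zeroing the flagged components also removes response energy.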

  12. Subclinical alexithymia modulates early audio-visual perceptive and attentional event-related potentials

    Directory of Open Access Journals (Sweden)

    Dyna eDelle-Vigne

    2014-03-01

    Full Text Available Introduction: Previous studies have highlighted the advantage of audio–visual oddball tasks (instead of unimodal ones) in order to electrophysiologically index subclinical behavioral differences. Since alexithymia is highly prevalent in the general population, we investigated whether the use of various bimodal tasks could elicit emotional effects in low- versus high-alexithymic scorers. Methods: Fifty students (33 females) were split into groups based on low and high scores on the Toronto Alexithymia Scale. During event-related potential recordings, they were exposed to three kinds of audio–visual oddball tasks: neutral (geometrical forms and bips), animal (dog and cock with their respective shouts), or emotional (faces and voices) stimuli. In each condition, participants were asked to quickly detect deviant events occurring amongst a train of frequent matching stimuli (e.g., push a button when a sad face–voice pair appeared amongst a train of neutral face–voice pairs). P100, N100, and P300 components were analyzed: P100 refers to visual perceptive processing, N100 to auditory processing, and the P300 relates to response-related stages. Results: High-alexithymic scorers presented a particular pattern of results when processing the emotional stimulations, reflected in early ERP components by increased P100 and N100 amplitudes in the emotional oddball tasks. Conclusions: Our findings suggest that high-alexithymic scorers require heightened early attentional resources when confronted with emotional stimuli.

  13. Age-dependent impairment of auditory processing under spatially focused and divided attention: an electrophysiological study.

    Science.gov (United States)

    Wild-Wall, Nele; Falkenstein, Michael

    2010-01-01

    By using event-related potentials (ERPs) the present study examines if age-related differences in preparation and processing especially emerge during divided attention. Binaurally presented auditory cues called for focused (valid and invalid) or divided attention to one or both ears. Responses were required to subsequent monaurally presented valid targets (vowels), but had to be suppressed to non-target vowels or invalidly cued vowels. Middle-aged participants were more impaired under divided attention than young ones, likely due to an age-related decline in preparatory attention following cues as was reflected in a decreased CNV. Under divided attention, target processing was increased in the middle-aged, likely reflecting compensatory effort to fulfill task requirements in the difficult condition. Additionally, middle-aged participants processed invalidly cued stimuli more intensely as was reflected by stimulus ERPs. The results suggest an age-related impairment in attentional preparation after auditory cues especially under divided attention and latent difficulties to suppress irrelevant information.

  14. Neural plasticity expressed in central auditory structures with and without tinnitus

    Directory of Open Access Journals (Sweden)

    Larry E Roberts

    2012-05-01

    Full Text Available Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To address this question, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by EEG are similar to those induced in age- and hearing-loss-matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz 40-Hz amplitude-modulated sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and P2 transient response known to localize to primary and nonprimary auditory cortex, respectively. P2 amplitude increased with training equally in participants with tinnitus and in control subjects, suggesting normal remodeling of nonprimary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls ASSR phase advanced toward the stimulus waveform by about ten degrees over training, in agreement with previous results obtained in young normal hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not nonprimary auditory cortex. Auditory training did not reduce tinnitus loudness although a small effect on the tinnitus spectrum was detected.

  15. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    Science.gov (United States)

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine ePecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper, and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
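    The prediction measure described in the record above — cross-correlating tap timing with pacing events — can be illustrated with a simple lag-based index: if inter-tap intervals correlate more strongly with the *current* pacing interval (lag 0) than with the *previous* one (lag 1), the tapper is predicting rather than tracking the tempo changes. A hedged sketch on simulated data; the index definition only approximates the study's method and all names are illustrative:

```python
import numpy as np

def prediction_index(iti, ioi):
    """Prediction/tracking ratio for paced tapping with gradual tempo changes.

    iti: inter-tap intervals; ioi: pacing inter-onset intervals (same length).
    Lag-0 cross-correlation indexes prediction; lag-1 indexes tracking.
    Values > 1 suggest predictive behaviour.
    """
    lag0 = np.corrcoef(iti, ioi)[0, 1]
    lag1 = np.corrcoef(iti[1:], ioi[:-1])[0, 1]
    return lag0 / lag1

# Demo: a predictor's intervals follow the current tempo; a tracker's lag behind.
rng = np.random.default_rng(3)
ioi = 500 + 50 * np.sin(np.linspace(0, 6 * np.pi, 60))  # tempo changes (ms)
predictor = ioi + 5 * rng.standard_normal(60)
tracker = np.roll(ioi, 1) + 5 * rng.standard_normal(60)
print(prediction_index(predictor, ioi) > 1)   # True
print(prediction_index(tracker, ioi) > 1)     # False
```

    In the study, an index of this kind computed per participant served as the parametric regressor for the fMRI analysis, so higher-prediction trials could be linked to the cortico-cerebellar and medial cortical networks.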

  17. Effects of background noise on inter-trial phase coherence and auditory N1-P2 responses to speech stimuli.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2015-10-01

    This study investigated the effects of a speech-babble background noise on inter-trial phase coherence (ITPC, also referred to as phase locking value (PLV)) and auditory event-related responses (AERP) to speech sounds. Specifically, we analyzed EEG data from 11 normal hearing subjects to examine whether ITPC can predict noise-induced variations in the obligatory N1-P2 complex response. N1-P2 amplitude and latency data were obtained for the /bu/ syllable in quiet and noise listening conditions. ITPC data in delta, theta, and alpha frequency bands were calculated for the N1-P2 responses in the two passive listening conditions. Consistent with previous studies, background noise produced significant amplitude reduction and latency increase in N1 and P2, which were accompanied by significant ITPC decreases in all three frequency bands. Correlation analyses further revealed that variations in ITPC were able to predict the amplitude and latency variations in N1-P2. The results suggest that trial-by-trial analysis of cortical neural synchrony is a valuable tool in understanding the modulatory effects of background noise on AERP measures. Copyright © 2015 Elsevier B.V. All rights reserved.
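    The record above does not spell out the ITPC formula, but the quantity is standard: band-limit each epoch, take the instantaneous phase, and measure the length of the mean phase vector across trials (1 = perfect phase locking, 0 = random phase). A minimal sketch with NumPy/SciPy; the function name and the simulated data are illustrative, not from the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpc(trials, fs, band):
    """Inter-trial phase coherence for single-channel EEG epochs.

    trials: (n_trials, n_samples) array time-locked to stimulus onset.
    fs: sampling rate in Hz; band: (low, high) edge frequencies in Hz.
    Returns ITPC as a function of time, in [0, 1].
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    filtered = filtfilt(b, a, trials, axis=1)          # band-limit each trial
    phase = np.angle(hilbert(filtered, axis=1))        # instantaneous phase
    return np.abs(np.exp(1j * phase).mean(axis=0))     # length of mean phase vector

# Demo: phase-locked trials give high ITPC, random-phase trials low ITPC.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 1, 1 / fs)
locked = np.array([np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
                   for _ in range(50)])
random = np.array([np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi))
                   for _ in range(50)])
print(itpc(locked, fs, (4, 8)).mean())   # close to 1
print(itpc(random, fs, (4, 8)).mean())   # close to 0
```

    Repeating this per band (delta, theta, alpha as in the study) and correlating band-wise ITPC with N1-P2 amplitude and latency across conditions reproduces the kind of analysis the record describes.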

  18. Biological impact of music and software-based auditory training

    Science.gov (United States)

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals – both young and old – encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in noisy environments and reading, pointing to an intersection between hearing and cognition. Musical experience, amplification, and software-based training can improve these biological signals. These findings of biological plasticity, in a variety of subject populations, relate to attention and auditory memory, and represent an integrated auditory system influenced by both sensation and cognition. Learning outcomes: The reader will (1) understand that the auditory system is malleable to experience and training, (2) learn the ingredients necessary for auditory learning to successfully be applied to communication, (3) learn that the auditory brainstem response to complex sounds (cABR) is a window into the integrated auditory system, and (4) see examples of how cABR can be used to track the outcome of experience and training. PMID:22789822

  19. Hypnotizability and Placebo Analgesia in Waking and Hypnosis as Modulators of Auditory Startle Responses in Healthy Women: An ERP Study.

    Science.gov (United States)

    De Pascalis, Vilfredo; Scacchia, Paolo

    2016-01-01

    We evaluated the influence of hypnotizability, pain expectation, and placebo analgesia in waking and hypnosis on tonic pain relief. We also investigated how placebo analgesia affects somatic responses (eye blink) and N100 and P200 waves of event-related potentials (ERPs) elicited by auditory startle probes. Although expectation plays an important role in placebo and hypnotic analgesia, the neural mechanisms underlying these treatments are still poorly understood. We used the cold cup test (CCT) to induce tonic pain in 53 healthy women. Placebo analgesia was initially produced by manipulation, in which the intensity of pain induced by the CCT was surreptitiously reduced after the administration of a sham analgesic cream. Participants were then tested in waking and hypnosis under three treatments: (1) resting (Baseline); (2) CCT-alone (Pain); and (3) CCT plus placebo cream for pain relief (Placebo). For each painful treatment, we assessed pain and distress ratings, eye blink responses, and N100 and P200 amplitudes. We used LORETA analysis of N100 and P200 waves, as elicited by auditory startle, to identify cortical regions sensitive to pain reduction through placebo and hypnotic analgesia. Higher pain expectation was associated with higher pain reductions. In highly hypnotizable participants placebo treatment produced significant reductions of pain and distress perception in both the waking and hypnosis conditions. The P200 wave, during placebo analgesia, was larger in the frontal left hemisphere, while placebo analgesia during hypnosis involved the activity of the left hemisphere including the occipital region. These findings demonstrate that hypnosis and placebo analgesia are different processes of top-down regulation. Pain reduction was associated with larger EMG startle amplitudes, N100 and P200 responses, and enhanced activity within the frontal, parietal, and anterior and posterior cingulate gyri. LORETA results showed that placebo analgesia modulated pain-responsive areas

  20. Natural stimuli improve auditory BCIs with respect to ergonomics and performance

    Science.gov (United States)

    Höhne, Johannes; Krenzlin, Konrad; Dähne, Sven; Tangermann, Michael

    2012-08-01

    Moving from well-controlled, brisk artificial stimuli to natural and less-controlled stimuli seems counter-intuitive for event-related potential (ERP) studies. As natural stimuli typically contain a richer internal structure, they might introduce higher levels of variance and jitter in the ERP responses. Both characteristics are unfavorable for good single-trial classification of ERPs in the context of a multi-class brain-computer interface (BCI) system, where the class-discriminant information between target stimuli and non-target stimuli must be maximized. For the application in an auditory BCI system, however, the transition from simple artificial tones to natural syllables can be useful despite the variance introduced. In the presented study, healthy users (N = 9) participated in an offline auditory nine-class BCI experiment with artificial and natural stimuli. It is shown that the use of syllables as natural stimuli not only improves the users’ ergonomic ratings; the classification performance is also increased. Moreover, natural stimuli obtain a better balance in multi-class decisions, such that the number of systematic confusions between the nine classes is reduced. Hopefully, our findings may contribute to making auditory BCI paradigms more user-friendly and applicable for patients.

  1. Beneficial auditory and cognitive effects of auditory brainstem implantation in children.

    Science.gov (United States)

    Colletti, Liliana

    2007-09-01

    This preliminary study demonstrates the development of hearing ability and shows that there is a significant improvement in some cognitive parameters related to selective visual/spatial attention and to fluid or multisensory reasoning in children fitted with auditory brainstem implantation (ABI). The improvement in cognitive parameters is due to several factors, among which there is certainly, as demonstrated in the literature on cochlear implants (CIs), the activation of the auditory sensory canal, which was previously absent. The findings of the present study indicate that children with cochlear or cochlear nerve abnormalities with associated cognitive deficits should not be excluded from ABI implantation. The indications for ABI have been extended over the last 10 years to adults with non-tumoral (NT) cochlear or cochlear nerve abnormalities that cannot benefit from CI. We demonstrated that the ABI with surface electrodes may provide sufficient stimulation of the central auditory system in adults for open set speech recognition. These favourable results motivated us to extend ABI indications to children with profound hearing loss who were not candidates for a CI. This study investigated the performance of young deaf children undergoing ABI, in terms of their auditory perceptual development and their non-verbal cognitive abilities. In our department from 2000 to 2006, 24 children aged 14 months to 16 years received an ABI for different tumour and non-tumour diseases. Two children had NF2 tumours. Eighteen children had bilateral cochlear nerve aplasia. In this group, nine children had associated cochlear malformations, two had unilateral facial nerve agenesia and two had combined microtia, aural atresia and middle ear malformations. Four of these children had previously been fitted elsewhere with a CI with no auditory results. One child had bilateral incomplete cochlear partition (type II); one child, who had previously been fitted unsuccessfully elsewhere

  2. A comparison of anesthetic agents and their effects on the response properties of the peripheral auditory system.

    Science.gov (United States)

    Dodd, F; Capranica, R R

    1992-10-01

    Anesthetic agents were compared in order to identify the most appropriate agent for use during surgery and electrophysiological recordings in the auditory system of the tokay gecko (Gekko gecko). Each agent was first screened for anesthetic and analgesic properties and, if found satisfactory, it was subsequently tested in electrophysiological recordings in the auditory nerve. The following anesthetic agents fulfilled our criteria and were selected for further screening: sodium pentobarbital (60 mg/kg); sodium pentobarbital (30 mg/kg) and oxymorphone (1 mg/kg); 3.2% isoflurane; ketamine (440 mg/kg) and oxymorphone (1 mg/kg). These agents were subsequently compared on the basis of their effect on standard response properties of auditory nerve fibers. Our results verified that different anesthetic agents can have significant effects on most of the parameters commonly used in describing the basic response properties of the auditory system in vertebrates. We therefore conclude from this study that the selection of an appropriate experimental protocol is critical and must take into consideration the effects of anesthesia on auditory responsiveness. In the tokay gecko, we recommend 3.2% isoflurane for general surgical procedures; and for electrophysiological recordings in the eighth nerve we recommend barbiturate anesthesia of appropriate dosage in combination if possible with an opioid agent to provide additional analgesic action.

  3. Tinnitus intensity dependent gamma oscillations of the contralateral auditory cortex.

    Directory of Open Access Journals (Sweden)

    Elsa van der Loo

    Full Text Available BACKGROUND: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. METHODS AND FINDINGS: In unilateral tinnitus patients (N = 15; 10 right, 5 left), source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05. CONCLUSION: Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception.

  4. Response properties of neurons in the cat's putamen during auditory discrimination.

    Science.gov (United States)

    Zhao, Zhenling; Sato, Yu; Qin, Ling

    2015-10-01

    The striatum integrates diverse convergent input and plays a critical role in goal-directed behaviors. To date, the auditory functions of the striatum are less studied. Recently, it was demonstrated that auditory cortico-striatal projections influence behavioral performance during a frequency discrimination task. To reveal the functions of striatal neurons in auditory discrimination, we recorded single-unit spike activities in the putamen (dorsal striatum) of free-moving cats while they performed a Go/No-go task to discriminate sounds with different modulation rates (12.5 Hz vs. 50 Hz) or envelopes (damped vs. ramped). We found that the putamen neurons can be broadly divided into four groups according to their contributions to sound discrimination. First, 40% of neurons showed vigorous responses synchronized to the sound envelope, and could precisely discriminate different sounds. Second, 18% of neurons showed a high preference for ramped over damped sounds, but no preference for modulation rate; they could only discriminate changes of the sound envelope. Third, 27% of neurons rapidly adapted to the sound stimuli and showed no ability to discriminate sounds. Fourth, 15% of neurons discriminated the sounds dependent on reward-prediction. Compared to the passive listening condition, the activities of putamen neurons were significantly enhanced by engagement in the auditory tasks, but not modulated by the cat's behavioral choice. The coexistence of multiple types of neurons suggests that the putamen is involved in the transformation from auditory representation to stimulus-reward association. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Validation of auditory detection response task method for assessing the attentional effects of cognitive load.

    Science.gov (United States)

    Stojmenova, Kristina; Sodnik, Jaka

    2018-07-04

    There are 3 standardized versions of the Detection Response Task (DRT), 2 using visual stimuli (remote DRT and head-mounted DRT) and one using tactile stimuli. In this article, we present a study that proposes and validates a type of auditory signal to be used as a DRT stimulus and evaluates the proposed auditory version of this method by comparing it with the standardized visual and tactile versions. This was a within-subject design study performed in a driving simulator with 24 participants. Each participant performed 8 2-min-long driving sessions in which they had to perform 3 different tasks: driving, answering to DRT stimuli, and performing a cognitive task (n-back task). Presence of additional cognitive load and type of DRT stimuli were defined as independent variables. DRT response times and hit rates, n-back task performance, and pupil size were observed as dependent variables. Significant increases in pupil size for trials with a cognitive task compared to trials without showed that cognitive load was induced properly. Each DRT version showed a significant increase in response times and a decrease in hit rates for trials with a secondary cognitive task compared to trials without. The auditory and tactile versions yielded similar response-time and hit-rate differences, both significantly larger than those of the visual version. There were no significant differences in performance rate between the trials without DRT stimuli compared to trials with, nor among the trials with different DRT stimuli modalities. The results from this study show that the auditory DRT version, using the signal implementation suggested in this article, is sensitive to the effects of cognitive load on driver's attention and performs significantly better than the remote visual and tactile versions for auditory-vocal cognitive (n-back) secondary tasks.

  6. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Maojin Liang

    2017-10-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited: ten good CI performers (GCP) and ten poor performers (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity was in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We suppose that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential for cortical plasticity. The evolution of brain activity appears to be related to CI auditory outcomes.

  7. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants.

    Science.gov (United States)

    Liang, Maojin; Zhang, Junpeng; Liu, Jiahao; Chen, Yuebo; Cai, Yuexin; Wang, Xianjun; Wang, Junbo; Zhang, Xueyuan; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing

    2017-01-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited: ten good CI performers (GCP) and ten poor performers (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity was in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We suppose that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential for cortical plasticity. The evolution of brain activity appears to be related to CI auditory outcomes.

  8. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur both naturally and in response to stimulation. Human development occurs along a trajectory that can last decades and is studied using behavioral psychophysics as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear, prolonged maturation of auditory development well into the teenage years. Maturation involves the auditory pathways; however, non-auditory changes (attention, memory, cognition) also play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  9. Using EEG and stimulus context to probe the modelling of auditory-visual speech.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2016-02-01

    We investigated whether internal models of the relationship between lip movements and corresponding speech sounds [Auditory-Visual (AV) speech] could be updated via experience. AV associations were indexed by early and late event related potentials (ERPs) and by oscillatory power and phase locking. Different AV experience was produced via a context manipulation. Participants were presented with valid (the conventional pairing) and invalid AV speech items in either a 'reliable' context (80% AV-valid items) or an 'unreliable' context (80% AV-invalid items). The results showed that for the reliable context, there was N1 facilitation for AV compared to auditory-only speech. This N1 facilitation was not affected by AV validity. Later ERPs showed a difference in amplitude between valid and invalid AV speech, and there was significant enhancement of power for valid versus invalid AV speech. These response patterns did not change over the context manipulation, suggesting that the internal models of AV speech were not updated by experience. The results also showed that the facilitation of N1 responses did not vary as a function of the salience of visual speech (as previously reported); in post-hoc analyses, it appeared instead that N1 facilitation varied according to the relative time of the acoustic onset, suggesting that for AV events the N1 may be more sensitive to AV timing than to form. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  10. Evaluation of peripheral auditory pathways and brainstem in obstructive sleep apnea

    Directory of Open Access Journals (Sweden)

    Erika Matsumura

    Abstract Introduction: Obstructive sleep apnea causes changes in normal sleep architecture, fragmenting it chronically with intermittent hypoxia and leading to serious long-term health consequences. It is believed that the occurrence of respiratory events during sleep, such as apnea and hypopnea, can impair the transmission of nerve impulses along the auditory pathway, which is highly dependent on the supply of oxygen. However, this association is not well established in the literature. Objective: To compare the evaluation of the peripheral auditory pathway and brainstem between individuals with and without obstructive sleep apnea. Methods: The sample consisted of 38 adult males, mean age 35.8 (±7.2) years, divided into four groups matched for age and body mass index. Based on polysomnography, the groups were classified as: control (n = 10), mild obstructive sleep apnea (n = 11), moderate obstructive sleep apnea (n = 8), and severe obstructive sleep apnea (n = 9). All study subjects denied a history of risk for hearing loss and underwent audiometry, tympanometry, acoustic reflex testing, and Brainstem Auditory Evoked Response. Statistical analyses were performed using three-factor ANOVA, two-factor ANOVA, the chi-square test, and Fisher's exact test. The significance level for all tests was 5%. Results: There was no difference between the groups for hearing thresholds, tympanometry, or the evaluated Brainstem Auditory Evoked Response parameters. An association was observed between the presence of obstructive sleep apnea and changes in the absolute latency of wave V (p = 0.03), and between moderate obstructive sleep apnea and changes in the latency of wave V (p = 0.01). Conclusion: The presence of obstructive sleep apnea is associated with changes in nerve conduction of acoustic stimuli in the auditory pathway in the brainstem. Increasing obstructive sleep apnea severity does not promote worsening of the responses assessed by audiometry, tympanometry, and Brainstem Auditory Evoked Response.

  11. Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.

    Science.gov (United States)

    Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S

    2013-02-01

    Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Using functional magnetic resonance imaging, we first studied 31 healthy participants, then determined the response in 18 patients with mild cognitive impairment (MCI) and 18 with probable Alzheimer's disease (pAD), compared to 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. In the healthy subjects, suppression of auditory cortical responses was related to greater success in encoding heard sentences, and this suppression was also associated with greater activity in the semantic system. In contrast, auditory cortical suppression was reduced in patients with MCI and absent in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is thus associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in the auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing. Copyright © 2012 American Neurological Association.

  12. Comparison between chloral hydrate and propofol-ketamine as sedation regimens for pediatric auditory brainstem response testing.

    Science.gov (United States)

    Abulebda, Kamal; Patel, Vinit J; Ahmed, Sheikh S; Tori, Alvaro J; Lutfi, Riad; Abu-Sultaneh, Samer

    2017-10-28

    The use of diagnostic auditory brainstem response testing under sedation is currently the "gold standard" in infants and young children who are not developmentally capable of completing the test. The aim of this study was to compare a propofol-ketamine regimen to an oral chloral hydrate regimen for sedating children undergoing auditory brainstem response testing. Patients between 4 months and 6 years of age who required sedation for auditory brainstem response testing were included in this retrospective study. Drug doses, adverse effects, sedation times, and the effectiveness of the sedative regimens were reviewed. Seventy-three patients underwent oral chloral hydrate sedation, while 117 received propofol-ketamine sedation. Twelve percent of the patients in the chloral hydrate group failed to achieve the desired sedation level. The average procedure, recovery, and total nursing times were significantly lower in the propofol-ketamine group, which, however, experienced a higher incidence of transient hypoxemia. Both sedation regimens can be successfully used for sedating children undergoing auditory brainstem response testing. While deep sedation using a propofol-ketamine regimen offers more efficiency than moderate sedation using chloral hydrate, it does carry a higher incidence of transient hypoxemia, which warrants the use of a highly skilled team trained in pediatric cardio-respiratory monitoring and airway management. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  13. P300 component of event-related potentials in persons with Asperger disorder.

    Science.gov (United States)

    Iwanami, Akira; Okajima, Yuka; Ota, Haruhisa; Tani, Masayuki; Yamada, Takashi; Yamagata, Bun; Hashimoto, Ryuichiro; Kanai, Chieko; Takashio, Osamu; Inamoto, Atsuko; Ono, Taisei; Takayama, Yukiko; Kato, Nobumasa

    2014-10-01

    In the present study, we investigated auditory event-related potentials in adults with Asperger disorder and normal controls using an auditory oddball task and a novelty oddball task. Task performance and the latencies of P300 evoked by both target and novel stimuli in the two tasks did not differ between the two groups. Analysis of variance revealed that there was a significant interaction effect between group and electrode site on the mean amplitude of the P300 evoked by novel stimuli, which indicated that there was an altered distribution of the P300 in persons with Asperger disorder. In contrast, there was no significant interaction effect on the mean P300 amplitude elicited by target stimuli. Considering that P300 comprises two main subcomponents, frontal-central-dominant P3a and parietal-dominant P3b, our results suggested that persons with Asperger disorder have enhanced amplitude of P3a, which indicated activated prefrontal function in this task.
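    The site-wise P300 measures discussed above are typically mean amplitudes of the averaged ERP in a fixed post-stimulus window. A minimal sketch, assuming a baseline-corrected averaged epoch, a 1000 Hz sampling rate, and a 300-500 ms window (all assumptions for illustration, not taken from the paper):

```python
def mean_amplitude(epoch, srate=1000, tmin=0.3, tmax=0.5):
    """Mean voltage of a baseline-corrected epoch in [tmin, tmax) seconds.

    epoch: sequence of voltage samples starting at stimulus onset (t = 0).
    """
    # convert the window boundaries from seconds to sample indices
    i0, i1 = round(tmin * srate), round(tmax * srate)
    window = epoch[i0:i1]
    return sum(window) / len(window)
```

    One such amplitude per electrode site (e.g. frontal-central vs. parietal) is what a group-by-electrode-site interaction analysis would be computed over.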

  14. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment

    OpenAIRE

    Frtusova, Jana B.; Phillips, Natalie A.

    2016-01-01

    This study examined the effect of auditory-visual (AV) speech stimuli on working memory in older adults with poorer-hearing (PH) in comparison to age- and education-matched older adults with better hearing (BH). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between pe...

  15. Towards a truly mobile auditory brain-computer interface: exploring the P300 to take away.

    Science.gov (United States)

    De Vos, Maarten; Gandras, Katharina; Debener, Stefan

    2014-01-01

    In a previous study we presented a low-cost, small, and wireless 14-channel EEG system suitable for field recordings (Debener et al., 2012, Psychophysiology). In the present follow-up study we investigated whether a single-trial P300 response can be reliably measured with this system while subjects freely walk outdoors. Twenty healthy participants performed a three-class auditory oddball task, which included rare target and non-target distractor stimuli presented with equal probabilities of 16%. Data were recorded in a seated (control) condition and in a walking condition, both realized outdoors. A significantly larger P300 event-related potential amplitude was evident for targets compared to distractors in both conditions, as required for a brain-computer interface (BCI). This leads us to conclude that a truly mobile auditory BCI system is feasible. © 2013.
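
    The three-class oddball sequence described above (16% targets, 16% distractors, standards otherwise) can be sketched as follows. The independent-draw scheme is our simplification; oddball paradigms usually also constrain how often rare stimuli may repeat.

```python
import random

def oddball_sequence(n, p_rare=0.16, seed=0):
    """Independent draws: ~16% targets, ~16% distractors, rest standards."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n):
        x = rng.random()
        if x < p_rare:
            seq.append("target")
        elif x < 2 * p_rare:
            seq.append("distractor")
        else:
            seq.append("standard")
    return seq
```

    Averaging the EEG epochs time-locked to "target" versus "distractor" events from such a sequence is what yields the P300 contrast reported above.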

  16. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  17. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    Science.gov (United States)

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes in sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, whose local role in memory consolidation in auditory cortex is well established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to a recruitment of

  18. Spiking in auditory cortex following thalamic stimulation is dominated by cortical network activity

    Science.gov (United States)

    Krause, Bryan M.; Raz, Aeyal; Uhlrich, Daniel J.; Smith, Philip H.; Banks, Matthew I.

    2014-01-01

    The state of the sensory cortical network can have a profound impact on neural responses and perception. In rodent auditory cortex, sensory responses are reported to occur in the context of network events, similar to brief UP states, that produce “packets” of spikes and are associated with synchronized synaptic input (Bathellier et al., 2012; Hromadka et al., 2013; Luczak et al., 2013). However, traditional models based on data from visual and somatosensory cortex predict that ascending sensory thalamocortical (TC) pathways sequentially activate cells in layers 4 (L4), L2/3, and L5. The relationship between these two spatio-temporal activity patterns is unclear. Here, we used calcium imaging and electrophysiological recordings in murine auditory TC brain slices to investigate the laminar response pattern to stimulation of TC afferents. We show that although monosynaptically driven spiking in response to TC afferents occurs, the vast majority of spikes fired following TC stimulation occurs during brief UP states and outside the context of the L4>L2/3>L5 activation sequence. Specifically, monosynaptic subthreshold TC responses with similar latencies were observed throughout layers 2–6, presumably via synapses onto dendritic processes located in L3 and L4. However, monosynaptic spiking was rare, and occurred primarily in L4 and L5 non-pyramidal cells. By contrast, during brief, TC-induced UP states, spiking was dense and occurred primarily in pyramidal cells. These network events always involved infragranular layers, whereas involvement of supragranular layers was variable. During UP states, spike latencies were comparable between infragranular and supragranular cells. These data are consistent with a model in which activation of auditory cortex, especially supragranular layers, depends on internally generated network events that represent a non-linear amplification process, are initiated by infragranular cells and tightly regulated by feed-forward inhibitory

  19. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder.

    Science.gov (United States)

    Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed

    2016-03-01

    This study assessed the relationship between working memory capacity and auditory stream segregation, measured with the concurrent minimum audible angle, in children with a diagnosed auditory processing disorder (APD). The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9-11 years) according to the subtests of a multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests in the 2 groups. The group with APD had significantly lower scores than the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle at the most frontal reference location (0° azimuth), and weaker negative correlations at the most lateral reference location (60° azimuth), in the children with APD. The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD, and suggests that lower working memory capacity in children with APD may be a possible cause of their inability to segregate and group incoming information.

  20. Association of Concurrent fNIRS and EEG Signatures in Response to Auditory and Visual Stimuli.

    Science.gov (United States)

    Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Herrmann, Christoph S; Debener, Stefan

    2015-09-01

    Functional near-infrared spectroscopy (fNIRS) has been proven reliable for investigation of low-level visual processing in both infants and adults. Similar investigation of fundamental auditory processes with fNIRS, however, remains only partially complete. Here we employed a systematic three-level validation approach to investigate whether fNIRS could capture fundamental aspects of bottom-up acoustic processing. We performed a simultaneous fNIRS-EEG experiment with visual and auditory stimulation in 24 participants, which allowed the relationship between changes in neural activity and hemoglobin concentrations to be studied. In the first level, the fNIRS results showed a clear distinction between visual and auditory sensory modalities. Specifically, the results demonstrated area specificity, that is, maximal fNIRS responses in visual and auditory areas for the visual and auditory stimuli respectively, and stimulus selectivity, whereby the visual and auditory areas responded mainly toward their respective stimuli. In the second level, a stimulus-dependent modulation of the fNIRS signal was observed in the visual area, as well as a loudness modulation in the auditory area. Finally in the last level, we observed significant correlations between simultaneously-recorded visual evoked potentials and deoxygenated hemoglobin (DeoxyHb) concentration, and between late auditory evoked potentials and oxygenated hemoglobin (OxyHb) concentration. In sum, these results suggest good sensitivity of fNIRS to low-level sensory processing in both the visual and the auditory domain, and provide further evidence of the neurovascular coupling between hemoglobin concentration changes and non-invasive brain electrical activity.
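    The final validation level above rests on ordinary correlations between evoked-potential amplitudes and hemoglobin concentration changes. A self-contained Pearson-correlation sketch (variable names and data are illustrative, not from the study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # covariance term and the two standard-deviation terms
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

    Applied per subject, e.g. `pearson_r(vep_amplitudes, deoxyhb_changes)`, this is the kind of statistic behind the reported EEG-fNIRS associations.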

  1. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    Science.gov (United States)

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can

  2. A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance.

    Science.gov (United States)

    von Trapp, Gardiner; Buran, Bradley N; Sen, Kamal; Semple, Malcolm N; Sanes, Dan H

    2016-10-26

    The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability
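
    The signal detection logic above can be made concrete: with mean driven and spontaneous firing rates held fixed, sensitivity d' depends only on the trial-to-trial variability in the denominator. The firing-rate numbers below are illustrative, not from the recordings.

```python
def d_prime(mu_signal, mu_noise, sd_pooled):
    # d' = difference of mean responses, scaled by pooled trial-to-trial SD
    return (mu_signal - mu_noise) / sd_pooled

passive = d_prime(12.0, 8.0, 4.0)   # quiet listening: larger variability
engaged = d_prime(12.0, 8.0, 2.0)   # task engagement: variability halved
# identical mean firing rates, doubled sensitivity: engaged == 2 * passive
```

    This is the sense in which a variability decrease alone, without any change in firing rate, improves neural signal detection.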

  3. Event-related potential responses to perceptual reversals are modulated by working memory load.

    Science.gov (United States)

    Intaitė, Monika; Koivisto, Mika; Castelo-Branco, Miguel

    2014-04-01

    While viewing ambiguous figures, such as the Necker cube, the available perceptual interpretations alternate with one another. The role of higher-level mechanisms in such reversals remains unclear. We tested whether perceptual reversals of discontinuously presented Necker cube pairs depend on working memory resources by manipulating cognitive load while recording event-related potentials (ERPs). The ERPs showed early enhancements of negativity in response to the first cube, approximately 500 ms before perceived reversals. We found that working memory load influenced reversal-related brain responses to the second cube over occipital areas in the 150-300 ms post-stimulus window and over central areas in the P3 time window (300-500 ms), suggesting that it modulates intermediate visual processes. Interestingly, reversal rates remained unchanged by the working memory load. We propose that perceptual reversals in discontinuous presentation of ambiguous stimuli are governed by an early mechanism (well preceding pending reversals), while the effects of load on the reversal-related ERPs may reflect general top-down influences on visual processing, possibly mediated by the prefrontal cortex. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Missing and delayed auditory responses in young and older children with autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    J. Christopher Edgar

    2014-06-01

    Background: The development of left and right superior temporal gyrus (STG) 50 ms (M50) and 100 ms (M100) auditory responses in typically developing children (TD) and in children with autism spectrum disorder (ASD) was examined. It was hypothesized that (1) M50 responses would be observed equally often in younger and older children, (2) M100 responses would be observed more often in older than younger children, indicating later development of secondary auditory areas, and (3) M100 but not M50 would be observed less often in ASD than TD in both age groups, reflecting slower maturation of later-developing auditory areas in ASD. Methods: 35 typically developing controls, 63 ASD without language impairment (ASD-LI), and 38 ASD with language impairment (ASD+LI) were recruited. The presence or absence of a STG M50 and M100 was scored. Subjects were grouped into younger (6 to 10 years old) and older (11 to 15 years old) groups. Results: Although M50 responses were observed equally often in older and younger subjects and equally often in TD and ASD, left and right M50 responses were delayed in ASD-LI and ASD+LI. Group comparisons showed that in younger subjects M100 responses were observed more often in TD than ASD+LI (90% vs 66%, p = 0.04), with no differences between TD and ASD-LI (90% vs 76%, p = 0.14) or between ASD-LI and ASD+LI (76% vs 66%, p = 0.53). In older subjects, whereas no differences were observed between TD and ASD+LI, responses were observed more often in ASD-LI than ASD+LI. Conclusions: Although present in all groups, M50 responses were delayed in ASD, suggesting delayed development of earlier-developing auditory areas. Examining the TD data, findings indicated that by 11 years a right M100 should be observed in 100% of subjects and a left M100 in 80% of subjects. Thus, by 11 years, lack of a left and especially right M100 offers neurobiological insight into sensory processing that may underlie language or cognitive impairment.

  5. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    Science.gov (United States)

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  7. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David Pérez-González

    2014-02-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis, and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and which are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  8. Auditory N1 reveals planning and monitoring processes during music performance.

    Science.gov (United States)

    Mathias, Brian; Gehring, William J; Palmer, Caroline

    2017-02-01

    The current study investigated the relationship between planning processes and feedback monitoring during music performance, a complex task in which performers prepare upcoming events while monitoring their sensory outcomes. Theories of action planning in auditory-motor production tasks propose that the planning of future events co-occurs with the perception of auditory feedback. This study investigated the neural correlates of planning and feedback monitoring by manipulating the contents of auditory feedback during music performance. Pianists memorized and performed melodies at a cued tempo in a synchronization-continuation task while the EEG was recorded. During performance, auditory feedback associated with single melody tones was occasionally substituted with tones corresponding to future (next), present (current), or past (previous) melody tones. Only future-oriented altered feedback disrupted behavior: Future-oriented feedback caused pianists to slow down on the subsequent tone more than past-oriented feedback, and amplitudes of the auditory N1 potential elicited by the tone immediately following the altered feedback were larger for future-oriented than for past-oriented or noncontextual (unrelated) altered feedback; larger N1 amplitudes were associated with greater slowing following altered feedback in the future condition only. Feedback-related negativities were elicited in all altered feedback conditions. In sum, behavioral and neural evidence suggests that future-oriented feedback disrupts performance more than past-oriented feedback, consistent with planning theories that posit similarity-based interference between feedback and planning contents. Neural sensory processing of auditory feedback, reflected in the N1 ERP, may serve as a marker for temporal disruption caused by altered auditory feedback in auditory-motor production tasks. © 2016 Society for Psychophysiological Research.

  9. Towards an optimal paradigm for simultaneously recording cortical and brainstem auditory evoked potentials.

    Science.gov (United States)

    Bidelman, Gavin M

    2015-02-15

    Simultaneous recording of brainstem and cortical event-related brain potentials (ERPs) may offer a valuable tool for understanding the early neural transcription of behaviorally relevant sounds and the hierarchy of signal processing operating at multiple levels of the auditory system. To date, dual recordings have been challenged by technological and physiological limitations, including the different optimal parameters necessary to elicit each class of ERP (e.g., differential adaptation/habituation effects and the number of trials needed to obtain adequate response signal-to-noise ratio). We investigated a new stimulus paradigm for concurrent recording of the auditory brainstem frequency-following response (FFR) and cortical ERPs. The paradigm is "optimal" in that it uses a clustered stimulus presentation and variable interstimulus interval (ISI) to (i) achieve the most ideal acquisition parameters for eliciting subcortical and cortical responses, (ii) obtain an adequate number of trials to detect each class of response, and (iii) minimize neural adaptation/habituation effects. Comparison between clustered and traditional (fixed, slow ISI) stimulus paradigms revealed minimal change in the amplitudes or latencies of either the brainstem FFR or the cortical ERP. The clustered paradigm offered over a 3× increase in recording efficiency compared to conventional fixed-ISI presentation and thus a more rapid protocol for obtaining dual brainstem-cortical recordings in individual listeners. We infer that faster recording of subcortical and cortical potentials might allow more complete and sensitive testing of neurophysiological function and aid in the differential assessment of auditory function. Copyright © 2014 Elsevier B.V. All rights reserved.
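The clustered-presentation idea can be sketched as a stimulus schedule: rapid within-cluster repetitions supply the many sweeps the brainstem FFR needs, while jittered inter-cluster gaps provide the slower, variable ISI appropriate for the cortical ERP. All timing parameters below are illustrative assumptions, not the study's actual values.

```python
import random

def clustered_schedule(n_clusters, per_cluster, short_isi, gap_range, seed=0):
    """Stimulus onset times (s): rapid repetitions inside each cluster,
    separated by variable inter-cluster gaps (illustrative parameters)."""
    rng = random.Random(seed)
    t, onsets = 0.0, []
    for _ in range(n_clusters):
        for _ in range(per_cluster):
            onsets.append(t)
            t += short_isi                 # fast within-cluster rate for the FFR
        t += rng.uniform(*gap_range)       # jittered gap for the cortical ERP
    return onsets

clustered = clustered_schedule(200, 10, 0.05, (0.8, 1.2))
fixed = [i * 1.0 for i in range(2000)]     # conventional fixed, slow ISI
print(len(clustered), clustered[-1], fixed[-1])  # same trial count, far shorter session
```

Both schedules deliver 2000 trials, but the clustered session is several times shorter, which is the sense in which clustering raises recording efficiency.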

  10. Aging increases distraction by auditory oddballs in visual, but not auditory tasks.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-05-01

    Aging is typically considered to bring a reduction in the ability to resist distraction by task-irrelevant stimuli. Yet recent work suggests that this conclusion must be qualified: the effect of aging is mitigated by whether irrelevant and target stimuli emanate from the same modality or from distinct ones. Some studies suggest that aging increases distraction especially within a modality, while others suggest the increase is greater across modalities. Here we report the first study to measure the effect of aging on deviance distraction in cross-modal (auditory-visual) and uni-modal (auditory-auditory) oddball tasks. Young and older adults were asked to judge the parity of target digits (auditory or visual in distinct blocks of trials), each preceded by a task-irrelevant sound (the same tone on most trials, the standard sound, or, on rare and unpredictable trials, a burst of white noise, the deviant sound). Deviant sounds yielded distraction (longer response times relative to standard sounds) in both tasks and age groups. However, an age-related increase in distraction was observed in the cross-modal task and not in the uni-modal task. We argue that aging might affect processes involved in the switching of attention across modalities and speculate that this may be due to slowing of this type of attentional shift or to a reduction in the cognitive control required to re-orient attention toward the target's modality.

  11. Modeling auditory-nerve responses to electrical stimulation

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian

    2014-01-01

    Cochlear implants (CI) directly stimulate the auditory nerve (AN), bypassing the mechano-electrical transduction in the inner ear. Trains of biphasic, charge-balanced pulses (anodic and cathodic) are used as stimuli to avoid damage of the tissue. The pulses of either polarity are capable … μs, which is large enough to affect the temporal coding of sounds and hence, potentially, the communication abilities of the CI listener. In the present study, two recently proposed models of electric stimulation of the AN [1,2] were considered in terms of their efficacy to predict the spike timing … for anodic and cathodic stimulation of the AN of cat [3]. The models' responses to electrical pulses of various shapes [4,5,6] were also analyzed. It was found that, while the models can account for the firing rates in response to various biphasic pulse shapes, they fail to correctly describe the timing …

  12. Basic Auditory Processing Deficits in Dyslexia: Systematic Review of the Behavioral and Event-Related Potential/Field Evidence

    Science.gov (United States)

    Hämäläinen, Jarmo A.; Salminen, Hanne K.; Leppänen, Paavo H. T.

    2013-01-01

    A review of research that uses behavioral, electroencephalographic, and/or magnetoencephalographic methods to investigate auditory processing deficits in individuals with dyslexia is presented. Findings show that measures of frequency, rise time, and duration discrimination as well as amplitude modulation and frequency modulation detection were…

  13. Event-related potentials and secondary task performance during simulated driving.

    Science.gov (United States)

    Wester, A E; Böcker, K B E; Volkerts, E R; Verster, J C; Kenemans, J L

    2008-01-01

    Inattention and distraction account for a substantial number of traffic accidents. Therefore, we examined the impact of secondary task performance (an auditory oddball task) on a primary driving task (lane keeping). Twenty healthy participants performed two 20-min tests in the Divided Attention Steering Simulator (DASS). The visual secondary task of the DASS was replaced by an auditory oddball task to allow recording of brain activity. The driving task and the secondary (distracting) oddball task were presented in isolation and simultaneously, to assess their mutual interference. In addition to performance measures (lane keeping in the primary driving task and reaction speed in the secondary oddball task), brain activity, i.e. event-related potentials (ERPs), was recorded. Mean performance on the driving test and the secondary oddball task did not differ between isolated and simultaneous performance, although reaction time variability in the oddball task increased when both tasks were performed together. Analysis of brain activity indicated that the ERP amplitude related to the secondary task (the P3a) was significantly reduced when the task was performed simultaneously with the driving test. This study shows that when a simple secondary task is performed during driving, mean performance of both tasks is unaffected, but analysis of brain activity reveals reduced cortical processing of the irrelevant, potentially distracting stimuli from the secondary task.

  14. Early access to lexical-level phonological representations of Mandarin word-forms : evidence from auditory N1 habituation

    NARCIS (Netherlands)

    Yue, Jinxing; Alter, Kai; Howard, David; Bastiaanse, Roelien

    2017-01-01

    An auditory habituation design was used to investigate whether lexical-level phonological representations in the brain can be rapidly accessed after the onset of a spoken word. We studied the N1 component of the auditory event-related electrical potential, and measured the amplitude decrements of N1

  15. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder

    Directory of Open Access Journals (Sweden)

    Yones Lotfi

    2016-03-01

    Background: This study assessed the relationship between working memory capacity and auditory stream segregation, using the concurrent minimum audible angle, in children with a diagnosed auditory processing disorder (APD). Methods: The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children (age, 9–11 years) with APD diagnosed according to the subtests of the multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests in the 2 groups. Results: The group with APD had significantly lower scores than the typically developing subjects in both auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle at the most frontal reference location (0° azimuth), and weaker negative correlations at the most lateral reference location (60° azimuth), in the children with APD. Conclusion: The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD, suggesting that lower working memory capacity in children with APD may underlie their inability to segregate and group incoming information.

  16. Short- and long-term habituation of auditory event-related potentials in the rat [v1; ref status: indexed, http://f1000r.es/1l3]

    Directory of Open Access Journals (Sweden)

    Kestutis Gurevicius

    2013-09-01

    An auditory oddball paradigm in humans generates a long-duration cortical negative potential, often referred to as mismatch negativity. Similar negativity has been documented in monkeys and cats, but it is controversial whether mismatch negativity also exists in awake rodents. To this end, we recorded cortical and hippocampal evoked responses in rats during alert immobility under a typical passive oddball paradigm that yields mismatch negativity in humans. The standard stimulus was a 9 kHz tone and the deviant either a 7 or an 11 kHz tone in the first condition. We found no evidence of a sustained potential shift when comparing evoked responses to standard and deviant stimuli. Instead, we found repetition-induced attenuation of the P60 component of the combined evoked response in the cortex, but not in the hippocampus. The attenuation extended over three days of recording and disappeared after 20 intervening days of rest. Reversal of the standard and deviant tones resulted in a robust enhancement of the N40 component not only in the cortex but also in the hippocampus; responses to standard and deviant stimuli were affected similarly. Finally, we tested the effect of scopolamine in this paradigm. Scopolamine attenuated the cortical N40 and P60 as well as the hippocampal P60 components, but had no specific effect on the deviant response. We conclude that in an oddball paradigm the rat demonstrates repetition-induced attenuation of mid-latency responses, which resembles attenuation of the N1 component of the human auditory evoked potential, but no mismatch negativity.
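The passive oddball stream used in studies like this one is easy to sketch: mostly standards with rare, unpredictable deviants. The 10% deviant rate and the no-consecutive-deviants constraint below are common choices assumed for illustration, not parameters reported in the abstract.

```python
import random

def oddball_sequence(n_trials, standard_hz, deviant_hz, p_dev=0.1, seed=1):
    """Tone sequence for a passive oddball paradigm: rare deviants among
    standards, with no two deviants in a row (a common constraint)."""
    rng = random.Random(seed)
    seq, last_was_deviant = [], False
    for _ in range(n_trials):
        if not last_was_deviant and rng.random() < p_dev:
            seq.append(deviant_hz)
            last_was_deviant = True
        else:
            seq.append(standard_hz)
            last_was_deviant = False
    return seq

seq = oddball_sequence(500, standard_hz=9000, deviant_hz=11000)
print(seq.count(11000) / len(seq))   # close to the nominal deviant rate
```

Averaging the evoked responses separately over the standard and deviant positions of such a sequence is what yields the standard/deviant comparison described in the abstract.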

  17. Bayesian Modeling of the Dynamics of Phase Modulations and their Application to Auditory Evoked Responses at Different Loudness Scales

    Directory of Open Access Journals (Sweden)

    Zeinab Mortezapouraghdam

    2016-01-01

    We study the effect of long-term habituation signatures of auditory selective attention reflected in the instantaneous phase information of auditory event-related potentials (ERPs) at four distinct stimulus levels: 60 dB SPL, 70 dB SPL, 80 dB SPL, and 90 dB SPL. The analysis is based on the single-trial level. The effect of habituation can be observed in terms of changes (jitter) in the instantaneous phase information of the ERPs. In particular, the absence of habituation is correlated with a consistently high phase synchronization over ERP trials. We estimate the changes in phase concentration over trials using a Bayesian approach, in which the phase is modeled as being drawn from a von Mises distribution with a concentration parameter that varies smoothly over trials. The smoothness assumption reflects the fact that habituation is a gradual process. We differentiate between different stimuli based on the relative changes and absolute values of the concentration parameter estimated with the proposed Bayesian model.
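The phase-concentration idea can be sketched without the full Bayesian smoothing machinery: the mean resultant length R of the single-trial phases measures synchronization, and a standard approximation maps R to the von Mises concentration κ. The simulated phases below are illustrative; the study itself estimates a trial-varying κ under a smoothness prior.

```python
import cmath
import math
import random

def mean_resultant_length(phases):
    """R in [0, 1]: across-trial phase synchronization (1 = identical phases)."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

def kappa_approx(r):
    """Standard piecewise approximation to the von Mises concentration
    kappa as a function of the mean resultant length R."""
    if r < 0.53:
        return 2 * r + r ** 3 + 5 * r ** 5 / 6
    if r < 0.85:
        return -0.4 + 1.39 * r + 0.43 / (1 - r)
    return 1 / (r ** 3 - 4 * r ** 2 + 3 * r)

rng = random.Random(0)
early = [rng.gauss(0.0, 0.2) for _ in range(200)]  # tightly phase-locked trials
late = [rng.gauss(0.0, 1.2) for _ in range(200)]   # habituated: jittered phases

print(kappa_approx(mean_resultant_length(early)))  # high concentration
print(kappa_approx(mean_resultant_length(late)))   # low concentration
```

Growing phase jitter over trials (habituation) thus shows up directly as a falling κ, which is the quantity the Bayesian model tracks smoothly across trials.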

  18. Noise Equally Degrades Central Auditory Processing in 2- and 4-Year-Old Children.

    Science.gov (United States)

    Niemitalo-Haapola, Elina; Haapala, Sini; Kujala, Teija; Raappana, Antti; Kujala, Tiia; Jansson-Verkasalo, Eira

    2017-08-16

    The aim of this study was to investigate developmental and noise-induced changes in central auditory processing indexed by event-related potentials in typically developing children. P1, N2, and N4 responses as well as mismatch negativities (MMNs) were recorded for standard syllables and for consonant, frequency, intensity, vowel, and vowel duration changes in silent and noisy conditions in the same 14 children at the ages of 2 and 4 years. The P1 and N2 latencies decreased and the N2, N4, and MMN amplitudes increased with development of the children. The amplitude changes were strongest at frontal electrodes. At both ages, background noise decreased the P1 amplitude, increased the N2 amplitude, and shortened the N4 latency. The noise-induced amplitude changes of P1, N2, and N4 were strongest frontally. Furthermore, background noise degraded the MMN. At both ages, the MMN was significantly elicited only by the consonant change and, at the age of 4 years, also by the vowel duration change during noise. Developmental changes indexing maturation of central auditory processing were found in every response studied. Noise degraded sound encoding and echoic memory and impaired auditory discrimination at both ages; the older children were as vulnerable to the impact of noise as the younger children. https://doi.org/10.23641/asha.5233939

  19. Blind Source Separation of Event-Related EEG/MEG.

    Science.gov (United States)

    Metsomaa, Johanna; Sarvas, Jukka; Ilmoniemi, Risto Juhani

    2017-09-01

    Blind source separation (BSS) can be used to decompose complex electroencephalography (EEG) or magnetoencephalography (MEG) data into simpler components based on statistical assumptions, without using a physical model. Applications include brain-computer interfaces, artifact removal, and identifying parallel neural processes. We address the issue of applying BSS to event-related responses, which is challenging because the data are nonstationary. We introduce a new BSS approach called momentary-uncorrelated component analysis (MUCA), which is tailored for event-related multitrial data. The method is based on approximate joint diagonalization of multiple covariance matrices estimated from the data at separate latencies. We further show how to extend the methodology to autocovariance matrices and how to apply BSS methods suitable for piecewise stationary data to event-related responses. We compared several BSS approaches using simulated EEG as well as measured somatosensory and transcranial magnetic stimulation (TMS) evoked EEG. Among the compared methods, MUCA was the most tolerant to noise, TMS artifacts, and other challenges in the data. With measured somatosensory data, over half of the estimated components were found to be similar by MUCA and independent component analysis. MUCA was also stable when tested with several input datasets. MUCA is based on simple assumptions, and the results suggest that it is robust with nonideal data. Event-related responses and BSS are valuable and popular tools in neuroscience; correctly designed BSS is an efficient way of identifying artifactual and neural processes from nonstationary event-related data.
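The core operation behind MUCA, joint diagonalization of covariance matrices estimated at different latencies, can be illustrated in the exactly solvable two-matrix case: the generalized eigenvectors of the pencil (C1, C2) diagonalize both matrices by congruence. With more than two matrices only approximate joint diagonalization is possible; this 2×2 sketch shows the principle and is not the authors' algorithm.

```python
import math

def joint_diagonalizer_2x2(c1, c2):
    """Generalized eigenvectors of C2 v = lam * C1 v for symmetric 2x2
    matrices (C1 positive definite). The eigenvectors diagonalize both
    matrices by congruence: v_i^T C v_j = 0 for i != j."""
    det = c1[0][0] * c1[1][1] - c1[0][1] * c1[1][0]
    inv = [[c1[1][1] / det, -c1[0][1] / det],
           [-c1[1][0] / det, c1[0][0] / det]]
    # m = inv(C1) @ C2; its eigenvectors solve the generalized problem
    m = [[sum(inv[i][k] * c2[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    tr = m[0][0] + m[1][1]
    dt = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(max(tr * tr / 4 - dt, 0.0))
    vecs = []
    for lam in (tr / 2 + disc, tr / 2 - disc):
        v = (m[0][1], lam - m[0][0])
        if abs(v[0]) + abs(v[1]) < 1e-12:   # degenerate row: use the other one
            v = (lam - m[1][1], m[1][0])
        n = math.hypot(*v)
        vecs.append((v[0] / n, v[1] / n))
    return vecs

def bilinear(u, c, v):
    """u^T C v for a 2x2 matrix C."""
    return sum(u[i] * c[i][j] * v[j] for i in range(2) for j in range(2))

c1 = [[2.0, 1.0], [1.0, 2.0]]   # toy covariance at latency 1
c2 = [[3.0, 0.0], [0.0, 1.0]]   # toy covariance at latency 2
v1, v2 = joint_diagonalizer_2x2(c1, c2)
print(bilinear(v1, c1, v2), bilinear(v1, c2, v2))  # both ~0: jointly diagonal
```

Stacking such eigenvectors as rows of an un-mixing matrix is the sense in which joint diagonalization recovers components that are uncorrelated at every latency considered.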

  20. Deficient multisensory integration in schizophrenia: an event-related potential study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Maes, Jan Pieter; Van Gool, Arthur R; Sitskoorn, Margriet; Vroomen, Jean

    2013-07-01

    In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. The electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound that was either congruent or incongruent with the video. For the healthy control group, visual information reduced the auditory-evoked N1 if compared to a sound-only condition, and stimulus-congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia. Copyright © 2013. Published by Elsevier B.V.

  1. Cerebral Responses to Vocal Attractiveness and Auditory Hallucinations in Schizophrenia: A Functional MRI Study

    Directory of Open Access Journals (Sweden)

    Michihiko Koeda

    2013-05-01

    Impaired self-monitoring and abnormalities of cognitive bias have been implicated as cognitive mechanisms of hallucination; regions fundamental to these processes, including the inferior frontal gyrus (IFG) and superior temporal gyrus (STG), are abnormally activated in individuals that hallucinate. A recent study showed activation in IFG-STG to be modulated by auditory attractiveness, but no study has investigated whether these IFG-STG activations are impaired in schizophrenia. We aimed to clarify the cerebral function underlying the perception of auditory attractiveness in schizophrenia patients. Cerebral activation was examined in 18 schizophrenia patients and 18 controls performing a Favourability Judgment Task (FJT) and a Gender Differentiation Task (GDT) for pairs of greetings, using event-related functional MRI. A full-factorial analysis revealed that the main effect of task was associated with activation of the left IFG and STG. The main effect of group revealed less activation of the left STG in schizophrenia compared with controls, whereas significantly greater activation in schizophrenia than in controls was revealed at the left middle frontal gyrus (MFG), right temporo-parietal junction (TPJ), right occipital lobe, and right amygdala (p < 0.05, FDR-corrected). A significant positive correlation was observed at the right TPJ and right MFG between cerebral activation under the FJT-minus-GDT contrast and the score for hallucinatory behaviour on the Positive and Negative Symptom Scale. Hypo-activation in the left STG may indicate brain dysfunction in accessing vocal attractiveness in schizophrenia, whereas hyper-activation in the right TPJ and MFG may reflect the process of mentalizing another person's behaviour during auditory hallucination, due to an abnormality of cognitive bias.

  2. Auditory Magnetoencephalographic Frequency-Tagged Responses Mirror the Ongoing Segmentation Processes Underlying Statistical Learning.

    Science.gov (United States)

    Farthouat, Juliane; Franco, Ana; Mary, Alison; Delpouve, Julie; Wens, Vincent; Op de Beeck, Marc; De Tiège, Xavier; Peigneux, Philippe

    2017-03-01

    Humans are highly sensitive to statistical regularities in their environment. This phenomenon, usually referred to as statistical learning, is most often assessed using post-learning behavioural measures that are limited by a lack of sensitivity and do not monitor the temporal dynamics of learning. In the present study, we used magnetoencephalographic frequency-tagged responses to investigate the neural sources and temporal development of the ongoing brain activity that supports the detection of regularities embedded in auditory streams. Participants passively listened to statistical streams in which tones were grouped as triplets, and to random streams in which tones were randomly presented. Results show that during exposure to statistical (vs. random) streams, tritone frequency-related responses reflecting the learning of regularities embedded in the stream increased in the left supplementary motor area and left posterior superior temporal sulcus (pSTS), whereas tone frequency-related responses decreased in the right angular gyrus and right pSTS. Tritone frequency-related responses rapidly developed to reach significance after 3 min of exposure. These results suggest that the incidental extraction of novel regularities is subtended by a gradual shift from rhythmic activity reflecting individual tone succession toward rhythmic activity synchronised with triplet presentation, and that these rhythmic processes are subtended by distinct neural sources.
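Frequency tagging rests on a simple readout: if tones arrive at rate f and hidden triplets therefore at f/3, learning shows up as spectral power emerging at f/3. A single-bin DFT makes this concrete; the synthetic "response" below, with an assumed triplet-locked component, stands in for the MEG signal and is not the study's data.

```python
import cmath
import math

def power_at(signal, freq, fs):
    """Normalized single-bin DFT power of `signal` at `freq` (Hz)."""
    n = len(signal)
    z = sum(s * cmath.exp(-2j * math.pi * freq * k / fs)
            for k, s in enumerate(signal))
    return (abs(z) / n) ** 2

fs, tone_rate = 200.0, 4.0                 # 4 tones/s -> triplet rate of 4/3 Hz
t = [k / fs for k in range(int(30 * fs))]  # 30 s of exposure
# Synthetic response: activity locked to each tone, plus a weaker component
# locked to triplet onsets once the regularity has been learned.
resp = [math.sin(2 * math.pi * tone_rate * x)
        + 0.5 * math.sin(2 * math.pi * tone_rate / 3 * x) for x in t]

print(power_at(resp, tone_rate, fs))       # tone-rate peak (~0.25)
print(power_at(resp, tone_rate / 3, fs))   # triplet-rate peak (~0.0625)
```

Tracking the triplet-rate bin over successive exposure windows is what lets this method monitor the temporal dynamics of learning rather than only its end state.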

  3. Reevaluation of the Amsterdam Inventory for Auditory Disability and Handicap Using Item Response Theory

    Science.gov (United States)

    Hospers, J. Mirjam Boeschen; Smits, Niels; Smits, Cas; Stam, Mariska; Terwee, Caroline B.; Kramer, Sophia E.

    2016-01-01

    Purpose: We reevaluated the psychometric properties of the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1995) using item response theory. Item response theory describes item functioning along an ability continuum. Method: Cross-sectional data from 2,352 adults with and without hearing…

  4. Discrete Event Simulation of Distributed Team Communication

    Science.gov (United States)

    2012-03-22

    performs, and auditory information that is provided through multiple audio devices with speech response. This paper extends previous discrete event workload...2008, pg. 1) notes that “Architecture modeling furnishes abstractions for use in managing complexities, allowing engineers to visualise the proposed

  5. Noise Stress Induces an Epidermal Growth Factor Receptor/Xeroderma Pigmentosum-A Response in the Auditory Nerve.

    Science.gov (United States)

    Guthrie, O'neil W

    2017-03-01

    In response to toxic stressors, cancer cells defend themselves by mobilizing one or more epidermal growth factor receptor (EGFR) cascades that employ xeroderma pigmentosum-A (XPA) to repair damaged genes. Recent experiments discovered that neurons within the auditory nerve exhibit basal levels of EGFR+XPA co-expression. This finding implied that auditory neurons in particular or neurons in general have the capacity to mobilize an EGFR+XPA defense. Therefore, the current study tested the hypothesis that noise stress would alter the expression pattern of EGFR/XPA within the auditory nerve. Design-based stereology was used to quantify the proportion of neurons that expressed EGFR, XPA, and EGFR+XPA with and without noise stress. The results revealed an intricate neuronal response that is suggestive of alterations to both co-expression and individual expression of EGFR and XPA. In both the apical and middle cochlear coils, the noise stress depleted EGFR+XPA expression. Furthermore, there was a reduction in the proportion of neurons that expressed XPA-alone in the middle coils. However, the noise stress caused a significant increase in the proportion of neurons that expressed EGFR-alone in the middle coils. The basal cochlear coils failed to mobilize a significant response to the noise stress. These results suggest that EGFR and XPA might be part of the molecular defense repertoire of the auditory nerve.

  6. Noise Stress Induces an Epidermal Growth Factor Receptor/Xeroderma Pigmentosum–A Response in the Auditory Nerve

    Science.gov (United States)

    Guthrie, O’neil W.

    2017-01-01

    In response to toxic stressors, cancer cells defend themselves by mobilizing one or more epidermal growth factor receptor (EGFR) cascades that employ xeroderma pigmentosum–A (XPA) to repair damaged genes. Recent experiments discovered that neurons within the auditory nerve exhibit basal levels of EGFR+XPA co-expression. This finding implied that auditory neurons in particular or neurons in general have the capacity to mobilize an EGFR+XPA defense. Therefore, the current study tested the hypothesis that noise stress would alter the expression pattern of EGFR/XPA within the auditory nerve. Design-based stereology was used to quantify the proportion of neurons that expressed EGFR, XPA, and EGFR+XPA with and without noise stress. The results revealed an intricate neuronal response that is suggestive of alterations to both co-expression and individual expression of EGFR and XPA. In both the apical and middle cochlear coils, the noise stress depleted EGFR+XPA expression. Furthermore, there was a reduction in the proportion of neurons that expressed XPA-alone in the middle coils. However, the noise stress caused a significant increase in the proportion of neurons that expressed EGFR-alone in the middle coils. The basal cochlear coils failed to mobilize a significant response to the noise stress. These results suggest that EGFR and XPA might be part of the molecular defense repertoire of the auditory nerve. PMID:28056182

  7. EEG phase reset due to auditory attention: an inverse time-scale approach

    International Nuclear Information System (INIS)

    Low, Yin Fen; Strauss, Daniel J

    2009-01-01

    We propose a novel tool to evaluate electroencephalographic (EEG) phase reset due to auditory attention by utilizing, for the first time, an inverse analysis of the instantaneous phase. EEGs were acquired through auditory attention experiments with a maximum entropy stimulation paradigm. We examined single sweeps of the auditory late response (ALR) with the complex continuous wavelet transform. The phase in the frequency band that is associated with auditory attention (6–10 Hz, termed the theta–alpha border) was reset to the mean phase of the averaged EEGs. The inverse transform was applied to reconstruct the phase-modified signal. We found significant enhancement of the N100 wave in the reconstructed signal. Analysis of the phase noise shows the effects of phase jittering on the generation of the N100 wave, implying that a preferred phase is necessary to generate the event-related potential (ERP). Power spectrum analysis shows a remarkable increase of evoked power but little change of total power after stabilizing the phase of the EEGs. Furthermore, resetting the phase at the theta–alpha border of no-attention data to the mean phase of attention data yields a result that resembles the attention data. These results show strong connections between EEGs and ERPs; in particular, we suggest that the presentation of an auditory stimulus triggers the phase reset process at the theta–alpha border, which leads to the emergence of the N100 wave. It is concluded that our study reinforces other studies on the importance of the EEG in ERP genesis.

  8. EEG phase reset due to auditory attention: an inverse time-scale approach.

    Science.gov (United States)

    Low, Yin Fen; Strauss, Daniel J

    2009-08-01

    We propose a novel tool to evaluate electroencephalographic (EEG) phase reset due to auditory attention by utilizing, for the first time, an inverse analysis of the instantaneous phase. EEGs were acquired through auditory attention experiments with a maximum entropy stimulation paradigm. We examined single sweeps of the auditory late response (ALR) with the complex continuous wavelet transform. The phase in the frequency band that is associated with auditory attention (6-10 Hz, termed the theta-alpha border) was reset to the mean phase of the averaged EEGs. The inverse transform was applied to reconstruct the phase-modified signal. We found significant enhancement of the N100 wave in the reconstructed signal. Analysis of the phase noise shows the effects of phase jittering on the generation of the N100 wave, implying that a preferred phase is necessary to generate the event-related potential (ERP). Power spectrum analysis shows a remarkable increase of evoked power but little change of total power after stabilizing the phase of the EEGs. Furthermore, resetting the phase at the theta-alpha border of no-attention data to the mean phase of attention data yields a result that resembles the attention data. These results show strong connections between EEGs and ERPs; in particular, we suggest that the presentation of an auditory stimulus triggers the phase reset process at the theta-alpha border, which leads to the emergence of the N100 wave. It is concluded that our study reinforces other studies on the importance of the EEG in ERP genesis.
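The reconstruction described in this record (extract the band-limited phase, reset it to the across-trial mean phase, invert back to a signal) can be approximated with a Hilbert analytic signal in place of the paper's complex continuous wavelet transform. A simplified sketch on synthetic trials; the sampling rate, band edges, and signal parameters are invented for the example:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def reset_band_phase(trials, fs, band=(6.0, 10.0)):
    """Align each trial's band-limited component to the across-trial
    mean phase, then reconstruct the trial. A Hilbert-based stand-in
    for the paper's wavelet-domain phase reset."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    narrow = filtfilt(b, a, trials, axis=1)       # band-limited part
    analytic = hilbert(narrow, axis=1)
    amp = np.abs(analytic)
    # circular mean phase across trials, per time point
    mean_phase = np.angle(np.mean(np.exp(1j * np.angle(analytic)), axis=0))
    aligned = amp * np.cos(mean_phase)            # every trial gets the mean phase
    # swap the original narrow-band part for the phase-aligned one
    return trials - narrow + aligned

fs = 250
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
# 20 trials of an 8 Hz component with random phase: averages out before reset
trials = np.array([np.sin(2 * np.pi * 8 * t + rng.uniform(0, 2 * np.pi))
                   for _ in range(20)])
evoked_before = np.abs(trials.mean(axis=0)).max()
evoked_after = np.abs(reset_band_phase(trials, fs).mean(axis=0)).max()
```

Phase alignment rescues the averaged (evoked) response, which is the mechanism the authors propose for N100 emergence.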

  9. Phencyclidine Disrupts the Auditory Steady State Response in Rats.

    Directory of Open Access Journals (Sweden)

    Emma Leishman

    The Auditory Steady-State Response (ASSR) in the electroencephalogram (EEG) is usually reduced in schizophrenia (SZ), particularly to 40 Hz stimulation. The gamma frequency ASSR deficit has been attributed to N-methyl-D-aspartate receptor (NMDAR) hypofunction. We tested whether the NMDAR antagonist phencyclidine (PCP) produced similar ASSR deficits in rats. EEG was recorded from awake rats via intracranial electrodes overlaying the auditory cortex and at the vertex of the skull. ASSRs to click trains were recorded at 10, 20, 30, 40, 50, and 55 Hz and measured by ASSR Mean Power (MP) and Phase Locking Factor (PLF). In Experiment 1, the effect of different subcutaneous doses of PCP (1.0, 2.5 and 4.0 mg/kg) on the ASSR in 12 rats was assessed. In Experiment 2, ASSRs were compared in PCP-treated rats and control rats at baseline, after acute injection (5 mg/kg), following two weeks of subchronic, continuous administration (5 mg/kg/day), and one week after drug cessation. Acute administration of PCP increased PLF and MP at frequencies of stimulation below 50 Hz, and decreased responses at higher frequencies at the auditory cortex site. Acute administration had a less pronounced effect at the vertex site, with a reduction of either PLF or MP observed at frequencies above 20 Hz. Acute effects increased in magnitude with higher doses of PCP. Consistent effects were not observed after subchronic PCP administration. These data indicate that acute administration of PCP, an NMDAR antagonist, produces an increase in ASSR synchrony and power at low frequencies of stimulation and a reduction of high-frequency (>40 Hz) ASSR activity in rats. Subchronic, continuous administration of PCP, on the other hand, has little impact on ASSRs. Thus, while ASSRs are highly sensitive to NMDAR antagonists, their translational utility as a cross-species biomarker for NMDAR hypofunction in SZ and other disorders may depend on dose and schedule.
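Mean Power and Phase Locking Factor have standard single-bin spectral definitions, which can be sketched on synthetic trials; the stimulation rate, noise level, and trial count below are arbitrary example values, not those of the experiment:

```python
import numpy as np

def assr_metrics(trials, fs, stim_freq):
    """Per-trial FFT at the stimulation frequency. Mean Power (MP) is
    the average squared magnitude across trials; Phase Locking Factor
    (PLF) is the length of the mean unit phase vector (1 = perfect
    phase locking, ~0 = random phases)."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / fs)
    k = np.argmin(np.abs(freqs - stim_freq))   # bin nearest the stim rate
    spectra = np.fft.rfft(trials, axis=1)[:, k]
    mp = np.mean(np.abs(spectra) ** 2)
    plf = np.abs(np.mean(spectra / np.abs(spectra)))
    return mp, plf

fs, f = 1000, 40.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
# phase-locked 40 Hz response vs. the same response with jittered phase
locked = np.array([np.sin(2 * np.pi * f * t) + rng.normal(0, 1, t.size)
                   for _ in range(50)])
jittered = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                     + rng.normal(0, 1, t.size) for _ in range(50)])
_, plf_locked = assr_metrics(locked, fs, f)
_, plf_jit = assr_metrics(jittered, fs, f)
```

PLF isolates trial-to-trial phase consistency, which is why it can move independently of MP, as in the vertex-site results above.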

  10. Nicotine Receptor Subtype-Specific Effects on Auditory Evoked Oscillations and Potentials

    Science.gov (United States)

    Featherstone, Robert E.; Phillips, Jennifer M.; Thieu, Tony; Ehrlichman, Richard S.; Halene, Tobias B.; Leiser, Steven C.; Christian, Edward; Johnson, Edwin; Lerman, Caryn; Siegel, Steven J.

    2012-01-01

    Background: Individuals with schizophrenia show increased smoking rates which may be due to a beneficial effect of nicotine on cognition and information processing. Decreased amplitude of the P50 and N100 auditory event-related potentials (ERPs) is observed in patients. Both measures show normalization following administration of nicotine. Recent studies identified an association between deficits in auditory evoked gamma oscillations and impaired information processing in schizophrenia, and there is evidence that nicotine normalizes gamma oscillations. Although the role of nicotine receptor subtypes in augmentation of ERPs has received some attention, less is known about how these receptor subtypes regulate the effect of nicotine on evoked gamma activity. Methodology/Principal Findings: We examined the effects of nicotine, the α7 nicotine receptor antagonist methyllycaconitine (MLA), the α4β4/α4β2 nicotine receptor antagonist dihydro-beta-erythroidine (DHβE), and the α4β2 agonist AZD3480 on P20 and N40 amplitude as well as baseline and event-related gamma oscillations in mice, using electrodes in hippocampal CA3. Nicotine increased P20 amplitude, while DHβE blocked nicotine-induced enhancements in P20 amplitude. Conversely, MLA did not alter P20 amplitude either when presented alone or with nicotine. Administration of the α4β2-specific agonist AZD3480 did not alter any aspect of the P20 response, suggesting that DHβE blocks the effects of nicotine through a non-α4β2-receptor-specific mechanism. Nicotine and AZD3480 reduced N40 amplitude, which was blocked by both DHβE and MLA. Finally, nicotine significantly increased event-related gamma, as did AZD3480, while DHβE but not MLA blocked the effect of nicotine on event-related gamma. Conclusions/Significance: These results support findings showing that nicotine-induced augmentation of P20 amplitude occurs via a DHβE-sensitive mechanism, but suggest that this does not occur through activation of α4β2

  11. Top-Down Modulation of Auditory-Motor Integration during Speech Production: The Role of Working Memory.

    Science.gov (United States)

    Guo, Zhiqiang; Wu, Xiuqin; Li, Weifeng; Jones, Jeffery A; Yan, Nan; Sheft, Stanley; Liu, Peng; Liu, Hanjun

    2017-10-25

    Although working memory (WM) is considered an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task that required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. The present study

  12. Effect of Infant Prematurity on Auditory Brainstem Response at Preschool Age

    Directory of Open Access Journals (Sweden)

    Sara Hasani

    2013-03-01

    Introduction: Preterm birth is a risk factor for a number of conditions and therefore requires comprehensive examination. Our study was designed to investigate the impact of preterm birth on the processing of auditory stimuli and on brain structures at the brainstem level at preschool age. Materials and Methods: An auditory brainstem response (ABR) test was performed with low stimulus rates in 60 children aged 4 to 6 years. Thirty subjects had been born following a very preterm or late-preterm labor and 30 control subjects had been born following a full-term labor. Results: Significant differences in the ABR test result were observed in terms of the inter-peak intervals of the I–III and III–V waves, and the absolute latency of the III wave (P

  13. Echoic Memory: Investigation of Its Temporal Resolution by Auditory Offset Cortical Responses

    OpenAIRE

    Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke

    2014-01-01

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temp...

  14. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  15. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    Science.gov (United States)

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  16. A model of auditory nerve responses to electrical stimulation

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian

    Cochlear implants (CI) stimulate the auditory nerve (AN) with a train of symmetric biphasic current pulses comprising a cathodic and an anodic phase. The cathodic phase is intended to depolarize the membrane of the neuron and to initiate an action potential (AP), and the anodic phase to neutral… …-and-fire neuron with two partitions responding individually to anodic and cathodic stimulation. Membrane noise was parameterized based on the reported relative spread of AN neurons. Firing efficiency curves and spike-latency distributions were simulated for monophasic and symmetric biphasic stimulation…
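As a rough illustration of this modelling style, a biphasic pulse can be fed through a plain leaky integrate-and-fire neuron; this is not the paper's two-partition, noise-parameterized model, and all amplitudes and time constants below are invented for the example:

```python
import numpy as np

def lif_response(current, dt=1e-6, tau=0.5e-3, r=1.0, v_thresh=1.0):
    """Minimal leaky integrate-and-fire sketch: integrate an injected
    current trace and report spike times whenever the membrane
    potential crosses threshold (reset to 0 after each spike)."""
    v, spikes = 0.0, []
    for i, amp in enumerate(current):
        v += dt / tau * (-v + r * amp)   # forward-Euler membrane update
        if v >= v_thresh:
            spikes.append(i * dt)
            v = 0.0
    return spikes

dt = 1e-6
# symmetric biphasic pulse: 50 us "depolarizing" phase followed by a
# 50 us opposite-polarity phase (sign conventions simplified)
pulse = np.zeros(1000)
pulse[100:150] = 15.0
pulse[150:200] = -15.0
spikes = lif_response(pulse, dt=dt)
```

Sweeping the pulse amplitude and counting spike probability over noisy repetitions is how firing-efficiency curves like those in the record are typically built up.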

  17. A comparison of auditory brainstem responses across diving bird species

    Science.gov (United States)

    Crowell, Sara E.; Berlin, Alicia; Carr, Catherine E.; Olsen, Glenn H.; Therrien, Ronald E.; Yannuzzi, Sally E.; Ketten, Darlene R.

    2015-01-01

    There is little biological data available for diving birds because many live in hard-to-study, remote habitats. Only one species of diving bird, the black-footed penguin (Spheniscus demersus), has been studied with respect to auditory capabilities (Wever et al., Proc Natl Acad Sci USA 63:676–680, 1969). We, therefore, measured in-air auditory thresholds in ten species of diving birds, using the auditory brainstem response (ABR). The average audiogram obtained for each species followed the U-shape typical of birds and many other animals. All species tested shared a common region of the greatest sensitivity, from 1000 to 3000 Hz, although audiograms differed significantly across species. Thresholds of all duck species tested were more similar to each other than to the two non-duck species tested. The red-throated loon (Gavia stellata) and northern gannet (Morus bassanus) exhibited the highest thresholds, while the lowest thresholds belonged to the duck species, specifically the lesser scaup (Aythya affinis) and ruddy duck (Oxyura jamaicensis). Vocalization parameters were also measured for each species, and showed that with the exception of the common eider (Somateria mollisima), the peak frequency, i.e., the frequency at the greatest intensity, of all species' vocalizations measured here fell between 1000 and 3000 Hz, matching the bandwidth of the most sensitive hearing range.

  18. Biases in Visual, Auditory, and Audiovisual Perception of Space

    Science.gov (United States)

    Odegaard, Brian; Wozny, David R.; Shams, Ladan

    2015-01-01

    Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the

  19. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only
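The Bayesian Causal Inference framework invoked in these two records has a standard form (Körding et al., 2007): compute the posterior probability that both cues arise from one cause, then average the fused and segregated location estimates by that probability. A sketch with illustrative noise parameters, not the values fitted in the study:

```python
import numpy as np

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def bci_localization(x_v, x_a, sig_v=2.0, sig_a=8.0, sig_p=15.0,
                     mu_p=0.0, p_common=0.5):
    """Bayesian causal inference for one visual/auditory measurement
    pair: returns p(common cause | data) and the model-averaged
    visual location estimate."""
    var_v, var_a, var_p = sig_v**2, sig_a**2, sig_p**2
    # likelihood of the pair under a single common cause (C=1)
    denom = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = (np.exp(-0.5 * ((x_v - x_a)**2 * var_p
                              + (x_v - mu_p)**2 * var_a
                              + (x_a - mu_p)**2 * var_v) / denom)
               / (2 * np.pi * np.sqrt(denom)))
    # likelihood under two independent causes (C=2)
    like_c2 = gauss(x_v, mu_p, var_v + var_p) * gauss(x_a, mu_p, var_a + var_p)
    post_c1 = p_common * like_c1 / (p_common * like_c1
                                    + (1 - p_common) * like_c2)
    # precision-weighted estimates: fused (C=1) and vision-alone (C=2)
    s_fused = ((x_v / var_v + x_a / var_a + mu_p / var_p)
               / (1 / var_v + 1 / var_a + 1 / var_p))
    s_v_alone = (x_v / var_v + mu_p / var_p) / (1 / var_v + 1 / var_p)
    # model averaging for the reported visual location
    return post_c1, post_c1 * s_fused + (1 - post_c1) * s_v_alone

p_near, _ = bci_localization(x_v=5.0, x_a=6.0)    # nearby cues
p_far, _ = bci_localization(x_v=5.0, x_a=40.0)    # discrepant cues
```

Nearby cues yield a high common-cause posterior (strong fusion, visual dominance); widely discrepant cues break fusion, which is the mechanism the abstracts use to explain reduced bias under integration.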

  20. Abnormal auditory forward masking pattern in the brainstem response of individuals with Asperger syndrome

    Directory of Open Access Journals (Sweden)

    Johan Källstrand

    2010-05-01

    Johan Källstrand1, Olle Olsson2, Sara Fristedt Nehlstedt1, Mia Ling Sköld1, Sören Nielzén2. 1SensoDetect AB, Lund, Sweden; 2Department of Clinical Neuroscience, Section of Psychiatry, Lund University, Lund, Sweden. Abstract: Abnormal auditory information processing has been reported in individuals with autism spectrum disorders (ASD). In the present study auditory processing was investigated by recording auditory brainstem responses (ABRs) elicited by forward masking in adults diagnosed with Asperger syndrome (AS). Sixteen AS subjects were included in the forward masking experiment and compared to three control groups consisting of healthy individuals (n = 16), schizophrenic patients (n = 16) and attention deficit hyperactivity disorder patients (n = 16), respectively, of matching age and gender. The results showed that the AS subjects exhibited abnormally low activity in the early part of their ABRs that distinctly separated them from the three control groups. Specifically, wave III amplitudes were significantly lower in the AS group than for all the control groups in the forward masking condition (P < 0.005), which was not the case in the baseline condition. Thus, electrophysiological measurements of ABRs to complex sound stimuli (e.g., forward masking) may lead to a better understanding of the underlying neurophysiology of AS. Future studies may further point to specific ABR characteristics in AS individuals that separate them from individuals diagnosed with other neurodevelopmental diseases. Keywords: Asperger syndrome, auditory brainstem response, forward masking, psychoacoustics

  1. ERP evidence that auditory-visual speech facilitates working memory in younger and older adults.

    Science.gov (United States)

    Frtusova, Jana B; Winneke, Axel H; Phillips, Natalie A

    2013-06-01

    Auditory-visual (AV) speech enhances speech perception and facilitates auditory processing, as measured by event-related brain potentials (ERPs). Considering a perspective of shared resources between perceptual and cognitive processes, facilitated speech perception may render more resources available for higher-order functions. This study examined whether AV speech facilitation leads to better working memory (WM) performance in 23 younger and 20 older adults. Participants completed an n-back task (0- to 3-back) under visual-only (V-only), auditory-only (A-only), and AV conditions. The results showed faster responses across all memory loads and improved accuracy in the most demanding conditions (2- and 3-back) during AV compared with unisensory conditions. Older adults benefited from the AV presentation to the same extent as younger adults. WM performance of older adults during the AV presentation did not differ from that of younger adults in the A-only condition, suggesting that an AV presentation can help to counteract some of the age-related WM decline. The ERPs showed a decrease in the auditory N1 amplitude during the AV compared with A-only presentation in older adults, suggesting that the facilitation of perceptual processing becomes especially beneficial with aging. Additionally, the N1 occurred earlier in the AV than in the A-only condition for both age groups. These AV-induced modulations of auditory processing correlated with improvement in certain behavioral and ERP measures of WM. These results support an integrated model between perception and cognition, and suggest that processing speech under AV conditions enhances WM performance of both younger and older adults. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. Functional dissociation between regularity encoding and deviance detection along the auditory hierarchy.

    Science.gov (United States)

    Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles

    2016-02-01

    Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicated that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  3. Evaluation of peripheral auditory pathways and brainstem in obstructive sleep apnea.

    Science.gov (United States)

    Matsumura, Erika; Matas, Carla Gentile; Magliaro, Fernanda Cristina Leite; Pedreño, Raquel Meirelles; Lorenzi-Filho, Geraldo; Sanches, Seisse Gabriela Gandolfi; Carvallo, Renata Mota Mamede

    2016-11-25

    Obstructive sleep apnea causes changes in normal sleep architecture, fragmenting it chronically with intermittent hypoxia, leading to serious health consequences in the long term. It is believed that the occurrence of respiratory events during sleep, such as apnea and hypopnea, can impair the transmission of nerve impulses along the auditory pathway that are highly dependent on the supply of oxygen. However, this association is not well established in the literature. To compare the evaluation of the peripheral auditory pathway and brainstem among individuals with and without obstructive sleep apnea. The sample consisted of 38 adult males, mean age of 35.8 (±7.2), divided into four groups matched for age and Body Mass Index. The groups were classified based on polysomnography as: control (n=10), mild obstructive sleep apnea (n=11), moderate obstructive sleep apnea (n=8) and severe obstructive sleep apnea (n=9). All study subjects denied a history of risk for hearing loss and underwent audiometry, tympanometry, acoustic reflex testing and Brainstem Auditory Evoked Response. Statistical analyses were performed using three-factor ANOVA, two-factor ANOVA, the chi-square test, and Fisher's exact test. The significance level for all tests was 5%. There was no difference between the groups for hearing thresholds, tympanometry and the evaluated Brainstem Auditory Evoked Response parameters. An association was observed between the presence of obstructive sleep apnea and changes in the absolute latency of wave V (p=0.03). There was an association between moderate obstructive sleep apnea and a change in the latency of wave V (p=0.01). The presence of obstructive sleep apnea is associated with changes in nerve conduction of acoustic stimuli in the auditory pathway in the brainstem. The increase in obstructive sleep apnea severity does not promote worsening of responses assessed by audiometry, tympanometry and Brainstem Auditory Evoked Response.

  4. Response properties of neighboring neurons in the auditory midbrain for pure-tone stimulation: a tetrode study.

    Science.gov (United States)

    Seshagiri, Chandran V; Delgutte, Bertrand

    2007-10-01

    The complex anatomical structure of the central nucleus of the inferior colliculus (ICC), the principal auditory nucleus in the midbrain, may provide the basis for functional organization of auditory information. To investigate this organization, we used tetrodes to record from neighboring neurons in the ICC of anesthetized cats and studied the similarities and differences among the responses of these neurons to pure-tone stimuli using widely used physiological characterizations. Consistent with the tonotopic arrangement of neurons in the ICC and reports of a threshold map, we found a high degree of correlation in the best frequencies (BFs) of neighboring neurons. Interaural time difference (ITD) sensitivity, assessed mostly with binaural beats, was also compared: the characteristic phases (CPs) of neighboring neurons revealed a significant correlation. Because the CP is related to the neural mechanisms generating the ITD sensitivity, this result is consistent with segregation of inputs to the ICC from the lateral and medial superior olives.

  5. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Science.gov (United States)

    Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke

    2014-01-01

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r2 for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.
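
The reported latency-ISI relationship (r2 as high as 0.99) is an ordinary linear fit between peak latency and inter-stimulus interval. A minimal sketch of how such a fit and its r2 can be computed; the latency values below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical latency values (NOT the study's data): Off-P50m peak latency
# modelled as a linear function of the click-train ISI.
isi_ms = np.array([2.5, 5.0, 10.0, 20.0, 40.0])        # inter-stimulus intervals
latency_ms = np.array([58.1, 59.4, 61.8, 66.9, 77.5])  # made-up peak latencies

# Ordinary least-squares line and coefficient of determination (r^2).
slope, intercept = np.polyfit(isi_ms, latency_ms, 1)
predicted = slope * isi_ms + intercept
ss_res = np.sum((latency_ms - predicted) ** 2)
ss_tot = np.sum((latency_ms - latency_ms.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope={slope:.3f} ms per ms of ISI, intercept={intercept:.1f} ms, r^2={r_squared:.3f}")
```

An r2 near 1 on such a fit is what licenses the abstract's claim that stored repeat-frequency information accurately determines the offset-response latency.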

  6. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Directory of Open Access Journals (Sweden)

    Makoto Nishihara

    Full Text Available Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r2 for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.

  7. Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.

    Science.gov (United States)

    de Jong, Ritske; Toffanin, Paolo; Harbers, Marten

    2010-01-01

    Frequency tagging has often been used to study intramodal attention, but rarely intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: An attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
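
Frequency tagging works by driving each modality at a distinct stimulation rate and reading out the response amplitude at that rate in the EEG spectrum. A minimal sketch on synthetic data; the tag frequencies, sampling rate, and amplitudes below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

# Synthetic 2 s epoch containing two steady-state responses at assumed tag
# frequencies (40 Hz "auditory", 12 Hz "visual") plus background noise.
fs = 500.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
signal = (1.0 * np.sin(2 * np.pi * 40 * t)      # auditory SSR component
          + 0.5 * np.sin(2 * np.pi * 12 * t)    # visual SSR component
          + 0.2 * rng.standard_normal(t.size))  # background noise

# Amplitude spectrum: |FFT| scaled so a unit-amplitude sinusoid reads ~1.0.
spectrum = np.abs(np.fft.rfft(signal)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def tag_amplitude(f_tag):
    """Amplitude at the FFT bin closest to the tag frequency."""
    return spectrum[np.argmin(np.abs(freqs - f_tag))]

print(f"40 Hz amplitude: {tag_amplitude(40):.2f}")
print(f"12 Hz amplitude: {tag_amplitude(12):.2f}")
```

Because each source is tagged at its own frequency, attention effects on each modality can be read out independently from the same recording, which is what makes the technique suited to divided-attention designs.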

  8. The effects of interstimulus interval on sensory gating and on preattentive auditory memory in the oddball paradigm. Can magnitude of the sensory gating affect preattentive auditory comparison process?

    Science.gov (United States)

    Ermutlu, M Numan; Demiralp, Tamer; Karamürsel, Sacit

    2007-01-22

    P50 and mismatch negativity (MMN) are components of event-related potentials (ERP) reflecting sensory gating and preattentive auditory memory, respectively. Interstimulus interval (ISI) is an important determinant of the amplitudes of these components and of N1. In the present study the interrelation between sensory gating and preattentive auditory sensory memory was investigated as a function of ISI (1.5, 2.5 and 3.5 s) in 15 healthy volunteers. The ISI factor affected the N1 peak amplitude significantly. MMN amplitude at the 2.5 s ISI was significantly smaller compared to the 1.5 and 3.5 s ISIs. The ISI × stimuli interaction on P50 amplitude was statistically significant. P50 amplitudes to deviant stimuli at the 2.5 s ISI were larger than the P50 amplitudes at the other ISIs. The P50 difference (P50d) waveform amplitude correlated significantly with MMN amplitude. The results suggest that: (i) auditory sensory gating could affect preattentive auditory sensory memory by supplying input to the comparator mechanism; (ii) the 2.5 s ISI is important in displaying the relation between sensory gating and preattentive auditory sensory memory.

  9. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback

    Directory of Open Access Journals (Sweden)

    Larson Charles R

    2011-06-01

    Full Text Available Abstract Background The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results Results indicated that the suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Conclusions Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds.

  10. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Larson, Charles R

    2011-06-06

    The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results indicated that the suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds.

  11. Predicting hearing thresholds and occupational hearing loss with multiple-frequency auditory steady-state responses.

    Science.gov (United States)

    Hsu, Ruey-Fen; Ho, Chi-Kung; Lu, Sheng-Nan; Chen, Shun-Sheng

    2010-10-01

    An objective investigation is needed to verify the existence and severity of hearing impairments resulting from work-related, noise-induced hearing loss in arbitration of medicolegal aspects. We investigated the accuracy of multiple-frequency auditory steady-state responses (Mf-ASSRs) in subjects with sensorineural hearing loss (SNHL) with and without occupational noise exposure. Cross-sectional study. Tertiary referral medical centre. Pure-tone audiometry and Mf-ASSRs were recorded in 88 subjects (34 patients had occupational noise-induced hearing loss [NIHL], 36 patients had SNHL without noise exposure, and 18 volunteers were normal controls). Inter- and intragroup comparisons were made. A predicting equation was derived using multiple linear regression analysis. ASSRs and pure-tone thresholds (PTTs) showed a strong correlation for all subjects (r = .77 to .94). The differences between the ASSR and PTT were significantly higher for the NIHL group than for the subjects with non-noise-induced SNHL, indicating that Mf-ASSRs are a tool for objectively evaluating hearing thresholds. Their predictive value may be lower in subjects with occupational hearing loss. Regardless of carrier frequencies, the severity of hearing loss affects the steady-state response. Moreover, the ASSR may assist in detecting noise-induced injury of the auditory pathway. A multiple linear regression equation that accurately predicts thresholds, taking all effect factors into consideration, was derived.
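
The threshold-prediction step described here is a multiple linear regression of pure-tone thresholds on ASSR measurements and group factors. A hedged sketch on synthetic data (the coefficients, sample, and predictors below are invented for illustration and are not the paper's derived equation):

```python
import numpy as np

# Synthetic dataset: 88 subjects with an ASSR threshold and a binary
# noise-exposure indicator, from which a pure-tone threshold is simulated.
rng = np.random.default_rng(1)
n = 88
assr = rng.uniform(10, 90, n)            # ASSR thresholds in dB (invented)
noise_group = rng.integers(0, 2, n)      # 1 = occupational noise exposure
ptt = 0.9 * assr - 8 + 4 * noise_group + rng.normal(0, 3, n)  # simulated PTT

# Ordinary least squares via the normal equations solver.
X = np.column_stack([np.ones(n), assr, noise_group])  # design matrix
coef, *_ = np.linalg.lstsq(X, ptt, rcond=None)
b0, b1, b2 = coef
print(f"PTT = {b0:.1f} + {b1:.2f}*ASSR + {b2:.1f}*noise_exposure")
```

Fitting recovers coefficients close to the simulated ones; in practice each carrier frequency would contribute its own predictor, which is what "taking all effect factors into consideration" implies.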

  12. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    Directory of Open Access Journals (Sweden)

    Jason A Miranda

    Full Text Available Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggest plasticity in the inferior colliculus. Hormone manipulations revealed that these cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.

  13. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    Science.gov (United States)

    Miranda, Jason A; Shepard, Kathryn N; McClintock, Shannon K; Liu, Robert C

    2014-01-01

    Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggest plasticity in the inferior colliculus. Hormone manipulations revealed that these cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.

  14. Cognitive processing in non-communicative patients: what can event-related potentials tell us?

    Directory of Open Access Journals (Sweden)

    Zulay Rosario Lugo

    2016-11-01

    Full Text Available Event-related potentials (ERP) have been proposed to improve the differential diagnosis of non-responsive patients. We investigated the potential of the P300 as a reliable marker of conscious processing in patients with locked-in syndrome (LIS). Eleven chronic LIS patients and ten healthy subjects (HS) listened to a complex-tone auditory oddball paradigm, first in a passive condition (listening to the sounds) and then in an active condition (counting the deviant tones). Seven out of nine HS displayed a P300 waveform in the passive condition and all in the active condition. HS showed statistically significant changes in peak and area amplitude between conditions. Three out of seven LIS patients showed the P300 waveform in the passive condition and five out of seven in the active condition. No changes in peak amplitude and only a significant difference at one electrode in area amplitude were observed in this group between conditions. We conclude that, in spite of keeping full consciousness and intact or nearly intact cortical functions, compared to HS, LIS patients present less reliable results when tested with ERP, specifically in the passive condition. We thus strongly recommend applying ERP paradigms in an active condition when evaluating consciousness in non-responsive patients.

  15. Using auditory steady state responses to outline the functional connectivity in the tinnitus brain.

    Directory of Open Access Journals (Sweden)

    Winfried Schlee

    Full Text Available BACKGROUND: Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most of the tinnitus research has concentrated on the auditory system. However, it was suggested recently that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. METHODS AND FINDINGS: Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe and phase couplings between the anterior cingulum and the right parietal lobe showed significant condition × group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. CONCLUSIONS: To the best of our knowledge, this is the first study that demonstrates the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and a therapeutic intervention able to change this network should result in relief of tinnitus.

  16. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Using Facebook to Reach People Who Experience Auditory Hallucinations.

    Science.gov (United States)

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-06-14

    Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Recruitment netted N=264 total sample over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience auditory hallucinations. 

  18. Using Facebook to Reach People Who Experience Auditory Hallucinations

    Science.gov (United States)

    Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted N=264 total sample over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience

  19. Amplitude by peak interaction but no evidence of auditory mismatch response deficits to frequency change in preschool age children with FASD.

    Science.gov (United States)

    Kabella, Danielle M; Flynn, Lucinda; Peters, Amanda; Kodituwakku, Piyadasa; Stephen, Julia M

    2018-05-24

    Prior studies indicate that the auditory mismatch response is sensitive to early alterations in brain development in multiple developmental disorders. Prenatal alcohol exposure is known to impact early auditory processing. The current study hypothesized alterations in the mismatch response in young children with FASD. Participants in this study were 9 children with a fetal alcohol spectrum disorder and 17 control children (Control) aged 3 to 6 years. Participants underwent MEG and structural MRI scans separately. We compared groups on neurophysiological Mismatch Negativity (MMN) responses to auditory stimuli measured using the auditory oddball paradigm. Frequent (1000 Hz) and rare (1200 Hz) tones were presented at 72 dB. There was no significant group difference in MMN response latency or amplitude represented by the peak located ~200 ms after stimulus presentation in the difference timecourse between frequent and infrequent tones. Examining the timecourses to the frequent and infrequent tones separately, RM-ANOVA with condition (frequent vs. rare), peak (N100m and N200m), and hemisphere as within-subject factors and diagnosis and sex as the between-subject factors showed a significant interaction of peak by diagnosis (p = 0.001), with a pattern of decreased amplitude from N100m to N200m in Control children and the opposite pattern in children with FASD. However, no significant difference was found with the simple effects comparisons. No group differences were found in the response latencies of the rare auditory evoked fields (AEFs). The results indicate that there was no detectable effect of alcohol exposure on the amplitude or latency of the MMNm response to simple tones modulated by frequency change in preschool-age children with FASD. However, while discrimination abilities to simple tones may be intact, early auditory sensory processing revealed by the interaction between N100m and N200m amplitude indicates that auditory sensory processing may be altered in
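
The mismatch response analysed here is the deviant-minus-standard difference timecourse, with the MMN taken as the most negative peak near 200 ms. A minimal sketch on synthetic evoked waveforms (all amplitudes and latencies are invented, not the FASD study's data):

```python
import numpy as np

# Synthetic epoch from -100 to 500 ms at 1 kHz sampling.
fs = 1000.0
t = np.arange(-0.1, 0.5, 1 / fs)

def gauss(center_s, width_s, amp):
    """Gaussian bump standing in for an evoked component."""
    return amp * np.exp(-0.5 * ((t - center_s) / width_s) ** 2)

# Standard response with N100m/N200m-like components; the deviant response
# adds an extra negativity around 200 ms (the mismatch component).
standard = gauss(0.10, 0.02, -2.0) + gauss(0.20, 0.03, 1.0)
deviant = standard + gauss(0.20, 0.03, -3.0)

diff = deviant - standard                 # mismatch difference waveform
peak_idx = np.argmin(diff)                # most negative point
peak_latency_ms = t[peak_idx] * 1000
peak_amp = diff[peak_idx]
print(f"MMN peak: {peak_amp:.1f} (a.u.) at {peak_latency_ms:.0f} ms")
```

Group comparisons like those in the abstract are then run on this peak's amplitude and latency, while the N100m/N200m analysis instead compares the component peaks in the standard and deviant timecourses separately.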

  20. Effectiveness of auditory and tactile crossmodal cues in a dual-task visual and auditory scenario.

    Science.gov (United States)

    Hopkins, Kevin; Kass, Steven J; Blalock, Lisa Durrance; Brill, J Christopher

    2017-05-01

    In this study, we examined how spatially informative auditory and tactile cues affected participants' performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual-auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality. Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

  1. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  2. Auditory-steady-state response reliability in the audiological diagnosis after neonatal hearing screening.

    Science.gov (United States)

    Núñez-Batalla, Faustino; Noriega-Iglesias, Sabel; Guntín-García, Maite; Carro-Fernández, Pilar; Llorente-Pendás, José Luis

    2016-01-01

    Conventional audiometry is the gold standard for quantifying and describing hearing loss. Alternative methods become necessary to assess subjects who are too young to respond reliably. Auditory evoked potentials constitute the most widely used method for determining hearing thresholds objectively; however, the click stimuli typically used are not frequency specific. The advent of the auditory steady-state response (ASSR) allows more frequency-specific threshold determination. The current study describes and compares ASSR, auditory brainstem response (ABR) and conventional behavioural tone audiometry thresholds in a group of infants with various degrees of hearing loss. A comparison was made between ASSR, ABR and behavioural hearing thresholds in 35 infants detected in the neonatal hearing screening program. Mean difference scores (±SD) between ABR and high frequency ABR thresholds were 11.2 dB (±13) and 10.2 dB (±11). Pearson correlations between the ASSR and audiometry thresholds were 0.80 and 0.91 (500 Hz); 0.84 and 0.82 (1000 Hz); 0.85 and 0.84 (2000 Hz); and 0.83 and 0.82 (4000 Hz). The ASSR technique is a valuable extension of the clinical test battery for hearing-impaired children. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.
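
    The agreement statistics reported above (Pearson r per frequency, mean difference ± SD) can be reproduced from paired thresholds. The values below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical paired thresholds (dB HL) for one audiometric frequency.
assr_thresholds = np.array([35.0, 50.0, 65.0, 40.0, 80.0, 55.0, 70.0, 45.0])
behav_thresholds = np.array([30.0, 45.0, 60.0, 35.0, 75.0, 50.0, 70.0, 40.0])

r = np.corrcoef(assr_thresholds, behav_thresholds)[0, 1]  # Pearson correlation
diffs = assr_thresholds - behav_thresholds
mean_diff = diffs.mean()            # mean difference score (dB)
sd_diff = diffs.std(ddof=1)         # its standard deviation
```

    Reporting both the correlation and the mean difference is useful because a high r only shows that the two measures rank subjects similarly; the mean difference reveals any systematic offset between them.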

  3. Brainstem auditory evoked responses in an equine patient population: part I--adult horses.

    Science.gov (United States)

    Aleman, M; Holliday, T A; Nieto, J E; Williams, D C

    2014-01-01

    Brainstem auditory evoked response (BAER) has been an underused diagnostic modality in horses, as evidenced by the few reports on the subject. To describe BAER findings, common clinical signs, and causes of hearing loss in adult horses. Study group, 76 horses; control group, 8 horses. Retrospective. BAER records from the Clinical Neurophysiology Laboratory were reviewed for the years 1982 to 2013. Peak latencies, amplitudes, and interpeak intervals were measured when visible. Horses were grouped under disease categories. Descriptive statistics and a post hoc Bonferroni test were performed. Fifty-seven of 76 horses had BAER deficits. There was no breed or sex predisposition, with the exception of American Paint horses diagnosed with congenital sensorineural deafness. Eighty-six percent (n = 49/57) of the horses were younger than 16 years of age. The most common causes of BAER abnormalities were temporohyoid osteoarthropathy (THO, n = 20/20; abnormalities/total), congenital sensorineural deafness in Paint horses (17/17), multifocal brain disease (13/16), and otitis media/interna (4/4). Auditory loss was bilateral in 74% (n = 42/57) and unilateral in 26% (n = 15/57) of the horses. The most common causes of bilateral auditory loss were sensorineural deafness, THO, and multifocal brain disease, whereas THO and otitis were the most common causes of unilateral deficits. Auditory deficits should be investigated in horses with altered behavior, THO, multifocal brain disease, otitis, and in horses with certain coat and eye color patterns. BAER testing is an objective and noninvasive diagnostic modality to assess auditory function in horses. Copyright © 2014 by the American College of Veterinary Internal Medicine.

  4. Auditory post-processing in a passive listening task is deficient in Alzheimer's disease.

    Science.gov (United States)

    Bender, Stephan; Bluschke, Annet; Dippel, Gabriel; Rupp, André; Weisbrod, Matthias; Thomas, Christine

    2014-01-01

    To investigate whether automatic auditory post-processing is deficient in patients with Alzheimer's disease and is related to sensory gating. Event-related potentials were recorded during a passive listening task to examine the automatic transient storage of auditory information (short click pairs). Patients with Alzheimer's disease were compared to a healthy age-matched control group. A young healthy control group was included to assess effects of physiological aging. A bilateral frontal negativity in combination with deep temporal positivity occurring 500 ms after stimulus offset was reduced in patients with Alzheimer's disease, but was unaffected by physiological aging. Its amplitude correlated with short-term memory capacity, but was independent of sensory gating in healthy elderly controls. Source analysis revealed a dipole pair in the anterior temporal lobes. Results suggest that auditory post-processing is deficient in Alzheimer's disease, but is not typically related to sensory gating. The deficit could neither be explained by physiological aging nor by problems in earlier stages of auditory perception. Correlations with short-term memory capacity and executive control tasks suggested an association with memory encoding and/or overall cognitive control deficits. An auditory late negative wave could represent a marker of auditory working memory encoding deficits in Alzheimer's disease. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  5. Effects of ZNF804A on auditory P300 response in schizophrenia.

    LENUS (Irish Health Repository)

    O'Donoghue, T

    2014-01-01

    The common variant rs1344706 within the zinc-finger protein gene ZNF804A has been strongly implicated in schizophrenia (SZ) susceptibility by a series of recent genetic association studies. Although associated with a pattern of altered neural connectivity, evidence that increased risk is mediated by an effect on cognitive deficits associated with the disorder has been equivocal. This study investigated whether the same ZNF804A risk allele was associated with variation in the P300 auditory-evoked response, a cognitively relevant putative endophenotype for SZ. We compared P300 responses between carriers and noncarriers of the ZNF804A risk allele in Irish patients and controls (n=97). P300 response was observed to vary according to genotype in this sample, such that risk allele carriers showed relatively higher P300 response compared with noncarriers. This finding accords with behavioural data reported by our group and others. It is also consistent with the idea that ZNF804A may have an impact on cortical efficiency, reflected in the higher levels of activation required to achieve comparable behavioural accuracy on the task used.

  6. The effects of interstimulus interval on event-related indices of attention: an auditory selective attention test of perceptual load theory.

    Science.gov (United States)

    Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter

    2008-03-01

    We examined the impact of perceptual load by manipulating interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300 ms), medium (600 ms) and slow (900 ms). Behavioral (accuracy and RT) and electrophysiological measures (Nd, P3b) were observed. In both studies, participants showed poorer accuracy in the fast ISI condition than in the slow, suggesting that ISI impacted task difficulty. However, none of the three measures of processing examined, Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, or false alarms to unattended deviants, was impacted by ISI in the manner predicted by perceptual load theory. The prediction based on perceptual load theory, that there would be more processing of irrelevant stimuli under conditions of low as compared to high perceptual load, was not supported in these auditory studies. Task difficulty/perceptual load impacts the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.

  7. Sub-threshold cross-modal sensory interaction in the thalamus: lemniscal auditory response in the medial geniculate nucleus is modulated by somatosensory stimulation.

    Science.gov (United States)

    Donishi, T; Kimura, A; Imbe, H; Yokoi, I; Kaneoke, Y

    2011-02-03

    Recent studies have highlighted cross-modal sensory modulations in the primary sensory areas of the cortex, suggesting that cross-modal sensory interactions occur at early stages in the hierarchy of sensory processing. Multi-modal sensory inputs from non-lemniscal thalamic nuclei and cortical inputs from the secondary sensory and association areas are considered responsible for these modulations. On the other hand, there is little evidence of cross-modal sensitivities in lemniscal thalamic nuclei. In the present study, we were interested in the possibility that somatosensory stimulation may affect the auditory response in the ventral division (MGV) of the medial geniculate nucleus (MG), a lemniscal thalamic nucleus that is considered to be dedicated to auditory uni-modal processing. Experiments were performed on anesthetized rats. Transcutaneous electrical stimulation of the hindpaw, which is thought to evoke nociception and seems unrelated to auditory processing, modulated unit discharges in response to auditory stimulation (noise bursts). The modulation was observed in the MGV and in non-lemniscal auditory thalamic nuclei such as the dorsal and medial divisions of the MG. The major effect of somatosensory stimulation was suppression. The most robust suppression was induced by electrical stimuli given simultaneously with noise bursts or preceding noise bursts by 10 to 20 ms. The results indicate that the lemniscal (MGV) and non-lemniscal auditory nuclei are subject to somatosensory influence. In everyday experience, intense somatosensory stimuli such as pain interrupt ongoing hearing or interfere with clear recognition of sound. The modulation of the lemniscal auditory response by somatosensory stimulation may underlie such cross-modal disturbance of auditory perception as a form of cross-modal switching of attention. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  8. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    Science.gov (United States)

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another are less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Probing the lifetimes of auditory novelty detection processes.

    Science.gov (United States)

    Pegado, Felipe; Bekinschtein, Tristan; Chausson, Nicolas; Dehaene, Stanislas; Cohen, Laurent; Naccache, Lionel

    2010-08-01

    Auditory novelty detection can be fractionated into multiple cognitive processes associated with their respective neurophysiological signatures. In the present study we used high-density scalp event-related potentials (ERPs) during an active version of the auditory oddball paradigm to explore the lifetimes of these processes by varying the stimulus onset asynchrony (SOA). We observed that early MMN (90-160 ms) decreased when the SOA increased, confirming the evanescence of this echoic memory system. Subsequent neural events including late MMN (160-220 ms) and P3a/P3b components of the P3 complex (240-500 ms) did not decay with SOA, but showed a systematic delay effect supporting a two-stage model of accumulation of evidence. On the basis of these observations, we propose a distinction within the MMN complex of two distinct events: (1) an early, pre-attentive and fast-decaying MMN associated with generators located within superior temporal gyri (STG) and frontal cortex, and (2) a late MMN more resistant to SOA, corresponding to the activation of a distributed cortical network including fronto-parietal regions. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  10. Auditory enhancement of visual memory encoding is driven by emotional content of the auditory material and mediated by superior frontal cortex.

    Science.gov (United States)

    Proverbio, A M; De Benedetto, F

    2018-02-01

    The aim of the present study was to investigate how auditory background interacts with learning and memory. Both facilitatory (e.g., "Mozart effect") and interfering effects of background have been reported, depending on the type of auditory stimulation and on the concurrent cognitive tasks. Here we recorded event-related potentials (ERPs) during face encoding followed by an old/new memory test to investigate the effect of listening to classical music (Čajkovskij, dramatic), environmental sounds (rain) or silence on learning. Participants were 15 healthy non-musician university students. Almost 400 (previously unknown) faces of women and men of various ages were presented. Listening to music during study led to better encoding of faces, as indexed by an increased Anterior Negativity. The FN400 response recorded during the memory test showed a gradient in its amplitude reflecting face familiarity. FN400 was larger to new than to old faces, and to faces studied during rain-sound listening or silence than during music listening. The results indicate that listening to music enhances memory recollection of faces by merging with visual information. A swLORETA analysis showed the main involvement of the Superior Temporal Gyrus (STG) and medial frontal gyrus in the integration of audio-visual information. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Event-related oscillations (EROs) and event-related potentials (ERPs) comparison in facial expression recognition.

    Science.gov (United States)

    Balconi, Michela; Pozzoli, Uberto

    2007-09-01

    The study aims to explore the significance of event-related potentials (ERPs) and event-related brain oscillations (EROs) (delta, theta, alpha, beta, gamma power) in response to emotional (fear, happiness, sadness) compared with neutral faces during the 180-250 ms post-stimulus time interval. The ERP results demonstrated that emotional faces elicited a negative peak at approximately 230 ms (N2). Moreover, EEG measures showed that the motivational significance of the face (emotional vs. neutral) could modulate the amplitude of EROs, but only for some frequency bands (i.e. theta and gamma bands). In a second phase, we assessed the resemblance of the two EEG measures by a regression analysis. It revealed that theta and gamma oscillations largely account for the oscillatory activity at the N2 latency. Finally, a posterior increase in theta power was found for emotional faces.

  12. Maturation of auditory neural processes in autism spectrum disorder — A longitudinal MEG study

    Directory of Open Access Journals (Sweden)

    Russell G. Port

    2016-01-01

    Conclusions: Children with ASD showed perturbed auditory cortex neural activity, as evidenced by M100 latency delays as well as reduced transient gamma-band activity. Despite evidence for maturation of these responses in ASD, the neural abnormalities in ASD persisted across time. Of note, data from the five children who demonstrated “optimal outcome” qualitatively suggest that such clinical improvements may be associated with auditory brain responses intermediate between TD and ASD. These “optimal outcome” results are not statistically significant, however, likely due to the small sample size of this cohort, which is to be expected given the relatively low proportion of “optimal outcome” cases in the ASD population. Thus, further investigations with larger cohorts are needed to determine whether the above auditory response phenotypes have prognostic utility, predictive of clinical outcome.

  13. Automatic hearing loss detection system based on auditory brainstem response

    International Nuclear Information System (INIS)

    Aldonate, J; Mercuri, C; Reta, J; Biurrun, J; Bonell, C; Gentiletti, G; Escobar, S; Acevedo, R

    2007-01-01

    Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns is presented. Preliminary evaluation in adults was satisfactory.

  14. Motor-related signals in the auditory system for listening and learning.

    Science.gov (United States)

    Schneider, David M; Mooney, Richard

    2015-08-01

    In the auditory system, corollary discharge signals are theorized to facilitate normal hearing and the learning of acoustic behaviors, including speech and music. Despite clear evidence of corollary discharge signals in the auditory cortex and their presumed importance for hearing and auditory-guided motor learning, the circuitry and function of corollary discharge signals in the auditory cortex are not well described. In this review, we focus on recent developments in the mouse and songbird that provide insights into the circuitry that transmits corollary discharge signals to the auditory system and the function of these signals in the context of hearing and vocal learning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment

    Directory of Open Access Journals (Sweden)

    Jana B. Frtusova

    2016-04-01

    This study examined the effect of auditory-visual (AV) speech stimuli on working memory in hearing-impaired participants (HIP) in comparison to age- and education-matched normal elderly controls (NEC). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioural results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the HIP group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the HIP group showed a more robust AV benefit; however, the NECs showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the HIP group to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed.

  16. Mental fatigue and impaired response processes: event-related brain potentials in a Go/NoGo task.

    Science.gov (United States)

    Kato, Yuichiro; Endo, Hiroshi; Kizuka, Tomohiro

    2009-05-01

    The effects of mental fatigue on the availability of cognitive resources and associated response-related processes were examined using event-related brain potentials. Subjects performed a Go/NoGo task for 60 min. Reaction time, number of errors, and mental fatigue scores all significantly increased with time spent on the task. The NoGo-P3 amplitude significantly decreased with time on task, but the Go-P3 amplitude was not modulated. The amplitude of error-related negativity (Ne/ERN) also decreased with time on task. These results indicate that mental fatigue attenuates resource allocation and error monitoring for NoGo stimuli. The Go- and NoGo-P3 latencies both increased with time on task, indicative of a delay in stimulus evaluation time due to mental fatigue. NoGo-N2 latency increased with time on task, but NoGo-N2 amplitude was not modulated. The amplitude of the response-locked lateralized readiness potential (LRP) significantly decreased with time on task. Mental fatigue appears to slow down the time course of response inhibition and to impair the intensity of response execution.

  17. Cognitive mechanisms associated with auditory sensory gating

    Science.gov (United States)

    Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.

    2016-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
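
    The paired-stimulus (conditioning/test) paradigm mentioned above is conventionally summarised as an S2/S1 amplitude ratio, where smaller values indicate stronger gating. A minimal sketch with invented P50 amplitudes, not data from this study:

```python
import numpy as np

# Hypothetical P50 amplitudes (microvolts) to the conditioning (S1)
# and test (S2) clicks for five participants.
s1 = np.array([3.2, 4.1, 2.8, 5.0, 3.6])
s2 = np.array([1.1, 1.9, 0.9, 2.4, 1.5])

gating_ratio = s2 / s1        # < 1 means the repeated click is suppressed
gating_diff = s1 - s2         # alternative difference score
mean_ratio = gating_ratio.mean()
```

    The difference score (S1 - S2) is sometimes preferred because the ratio becomes unstable when S1 amplitudes are small.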

  18. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  19. Relative contributions of visual and auditory spatial representations to tactile localization.

    Science.gov (United States)

    Noel, Jean-Paul; Wallace, Mark

    2016-02-01

    Spatial localization of touch is critically dependent upon coordinate transformation between different reference frames, which must ultimately allow for alignment between somatotopic and external representations of space. Although prior work has shown an important role for cues such as body posture in influencing the spatial localization of touch, the relative contributions of the different sensory systems to this process are unknown. In the current study, we had participants perform a tactile temporal order judgment (TOJ) under different body postures and conditions of sensory deprivation. Specifically, participants performed non-speeded judgments about the order of two tactile stimuli presented in rapid succession on their ankles during conditions in which their legs were either uncrossed or crossed (thus bringing somatotopic and external reference frames into conflict). These judgments were made in the absence of 1) visual, 2) auditory, or 3) combined audio-visual spatial information by blindfolding and/or placing participants in an anechoic chamber. As expected, results revealed that tactile temporal acuity was poorer under crossed than uncrossed leg postures. Intriguingly, results also revealed that auditory and audio-visual deprivation exacerbated the difference in tactile temporal acuity between uncrossed and crossed leg postures, an effect not seen for visual-only deprivation. Furthermore, the effects under combined audio-visual deprivation were greater than those seen for auditory deprivation. Collectively, these results indicate that mechanisms governing the alignment between somatotopic and external reference frames extend beyond those imposed by body posture to include spatial features conveyed by the auditory and visual modalities, with a heavier weighting of auditory than visual spatial information. Thus, sensory modalities conveying exteroceptive spatial information contribute to judgments regarding the localization of touch. Copyright © 2016

  20. Cortical evoked potentials to an auditory illusion: binaural beats.

    Science.gov (United States)

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi

    2009-08-01

    To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1000 Hz base frequencies, and compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and with the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as the beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from the left and right ears converges and interacts in the central auditory brainstem pathways to generate beats of neural activity that modulate activity in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low-frequency beats can be recorded from the scalp.
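
    The dichotic stimuli described above are simple to synthesise: an unmodulated tone at the base frequency to one ear and a tone 3 or 6 Hz higher to the other, with no acoustic beat present in either channel. A sketch for the 250 Hz base / 6 Hz beat condition (the sample rate is an assumption):

```python
import numpy as np

fs = 44100                       # sample rate (Hz), assumed
duration = 2.0                   # tone duration (s), as in the study
t = np.arange(int(fs * duration)) / fs

left = np.sin(2 * np.pi * 250 * t)       # base frequency to one ear
right = np.sin(2 * np.pi * 256 * t)      # base + beat frequency to the other
stereo = np.column_stack([left, right])  # dichotic stimulus; no beat in either channel

# Only when the two channels are combined (as the binaural auditory
# pathway effectively does) does a 6 Hz amplitude modulation emerge.
beat_frequency = 256 - 250
```

    Mixing the two channels into one ear would produce an ordinary acoustic beat; keeping them dichotic leaves the beat to be generated centrally, which is precisely the illusion the study probes.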

  1. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but recovered over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and more varied forms of modulation: modulation was shorter lasting, and its effects varied between different acoustic stimuli, including for different male calls, suggesting that modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  2. Left auditory cortex gamma synchronization and auditory hallucination symptoms in schizophrenia

    Directory of Open Access Journals (Sweden)

    Shenton Martha E

    2009-07-01

    Full Text Available Abstract Background Oscillatory electroencephalogram (EEG) abnormalities may reflect neural circuit dysfunction in neuropsychiatric disorders. Previously we have found positive correlations between the phase synchronization of beta and gamma oscillations and hallucination symptoms in schizophrenia patients. These findings suggest that the propensity for hallucinations is associated with an increased tendency for neural circuits in sensory cortex to enter states of oscillatory synchrony. Here we tested this hypothesis by examining whether the 40 Hz auditory steady-state response (ASSR) generated in the left primary auditory cortex is positively correlated with auditory hallucination symptoms in schizophrenia. We also examined whether the 40 Hz ASSR deficit in schizophrenia was associated with cross-frequency interactions. Sixteen healthy control subjects (HC) and 18 chronic schizophrenia patients (SZ) listened to 40 Hz binaural click trains. The EEG was recorded from 60 electrodes and average-referenced offline. A 5-dipole model was fit from the HC grand average ASSR, with 2 pairs of superior temporal dipoles and a deep midline dipole. Time-frequency decomposition was performed on the scalp EEG and source data. Results Phase locking factor (PLF) and evoked power were reduced in SZ at fronto-central electrodes, replicating prior findings. PLF was reduced in SZ for non-homologous right and left hemisphere sources. Left hemisphere source PLF in SZ was positively correlated with auditory hallucination symptoms, and was modulated by delta phase. Furthermore, the correlations between source evoked power and PLF found in HC were reduced in SZ for the left hemisphere sources. Conclusion These findings suggest that differential neural circuit abnormalities may be present in the left and right auditory cortices in schizophrenia.
In addition, they provide further support for the hypothesis that hallucinations are related to cortical hyperexcitability, which is manifested by
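
The phase locking factor reported in this study is, in general form, the magnitude of the mean unit phasor across trials at the stimulation frequency. A toy numpy sketch on simulated epochs (not the paper's data or its exact time-frequency decomposition):

```python
import numpy as np

def phase_locking_factor(trials, freq, fs):
    """Inter-trial phase coherence at one frequency: magnitude of the
    mean unit phasor across trials. 1 = perfectly phase-locked;
    values near 0 = random phase from trial to trial."""
    t = np.arange(trials.shape[1]) / fs
    coeffs = trials @ np.exp(-2j * np.pi * freq * t)  # complex amp/trial
    return np.abs(np.mean(coeffs / np.abs(coeffs)))

rng = np.random.default_rng(0)
fs = 500
t = np.arange(fs) / fs                       # 1 s epochs
# 20 trials of a phase-locked 40 Hz response embedded in noise
locked = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal((20, fs))
plf_locked = phase_locking_factor(locked, 40, fs)
plf_noise = phase_locking_factor(rng.standard_normal((20, fs)), 40, fs)
```

The phase-locked trials yield a PLF near 1, while pure noise yields a small value that shrinks further as the trial count grows.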

  3. Auditory beat stimulation and its effects on cognition and mood states

    Directory of Open Access Journals (Sweden)

    Leila eChaieb

    2015-05-01

    Full Text Available Auditory beat stimulation may be a promising new tool for the manipulation of cognitive processes and the modulation of mood-states. Here we aim to review the literature examining the most current applications of auditory beat stimulation and its targets. We give a brief overview of research on auditory steady-state responses and its relationship to auditory beat stimulation. We have summarized relevant studies investigating the neurophysiological changes related to auditory beat stimulation and how they impact upon the design of appropriate stimulation protocols. Focusing on binaural beat stimulation, we then discuss the role of monaural and binaural beat frequencies in cognition and mood-states, in addition to their efficacy in targeting disease symptoms. We aim to highlight important points concerning stimulation parameters and try to address why there are often contradictory findings with regard to the outcomes of auditory beat stimulation.

  4. Improved Transient Response Estimations in Predicting 40 Hz Auditory Steady-State Response Using Deconvolution Methods

    Directory of Open Access Journals (Sweden)

    Xiaodan Tan

    2017-12-01

    Full Text Available The auditory steady-state response (ASSR) is one of the main clinical approaches for hearing screening and frequency-specific hearing assessment. However, its generation mechanism is still a matter of controversy. In the present study, the linear superposition hypothesis for the generation of ASSRs was investigated by comparing the relationships between the classical 40 Hz ASSR and three synthetic ASSRs obtained from three different templates for the transient auditory evoked potential (AEP). These three AEPs are the traditional AEP at 5 Hz and two 40 Hz AEPs derived from two deconvolution algorithms using stimulus sequences, i.e., continuous loop averaging deconvolution (CLAD) and multi-rate steady-state average deconvolution (MSAD). CLAD requires irregular inter-stimulus intervals (ISIs) in the sequence, while MSAD uses evenly-spaced stimulus sequences with identical ISIs, which mimics the classical 40 Hz ASSR. It has been reported that these reconstructed templates show similar patterns but significant differences in morphology, and distinct frequency characteristics in the synthetic ASSRs. The prediction accuracies of ASSR using these templates show significant differences (p < 0.05) in 45.95, 36.28, and 10.84% of total time points within four cycles of ASSR for the traditional, CLAD, and MSAD templates, respectively, as compared with the classical 40 Hz ASSR, and the ASSR synthesized from the MSAD transient AEP suggests the best similarity. Such similarity was also demonstrated at the individual level, with only MSAD showing no statistically significant difference (Hotelling's T2 test, T2 = 6.96, F = 0.80, p = 0.592) as compared with the classical 40 Hz ASSR. The present results indicate that both stimulation rate and sequencing factor (ISI variation) affect transient AEP reconstructions from steady-state stimulation protocols. Furthermore, both the auditory brainstem response (ABR) and the middle latency response (MLR) are observed to contribute to the composition of the ASSR but
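
The linear superposition hypothesis tested here can be illustrated in a few lines: convolve a transient-AEP-like template with a periodic 40 Hz stimulus train and the overlapping responses sum into a steady-state waveform. The template below is a toy damped oscillation, not a measured or deconvolved AEP:

```python
import numpy as np

def synthesize_assr(aep_template, rate_hz, fs, dur_s):
    """Linear-superposition prediction: the steady-state response is
    the convolution of a transient AEP template with the click train."""
    n = int(dur_s * fs)
    train = np.zeros(n)
    train[::int(fs / rate_hz)] = 1.0         # periodic clicks at rate_hz
    return np.convolve(train, aep_template)[:n]

fs = 1000
t = np.arange(int(0.1 * fs)) / fs
# toy damped oscillation standing in for a reconstructed transient AEP
template = np.exp(-t / 0.03) * np.sin(2 * np.pi * 25 * t)
assr = synthesize_assr(template, rate_hz=40, fs=fs, dur_s=1.0)
```

After the onset transient, the synthetic waveform is exactly periodic at the stimulation rate, which is the signature the study compares against the recorded 40 Hz ASSR.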

  5. The influence of visual information on auditory processing in individuals with congenital amusia: An ERP study.

    Science.gov (United States)

    Lu, Xuejing; Ho, Hao T; Sun, Yanan; Johnson, Blake W; Thompson, William F

    2016-07-15

    While most normal hearing individuals can readily use prosodic information in spoken language to interpret the moods and feelings of conversational partners, people with congenital amusia report that they often rely more on facial expressions and gestures, a strategy that may compensate for deficits in auditory processing. In this investigation, we used EEG to examine the extent to which individuals with congenital amusia draw upon visual information when making auditory or audio-visual judgments. Event-related potentials (ERP) were elicited by a change in pitch (up or down) between two sequential tones paired with a change in spatial position (up or down) between two visually presented dots. The change in dot position was either congruent or incongruent with the change in pitch. Participants were asked to judge (1) the direction of pitch change while ignoring the visual information (AV implicit task), and (2) whether the auditory and visual changes were congruent (AV explicit task). In the AV implicit task, amusic participants performed significantly worse in the incongruent condition than control participants. ERPs showed an enhanced N2-P3 response to incongruent AV pairings for control participants, but not for amusic participants. However when participants were explicitly directed to detect AV congruency, both groups exhibited enhanced N2-P3 responses to incongruent AV pairings. These findings indicate that amusics are capable of extracting information from both modalities in an AV task, but are biased to rely on visual information when it is available, presumably because they have learned that auditory information is unreliable. We conclude that amusic individuals implicitly draw upon visual information when judging auditory information, even though they have the capacity to explicitly recognize conflicts between these two sensory channels. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Auditory brainstem response latency in forward masking, a marker of sensory deficits in listeners with normal hearing thresholds

    DEFF Research Database (Denmark)

    Mehraei, Golbarg; Paredes Gallardo, Andreu; Shinn-Cunningham, Barbara G.

    2017-01-01

    In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels...... and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave......-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low...

  7. Response inhibition in borderline personality disorder: event-related potentials in a Go/Nogo task.

    Science.gov (United States)

    Ruchsow, M; Groen, G; Kiefer, M; Buchheim, A; Walter, H; Martius, P; Reiter, M; Hermle, L; Spitzer, M; Ebert, D; Falkenstein, M

    2008-01-01

    Borderline personality disorder (BPD) has been related to a dysfunction of anterior cingulate cortex, amygdala, and prefrontal cortex and has been associated clinically with impulsivity, affective instability, and significant interpersonal distress. We examined 17 patients with BPD and 17 age-, sex-, and education-matched control participants with no history of Axis I or II psychopathology using event-related potentials (ERPs). Participants performed a hybrid flanker-Go/Nogo task while multichannel EEG was recorded. Our study focused on two ERP components: the Nogo-N2 and the Nogo-P3, which have been discussed in the context of response inhibition and response conflict. ERPs were computed on correct Go trials (button press) and correct Nogo trials (no button press), separately. Groups did not differ with regard to the Nogo-N2. However, BPD patients showed reduced Nogo-P3 amplitudes. For the entire group (n = 34) we found a negative correlation with the Barratt Impulsiveness Scale (BIS-10) and Beck's Depression Inventory (BDI). The present study is the first to examine Nogo-N2 and Nogo-P3 in BPD and provides further evidence for impaired response inhibition in BPD patients.

  8. The absence of later wave components in auditory brainstem responses as an initial manifestation of type 2 Gaucher disease.

    Science.gov (United States)

    Okubo, Yusuke; Goto, Masahiro; Sakakibara, Hiroshi; Terakawa, Toshiro; Kaneko, Takashi; Miyama, Sahoko

    2014-12-01

    Type 2 Gaucher disease is the most severe neuronopathic form of Gaucher disease and is characterized by severe neurodegeneration with brainstem involvement and organ failure. Prediction or diagnosis of type 2 Gaucher disease before the development of neurological complications is difficult. A 5-month-old female infant presented with deafness without other neurological abnormalities. Auditory brainstem response analysis revealed the absence of later wave components. Two months later, muscular rigidity became evident, followed by the development of opisthotonus and strabismus. Rapid progression of splenomegaly led to the diagnosis of type 2 Gaucher disease. Abnormal auditory brainstem response findings may already exist before the development of severe brainstem abnormalities such as muscular rigidity and opisthotonus in type 2 Gaucher disease. When patients present with deafness and absent later wave components on auditory brainstem response, type 2 Gaucher disease should be included in the differential diagnosis even in the absence of other neurological abnormalities. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming

    Science.gov (United States)

    Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.

    2013-01-01

    Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the

  10. EEG signatures accompanying auditory figure-ground segregation.

    Science.gov (United States)

    Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P; Szerafin, Ágnes; Shinn-Cunningham, Barbara G; Winkler, István

    2016-11-01

    In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased - i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. Copyright © 2016. Published by Elsevier Inc.
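
A stimulus of the kind described (random tone clouds in which a fixed subset of tones repeats from chord to chord and forms the "figure") can be sketched as follows; the pool size, chord duration, and coherence value here are illustrative, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def figure_ground_stimulus(n_chords=20, n_background=8, coherence=4,
                           fs=16000, chord_dur=0.05):
    """Tone-cloud stimulus: each chord holds fresh random background
    tones, while `coherence` fixed tones repeat in every chord and
    form the figure."""
    pool = np.logspace(np.log10(200), np.log10(5000), 60)
    figure = rng.choice(pool, coherence, replace=False)
    t = np.arange(int(fs * chord_dur)) / fs
    chords = []
    for _ in range(n_chords):
        background = rng.choice(pool, n_background, replace=False)
        tones = np.concatenate([figure, background])
        chord = sum(np.sin(2 * np.pi * f * t) for f in tones)
        chords.append(chord / len(tones))    # keep amplitude in [-1, 1]
    return np.concatenate(chords), figure

stim, figure_freqs = figure_ground_stimulus()
```

Raising `coherence` or `n_chords` corresponds to the figure-coherence and duration manipulations that improved detection in the study.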

  11. EEG signatures accompanying auditory figure-ground segregation

    Science.gov (United States)

    Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P.; Szerafin, Ágnes; Shinn-Cunningham, Barbara; Winkler, István

    2017-01-01

    In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased – i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. PMID:27421185

  12. Neural dynamics underlying attentional orienting to auditory representations in short-term memory.

    Science.gov (United States)

    Backer, Kristina C; Binns, Malcolm A; Alain, Claude

    2015-01-21

    Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors 0270-6474/15/351307-12$15.00/0.

  13. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills for the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks, in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant
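
Discrimination thresholds such as the DLF are commonly estimated with adaptive staircases; the abstract does not state which procedure this study used, so the following 2-down-1-up simulation is purely illustrative of the general method, with an assumed psychometric function for the simulated listener:

```python
import numpy as np

def two_down_one_up(true_jnd, n_trials=300, start=50.0, factor=1.2):
    """Simulated 2-down-1-up adaptive track (converges near the 70.7%
    correct point of the psychometric function). Returns the mean of
    the last eight reversal values as the threshold estimate."""
    rng = np.random.default_rng(42)
    delta, streak, going_down, reversals = start, 0, True, []
    for _ in range(n_trials):
        # assumed psychometric function for the simulated listener
        p_correct = 1.0 - 0.5 * np.exp(-(delta / true_jnd) ** 2)
        if rng.random() < p_correct:
            streak += 1
            if streak == 2:                  # two in a row -> harder
                if not going_down:
                    reversals.append(delta)
                going_down, streak = True, 0
                delta /= factor
        else:                                # one miss -> easier
            if going_down:
                reversals.append(delta)
            going_down, streak = False, 0
            delta *= factor
    return float(np.mean(reversals[-8:]))

estimate = two_down_one_up(true_jnd=5.0)     # simulated DLF of 5 Hz
```

The track converges near (somewhat below) the simulated just-noticeable difference, at the 70.7% point of the assumed psychometric function.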

  14. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse...... response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were...... recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  15. Auditory Brainstem Response Wave Amplitude Characteristics as a Diagnostic Tool in Children with Speech Delay with Unknown Causes

    Directory of Open Access Journals (Sweden)

    Susan Abadi

    2016-09-01

    Full Text Available Speech delay with an unknown cause is a problem among children. This diagnosis is the last differential diagnosis after observing normal findings in routine hearing tests. The present study was undertaken to determine whether auditory brainstem responses to click stimuli are different between normally developing children and children suffering from delayed speech with unknown causes. In this cross-sectional study, we compared click auditory brainstem responses between 261 children who were clinically diagnosed with delayed speech with unknown causes based on normal routine auditory test findings and neurological examinations and had >12 months of speech delay (case group) and 261 age- and sex-matched normally developing children (control group). Our results indicated that the case group exhibited significantly higher wave amplitude responses to click stimuli (waves I, III, and V) than did the control group (P=0.001). These amplitudes were significantly reduced after 1 year (P=0.001); however, they were still significantly higher than those of the control group (P=0.001). The significant differences were seen regardless of the age and the sex of the participants. There were no statistically significant differences between the 2 groups considering the latency of waves I, III, and V. In conclusion, the higher amplitudes of waves I, III, and V, which were observed in the auditory brainstem responses to click stimuli among the patients with speech delay with unknown causes, might be used as a diagnostic tool to track patients’ improvement after treatment.

  16. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    Science.gov (United States)

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant
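
Model-based encoding of the kind described amounts to regressing measured responses on stimulus features and evaluating predictions on held-out sounds. A minimal simulated example (two features standing in for pitch height and salience; the voxel data and weights are synthetic, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 simulated sounds, each described by two pitch features
# (stand-ins for perceived height and salience)
features = rng.standard_normal((100, 2))
true_w = np.array([1.5, -0.8])               # hidden voxel tuning
voxel = features @ true_w + 0.1 * rng.standard_normal(100)

# Encoding: fit weights on 80 training sounds, predict the held-out 20
w, *_ = np.linalg.lstsq(features[:80], voxel[:80], rcond=None)
pred = features[80:] @ w
r = np.corrcoef(pred, voxel[80:])[0, 1]      # prediction accuracy
```

Comparing such held-out prediction accuracy across competing feature models is what identifies which representation (e.g., height plus salience versus either alone) best explains a voxel's responses.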

  17. Multiple time scales of adaptation in auditory cortex neurons.

    Science.gov (United States)

    Ulanovsky, Nachum; Las, Liora; Farkas, Dina; Nelken, Israel

    2004-11-17

    Neurons in primary auditory cortex (A1) of cats show strong stimulus-specific adaptation (SSA). In probabilistic settings, in which one stimulus is common and another is rare, responses to common sounds adapt more strongly than responses to rare sounds. This SSA could be a correlate of auditory sensory memory at the level of single A1 neurons. Here we studied adaptation in A1 neurons, using three different probabilistic designs. We showed that SSA has several time scales concurrently, spanning many orders of magnitude, from hundreds of milliseconds to tens of seconds. Similar time scales are known for the auditory memory span of humans, as measured both psychophysically and using evoked potentials. A simple model, with linear dependence on both short-term and long-term stimulus history, provided a good fit to A1 responses. Auditory thalamus neurons did not show SSA, and their responses were poorly fitted by the same model. In addition, SSA increased the proportion of failures in the responses of A1 neurons to the adapting stimulus. Finally, SSA caused a bias in the neuronal responses to unbiased stimuli, enhancing the responses to eccentric stimuli. Therefore, we propose that a major function of SSA in A1 neurons is to encode auditory sensory memory on multiple time scales. This SSA might play a role in stream segregation and in binding of auditory objects over many time scales, a property that is crucial for processing of natural auditory scenes in cats and of speech and music in humans.
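
The "simple model, with linear dependence on both short-term and long-term stimulus history" can be caricatured with two exponential traces per tone; the time constants, weights, and inter-stimulus interval below are illustrative, not the fitted values from the study:

```python
import numpy as np

def ssa_responses(stim_seq, taus=(1.0, 20.0), weights=(0.5, 0.3), isi=0.3):
    """Toy adaptation model: the response to each tone is 1 minus a
    weighted sum of two exponential traces (short and long time scale)
    tracking how often that same tone occurred recently."""
    decay = np.exp(-isi / np.asarray(taus, float))
    traces = {tone: np.zeros(len(taus)) for tone in set(stim_seq)}
    out = []
    for tone in stim_seq:
        out.append(max(0.0, 1.0 - float(np.dot(weights, traces[tone]))))
        for t in traces:                     # all traces decay with time
            traces[t] = traces[t] * decay
        traces[tone] = traces[tone] + (1.0 - decay)  # bump presented tone
    return np.array(out)

# Oddball sequence: tone 'a' common (90%), tone 'b' rare (10%)
rng = np.random.default_rng(3)
seq = rng.choice(['a', 'b'], size=500, p=[0.9, 0.1])
resp = ssa_responses(list(seq))
common = resp[seq == 'a'][50:].mean()
rare = resp[seq == 'b'][5:].mean()
```

Because each trace is stimulus-specific, the common tone adapts strongly while the rare tone stays near full response, reproducing the SSA pattern described above.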

  18. Development of a Comprehensive Blast-Related Auditory Injury Database (BRAID)

    Science.gov (United States)

    2016-05-01

    [Record fragments only; the abstract did not survive extraction. Recoverable content: table notes on servicemembers included in the Blast-Related Auditory Injury Database (covering training injuries, accidents, and other noncombat injuries; a footnote of 3,452 injuries; ototoxic medications and chemical exposures; recreational noise exposure; and other forms of temporary and persistent threshold shift), plus a partial reference: "...AC, Vecchiotti M, Kujawa SG, Lee DJ, Quesnel AM. Otologic outcomes after blast injury: The Boston Marathon experience. Otol Neurotol. 2014; 35(10".]

  19. Touching lips and hearing fingers: effector-specific congruency between tactile and auditory stimulation modulates N1 amplitude and alpha desynchronization.

    Science.gov (United States)

    Shen, Guannan; Meltzoff, Andrew N; Marshall, Peter J

    2018-01-01

    Understanding the interactions between audition and sensorimotor processes is of theoretical importance, particularly in relation to speech processing. Although one current focus in this area is on interactions between auditory perception and the motor system, there has been less research on connections between the auditory and somatosensory modalities. The current study addresses this omission by examining specific auditory-tactile interactions in the context of speech and non-speech sound production. Electroencephalography was used to examine brain responses when participants were presented with speech syllables (a bilabial sound /pa/ and a non-labial sound /ka/) or finger-snapping sounds that were simultaneously paired with tactile stimulation of either the lower lip or the right middle finger. Analyses focused on the sensory-evoked N1 in the event-related potential and the extent of alpha band desynchronization elicited by the stimuli. N1 amplitude over fronto-central sites was significantly enhanced when the bilabial /pa/ sound was paired with tactile lip stimulation and when the finger-snapping sound was paired with tactile stimulation of the finger. Post-stimulus alpha desynchronization at central sites was also enhanced when the /pa/ sound was accompanied by tactile stimulation of the lip. These novel findings indicate that neural aspects of somatosensory-auditory interactions are influenced by the congruency between the location of the bodily touch and the bodily origin of a perceived sound.

  20. Auditory/visual distance estimation: accuracy and variability

    Directory of Open Access Journals (Sweden)

    Paul Wallace Anderson

    2014-10-01

    Full Text Available Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources when compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener’s perspective at each distance in the impulse response measurement setup presented on a large HDTV monitor. Listeners were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.

  1. Towards an auditory account of speech rhythm: application of a model of the auditory 'primal sketch' to two multi-language corpora.

    Science.gov (United States)

    Lee, Christopher S; Todd, Neil P McAngus

    2004-10-01

    The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language acquisition and processing. Several questions remain, however, as to what exactly characterizes the rhythmic differences, how they are manifested at an auditory/acoustic level and how listeners, whether adult native speakers or young infants, process rhythmic information. In this paper it is proposed that the crucial determinant of rhythmic organization is the variability in the auditory prominence of phonetic events. In order to test this auditory prominence hypothesis, an auditory model is run on two multi-language data-sets, the first consisting of matched pairs of English and French sentences, and the second consisting of French, Italian, English and Dutch sentences. The model is based on a theory of the auditory primal sketch, and generates a primitive representation of an acoustic signal (the rhythmogram) which yields a crude segmentation of the speech signal and assigns prominence values to the obtained sequence of events. Its performance is compared with that of several recently proposed phonetic measures of vocalic and consonantal variability.

  2. Event-related brain potentials reflect traces of echoic memory in humans.

    Science.gov (United States)

    Winkler, I; Reinikainen, K; Näätänen, R

    1993-04-01

    In sequences of identical auditory stimuli, infrequent deviant stimuli elicit an event-related brain potential component called mismatch negativity (MMN). MMN is presumed to reflect the existence of a memory trace of the frequent stimulus at the moment of presentation of the infrequent stimulus. This hypothesis was tested by applying the recognition-masking paradigm of cognitive psychology. In this paradigm, a masking sound presented shortly before or after a test stimulus diminishes the recognition memory of this stimulus, the more so the shorter the interval between the test and masking stimuli. This interval was varied in the present study. It was found that the MMN amplitude strongly correlated with the subject's ability to discriminate between frequent and infrequent stimuli. This result strongly suggests that MMN provides a measure for a trace of sensory memory, and further, that with MMN, this memory can be studied without performance-related distortions.

  3. Impact of Aging on the Auditory System and Related Cognitive Functions: A Narrative Review

    Directory of Open Access Journals (Sweden)

    Dona M. P. Jayakody

    2018-03-01

    Full Text Available Age-related hearing loss (ARHL), or presbycusis, is a chronic health condition that affects approximately one-third of the world's population. The peripheral and central hearing alterations associated with age-related hearing loss have a profound impact on perception of verbal and non-verbal auditory stimuli. The high prevalence of hearing loss in older adults corresponds to the increased frequency of dementia in this population. Therefore, researchers have focused their attention on age-related central effects that occur independent of the peripheral hearing loss, as well as central effects of peripheral hearing loss and its association with cognitive decline and dementia. Here we review the current evidence for the age-related changes of the peripheral and central auditory system and the relationship between hearing loss and pathological cognitive decline and dementia. Furthermore, there is a paucity of evidence on the relationship between ARHL and established biomarkers of Alzheimer's disease, the most common cause of dementia; such studies are critical for establishing any causal relationship between dementia and ARHL. While this narrative review examines the pathophysiological alterations in both the peripheral and central auditory system and their clinical implications, the question remains unanswered whether hearing loss causes cognitive impairment or vice versa.

  4. A Neural Circuit for Auditory Dominance over Visual Perception.

    Science.gov (United States)

    Song, You-Hyang; Kim, Jae-Hyun; Jeong, Hye-Won; Choi, Ilsong; Jeong, Daun; Kim, Kwansoo; Lee, Seung-Hee

    2017-02-22

    When conflicts occur during integration of visual and auditory information, one modality often dominates the other, but the underlying neural circuit mechanism remains unclear. Using auditory-visual discrimination tasks for head-fixed mice, we found that audition dominates vision in a process mediated by interaction between inputs from the primary visual (VC) and auditory (AC) cortices in the posterior parietal cortex (PTLp). Co-activation of the VC and AC suppresses VC-induced PTLp responses, leaving AC-induced responses. Furthermore, parvalbumin-positive (PV+) interneurons in the PTLp mainly receive AC inputs, and muscimol inactivation of the PTLp or optogenetic inhibition of its PV+ neurons abolishes auditory dominance in the resolution of cross-modal sensory conflicts without affecting either sensory perception. Conversely, optogenetic activation of PV+ neurons in the PTLp enhances the auditory dominance. Thus, our results demonstrate that AC input-specific feedforward inhibition of VC inputs in the PTLp is responsible for the auditory dominance during cross-modal integration. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Sensitivity of human auditory cortex to rapid frequency modulation revealed by multivariate representational similarity analysis.

    Science.gov (United States)

    Joanisse, Marc F; DeSouza, Diedre D

    2014-01-01

    Functional Magnetic Resonance Imaging (fMRI) was used to investigate the extent, magnitude, and pattern of brain activity in response to rapid frequency-modulated sounds. We examined this by manipulating the direction (rise vs. fall) and the rate (fast vs. slow) of the apparent pitch of iterated rippled noise (IRN) bursts. Acoustic parameters were selected to capture features used in phoneme contrasts; however, the stimuli themselves were not perceived as speech per se. Participants were scanned as they passively listened to sounds in an event-related paradigm. Univariate analyses revealed a greater level and extent of activation in bilateral auditory cortex in response to frequency-modulated sweeps compared to steady-state sounds. This effect was stronger in the left hemisphere. However, no regions showed selectivity for either rate or direction of frequency modulation. In contrast, multivoxel pattern analysis (MVPA) revealed feature-specific encoding for direction of modulation in auditory cortex bilaterally. Moreover, this effect was strongest when analyses were restricted to anatomical regions lying outside Heschl's gyrus. We found no support for feature-specific encoding of frequency modulation rate. Differential findings of modulation rate and direction of modulation are discussed with respect to their relevance to phonetic discrimination.

  6. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech produces transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  7. The Effect of Working Memory Training on Auditory Stream Segregation in Auditory Processing Disorders Children

    OpenAIRE

    Abdollah Moossavi; Saeideh Mehrkian; Yones Lotfi; Soghrat Faghih zadeh; Hamed Adjedi

    2015-01-01

    Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in auditory processing disorders children. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...

  8. Auditory verbal memory and psychosocial symptoms are related in children with idiopathic epilepsy.

    Science.gov (United States)

    Schaffer, Yael; Ben Zeev, Bruria; Cohen, Roni; Shuper, Avinoam; Geva, Ronny

    2015-07-01

    Idiopathic epilepsies are considered to have relatively good prognoses and normal or near-normal developmental outcomes. Nevertheless, accumulating studies demonstrate memory and psychosocial deficits in this population, and the prevalence, severity and relationships between these domains are still not well defined. We aimed to assess memory, psychosocial function, and the relationships between these two domains among children with idiopathic epilepsy syndromes using an extended neuropsychological battery and psychosocial questionnaires. Cognitive abilities, neuropsychological performance, and socioemotional behavior of 33 early adolescent children, diagnosed with idiopathic epilepsy, ages 9-14 years, were assessed and compared with 27 age- and education-matched healthy controls. Compared to controls, patients with stabilized idiopathic epilepsy exhibited significantly higher risks for short-term memory deficits (auditory verbal and visual), working memory deficits, and long-term memory deficits. Severity of auditory verbal memory deficits was related to severity of psychosocial symptoms among the children with epilepsy but not in the healthy controls. Results suggest that deficient auditory verbal memory may be compromising psychosocial functioning in children with idiopathic epilepsy, underscoring that cognitive variables, such as auditory verbal memory, should be assessed and treated in this population to prevent secondary symptoms. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

    To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors. Eighty-five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated. Statistical analysis considered the significance level of 5%. None of the professors indicated absence of noise. Responses were grouped in Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were both the yard and another class, which were classified as high intensity, along with poor acoustics and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory and emotional complaints. There was also no association between referred noise and hearing complaints, although group A showed a higher occurrence of reports of noise nuisance, hearing difficulty, dizziness/vertigo, tinnitus, and earache. There was an association between referred noise and voice alterations, and group NA presented a higher percentage of cases with voice alterations than group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. Hearing complaints were more evident among professors in group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravating factor.

  10. A novel method for extraction of neural response from single channel cochlear implant auditory evoked potentials.

    Science.gov (United States)

    Sinkiewicz, Daniel; Friesen, Lendra; Ghoraani, Behnaz

    2017-02-01

    Cortical auditory evoked potentials (CAEPs) are used to evaluate cochlear implant (CI) patients' auditory pathways, but the CI device produces an electrical artifact that obscures the relevant information in the neural response. Currently there are multiple methods that attempt to recover the neural response from the contaminated CAEP, but there is no gold standard that can quantitatively confirm their effectiveness. To address this crucial shortcoming, we develop a wavelet-based method to quantify the amount of artifact energy in the neural response. In addition, a novel technique for extracting the neural response from single-channel CAEPs is proposed. The new method uses matching pursuit (MP) based feature extraction to represent the contaminated CAEP in a feature space, and support vector machines (SVM) to classify the components as normal hearing (NH) or artifact. The NH components are combined to recover the neural response without artifact energy, as verified using the evaluation tool. Although it needs some further evaluation, this approach is a promising method of electrical artifact removal from CAEPs. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
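
The matching pursuit step mentioned in this abstract is, in its generic form, a greedy decomposition of a signal over a dictionary of unit-norm atoms. The following is a bare-bones sketch of generic MP only, not the authors' implementation; the identity dictionary in the test is a toy stand-in for a real wavelet/Gabor dictionary:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit over unit-norm atoms (rows of `dictionary`).

    Each iteration picks the atom most correlated with the residual,
    accumulates its coefficient, and subtracts its contribution.
    Returns the coefficient vector and the final residual.
    """
    residual = np.asarray(signal, dtype=float).copy()
    coeffs = np.zeros(len(dictionary))
    for _ in range(n_iter):
        corr = dictionary @ residual        # inner product with each atom
        k = int(np.argmax(np.abs(corr)))    # best-matching atom
        coeffs[k] += corr[k]
        residual = residual - corr[k] * dictionary[k]
    return coeffs, residual
```

In a pipeline like the one described, the selected atoms' parameters and coefficients would then serve as the feature vector passed to the SVM classifier.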

  11. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Science.gov (United States)

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

  12. Exploration of auditory P50 gating in schizophrenia by way of difference waves

    DEFF Research Database (Denmark)

    Arnfred, Sidse M

    2006-01-01

    Electroencephalographic measures of information processing encompass both mid-latency evoked potentials, like the pre-attentive auditory P50 potential, and a host of later, more cognitive components, like the P300 and N400. Difference waves have mostly been employed in studies of later event-related potentials, but here this method, along with low-frequency filtering, is applied exploratorily to auditory P50 gating data previously analyzed in the standard format (reported in Am J Psychiatry 2003, 160:2236-8). The exploration was motivated by the observation during visual peak detection that the AEP waveform...

  13. [Effect of Electroacupuncture on Expression of Catechol-O-methyltransferase in the Inferior Colliculus and Auditory Cortex in Age-related Hearing Loss Guinea Pigs].

    Science.gov (United States)

    Liu, Shu-Yun; Deng, Li-Qiang; Yang, Ye; Yin, Ze-Deng

    2017-04-25

    To observe the expression of catechol-O-methyltransferase (COMT) in the inferior colliculus and auditory cortex of guinea pigs with age-related hearing loss (AHL) induced by D-galactose, so as to explore the possible mechanism by which electroacupuncture (EA) prevents AHL. Thirty 3-month-old guinea pigs were randomly divided into a control group, a model group and an EA group (n = 10 in each group), and ten 18-month-old guinea pigs were allocated as an elderly group. The AHL model was established by subcutaneous injection of D-galactose. EA was applied to bilateral "Yifeng" (SJ 17) and "Tinggong" (SI 19) for 15 min in the EA group while modeling, once daily for 6 weeks. After treatment, the latency of auditory brainstem response (ABR) wave Ⅲ was measured by a brain-stem evoked potentiometer. The expressions of COMT in the inferior colliculus and auditory cortex were detected by Western blot. Compared with the control group, the latencies of ABR wave Ⅲ were significantly prolonged and the expressions of COMT in the inferior colliculus and auditory cortex were significantly decreased in the model group and the elderly group. These findings suggest that EA can delay age-related hearing loss in guinea pigs; up-regulation of COMT expression in the inferior colliculus and auditory cortex may contribute to this effect.

  14. Examining Age-Related Differences in Auditory Attention Control Using a Task-Switching Procedure

    OpenAIRE

    Vera Lawo; Iring Koch

    2014-01-01

    Objectives. Using a novel task-switching variant of dichotic selective listening, we examined age-related differences in the ability to intentionally switch auditory attention between 2 speakers defined by their sex.

  15. Cardiac autonomic responses induced by mental tasks and the influence of musical auditory stimulation.

    Science.gov (United States)

    Barbosa, Juliana Cristina; Guida, Heraldo L; Fontes, Anne M G; Antonio, Ana M S; de Abreu, Luiz Carlos; Barnabé, Viviani; Marcomini, Renata S; Vanderlei, Luiz Carlos M; da Silva, Meire L; Valenti, Vitor E

    2014-08-01

    We investigated the acute effects of musical auditory stimulation on cardiac autonomic responses to a mental task in 28 healthy men (18-22 years old). In the control protocol (no music), the volunteers remained at seated rest for 10 min and the test was applied for five minutes. After the end of test the subjects remained seated for five more minutes. In the music protocol, the volunteers remained at seated rest for 10 min, then were exposed to music for 10 min; the test was then applied over five minutes, and the subjects remained seated for five more minutes after the test. In the control and music protocols the time domain and frequency domain indices of heart rate variability remained unchanged before, during and after the test. We found that musical auditory stimulation with baroque music did not influence cardiac autonomic responses to the mental task. Copyright © 2014 Elsevier Ltd. All rights reserved.
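
The time-domain indices referred to in this record are standard statistics computed over interbeat (RR) intervals: SDNN, RMSSD, and pNN50 are the conventional measures. A minimal sketch of their computation; the function name and sample intervals are illustrative, not the study's data:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV indices from a series of RR intervals (ms):
    SDNN  - standard deviation of all intervals (overall variability)
    RMSSD - root mean square of successive differences (vagally mediated)
    pNN50 - percentage of successive differences exceeding 50 ms
    """
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    sdnn = float(np.std(rr, ddof=1))
    rmssd = float(np.sqrt(np.mean(diffs ** 2)))
    pnn50 = float(100.0 * np.mean(np.abs(diffs) > 50))
    return sdnn, rmssd, pnn50
```

An "unchanged before, during and after the test" result, as reported here, would correspond to these indices staying statistically stable across the three measurement windows.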

  16. Event-related fields evoked by vocal response inhibition: a comparison of younger and older adults.

    Science.gov (United States)

    Castro-Meneses, Leidy J; Johnson, Blake W; Sowman, Paul F

    2016-06-01

    The current study examined event-related fields (ERFs) evoked by vocal response inhibition in a stimulus-selective stop-signal task. We compared inhibition-related ERFs across a younger and an older group of adults. Behavioural results revealed that stop-signal reaction times (RTs), go-RTs, ignore-stop RTs and failed stop RTs were longer in the older, relative to the younger group by 38, 123, 149 and 116 ms, respectively. The amplitude of the ERF M2 peak (approximately 200 ms after the stop signal) evoked on successful stop trials was larger compared to that evoked on both failed stop and ignore-stop trials. The M4 peak (approximately 450 ms after stop signal) was of larger amplitude in both successful and failed stops compared to ignore-stop trials. In the older group, the M2, M3 and M4 peaks were smaller in amplitude and peaked later in time (by 24, 50 and 76 ms, respectively). We demonstrate that vocal response inhibition-related ERFs exhibit a similar temporal evolution to those previously described for manual response inhibition: an early peak at 200 ms (i.e. M2) that differentiates successful from failed stopping, and a later peak (i.e. M4) that is consistent with a neural marker of response checking and error processing. Across groups, our data support a more general decline of stimulus processing speed with age.
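
The stop-signal reaction times compared above cannot be observed directly; under the horse-race model they are commonly estimated from go RTs and the tracked stop-signal delay (SSD). A minimal sketch of the classic mean-method estimate (function name and values illustrative; this is the generic procedure, not necessarily the authors' exact analysis):

```python
import statistics

def ssrt_mean_method(go_rts_ms, mean_ssd_ms):
    """Estimate stop-signal reaction time (SSRT) with the mean method:
    under the horse-race model, SSRT ~ mean go RT - mean stop-signal delay,
    assuming the SSD staircase converged on ~50% successful stopping."""
    return statistics.mean(go_rts_ms) - mean_ssd_ms
```

For example, a mean go RT of 500 ms with a mean SSD of 250 ms yields an estimated SSRT of 250 ms.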

  17. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  18. Matrix metalloproteinase-9 deletion rescues auditory evoked potential habituation deficit in a mouse model of Fragile X Syndrome.

    Science.gov (United States)

    Lovelace, Jonathan W; Wen, Teresa H; Reinhard, Sarah; Hsu, Mike S; Sidhu, Harpreet; Ethell, Iryna M; Binder, Devin K; Razak, Khaleel A

    2016-05-01

    Sensory processing deficits are common in autism spectrum disorders, but the underlying mechanisms are unclear. Fragile X Syndrome (FXS) is a leading genetic cause of intellectual disability and autism. Electrophysiological responses in humans with FXS show reduced habituation with sound repetition, and this deficit may underlie auditory hypersensitivity in FXS. Our previous study in Fmr1 knockout (KO) mice revealed an unusually long state of increased sound-driven excitability in auditory cortical neurons, suggesting that cortical responses to repeated sounds may exhibit abnormal habituation, as in humans with FXS. Here, we tested this prediction by comparing cortical event-related potentials (ERPs) recorded from wildtype (WT) and Fmr1 KO mice. We report a repetition-rate-dependent reduction in habituation of N1 amplitude in Fmr1 KO mice and show that matrix metalloproteinase-9 (MMP-9), one of the known FMRP targets, contributes to the reduced ERP habituation. Our studies demonstrate a significant up-regulation of MMP-9 levels in the auditory cortex of adult Fmr1 KO mice, whereas a genetic deletion of Mmp-9 reverses ERP habituation deficits in Fmr1 KO mice. Although the N1 amplitude of Mmp-9/Fmr1 DKO recordings was larger than WT and KO recordings, the habituation of ERPs in Mmp-9/Fmr1 DKO mice is similar to WT mice, implicating MMP-9 as a potential target for reversing sensory processing deficits in FXS. Together these data establish ERP habituation as a translation-relevant, physiological pre-clinical marker of auditory processing deficits in FXS and suggest that abnormal MMP-9 regulation is a mechanism underlying auditory hypersensitivity in FXS. Fragile X Syndrome (FXS) is the leading known genetic cause of autism spectrum disorders. Individuals with FXS show symptoms of auditory hypersensitivity. These symptoms may arise due to sustained neural responses to repeated sounds, but the underlying mechanisms remain unclear. For the first time, this study shows deficits...
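
The habituation deficit described here is often quantified as the ratio of N1 amplitude on repeated presentations to the amplitude on the first presentation, with values near 1 indicating little habituation. A toy sketch of that generic index (the function and the amplitude values are illustrative, not the study's analysis):

```python
import numpy as np

def habituation_index(n1_amplitudes):
    """Ratio of mean N1 amplitude on repeated presentations to the
    amplitude on the first presentation. 1.0 = no habituation."""
    amps = np.asarray(n1_amplitudes, dtype=float)
    return float(np.mean(amps[1:]) / amps[0])
```

On such an index, the reported phenotype would appear as KO values sitting closer to 1.0 than WT values at the affected repetition rates.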

  19. Auditory and visual sustained attention in Down syndrome.

    Science.gov (United States)

    Faught, Gayle G; Conners, Frances A; Himmelberger, Zachary M

    2016-01-01

    Sustained attention (SA) is important to task performance and development of higher functions. It emerges as a separable component of attention during preschool and shows incremental improvements during this stage of development. The current study investigated if auditory and visual SA match developmental level or are particular challenges for youth with DS. Further, we sought to determine if there were modality effects in SA that could predict those seen in short-term memory (STM). We compared youth with DS to typically developing youth matched for nonverbal mental age and receptive vocabulary. Groups completed auditory and visual sustained attention to response tests (SARTs) and STM tasks. Results indicated groups performed similarly on both SARTs, even over varying cognitive ability. Further, within groups participants performed similarly on auditory and visual SARTs, thus SA could not predict modality effects in STM. However, SA did generally predict a significant portion of unique variance in groups' STM. Ultimately, results suggested both auditory and visual SA match developmental level in DS. Further, SA generally predicts STM, though SA does not necessarily predict the pattern of poor auditory relative to visual STM characteristic of DS. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Membrane potential dynamics of populations of cortical neurons during auditory streaming

    Science.gov (United States)

    Farley, Brandon J.

    2015-01-01

    How a mixture of acoustic sources is perceptually organized into discrete auditory objects remains unclear. One current hypothesis postulates that perceptual segregation of different sources is related to the spatiotemporal separation of cortical responses induced by each acoustic source or stream. In the present study, the dynamics of subthreshold membrane potential activity were measured across the entire tonotopic axis of the rodent primary auditory cortex during the auditory streaming paradigm using voltage-sensitive dye imaging. Consistent with the proposed hypothesis, we observed enhanced spatiotemporal segregation of cortical responses to alternating tone sequences as their frequency separation or presentation rate was increased, both manipulations known to promote stream segregation. However, across most streaming paradigm conditions tested, a substantial cortical region maintaining a response to both tones coexisted with more peripheral cortical regions responding more selectively to one of them. We propose that these coexisting subthreshold representation types could provide neural substrates to support the flexible switching between the integrated and segregated streaming percepts. PMID:26269558

  1. Loud Music Exposure and Cochlear Synaptopathy in Young Adults: Isolated Auditory Brainstem Response Effects but No Perceptual Consequences.

    Science.gov (United States)

    Grose, John H; Buss, Emily; Hall, Joseph W

    2017-01-01

    The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did (n = 31) or did not (n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.

  2. Transitional Probabilities Are Prioritized over Stimulus/Pattern Probabilities in Auditory Deviance Detection: Memory Basis for Predictive Sound Processing.

    Science.gov (United States)

    Mittag, Maria; Takegata, Rika; Winkler, István

    2016-09-14

    Representations encoding the probabilities of auditory events do not directly support predictive processing. In contrast, information about the probability with which a given sound follows another (transitional probability) allows predictions of upcoming sounds. We tested whether behavioral and cortical auditory deviance detection (the latter indexed by the mismatch negativity event-related potential) relies on probabilities of sound patterns or on transitional probabilities. We presented healthy adult volunteers with three types of rare tone-triplets among frequent standard triplets of high-low-high (H-L-H) or L-H-L pitch structure: proximity deviant (H-H-H/L-L-L), reversal deviant (L-H-L/H-L-H), and first-tone deviant (L-L-H/H-H-L). If deviance detection was based on pattern probability, reversal and first-tone deviants should be detected with similar latency because both differ from the standard at the first pattern position. If deviance detection was based on transitional probabilities, then reversal deviants should be the most difficult to detect because, unlike the other two deviants, they contain no low-probability pitch transitions. The data clearly showed that both behavioral and cortical auditory deviance detection uses transitional probabilities. Thus, the memory traces underlying cortical deviance detection may provide a link between stimulus probability-based change/novelty detectors operating at lower levels of the auditory system and higher auditory cognitive functions that involve predictive processing. Our research presents the first definite evidence for the auditory system prioritizing transitional probabilities over probabilities of individual sensory events. Forming representations for transitional probabilities paves the way for predictions of upcoming sounds. Several recent theories suggest that predictive processing provides the general basis of human perception, including important auditory functions, such as auditory scene analysis. Our...
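
The distinction drawn here between stimulus/pattern probabilities and transitional probabilities can be made concrete: a first-order transitional probability is P(next sound | current sound), estimated from adjacent-pair counts. A small illustrative sketch (the function name and the toy high/low pitch sequence are made up):

```python
from collections import Counter

def transition_probs(seq):
    """First-order transitional probabilities P(b | a), estimated from
    adjacent-pair (bigram) counts in a sequence of sound labels."""
    pair_counts = Counter(zip(seq, seq[1:]))
    first_counts = Counter(seq[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}
```

In the triplet design above, a reversal deviant contains only high-probability transitions under such a model, which is why it is predicted (and found) to be the hardest to detect.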

  3. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment.

    Science.gov (United States)

    Frtusova, Jana B; Phillips, Natalie A

    2016-01-01

This study examined the effect of auditory-visual (AV) speech stimuli on working memory in older adults with poorer hearing (PH) in comparison to age- and education-matched older adults with better hearing (BH). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioral results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the PH group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the PH group showed a more robust AV benefit; however, the BH group showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the PH group to counteract the demanding auditory processing, to the extent that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed.

  4. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  5. Brain responses and looking behaviour during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life.

    Directory of Open Access Journals (Sweden)

    Elena V Kushnerenko

    2013-07-01

Full Text Available The use of visual cues during the processing of audiovisual speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6-9 months to 14-16 months of age. We used eye-tracking to examine whether individual differences in visual attention during audiovisual processing of speech in 6 to 9 month old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6-9 month old infants also participated in an event-related potential (ERP) audiovisual task within the same experimental session. Language development was then followed up at the age of 14-16 months, using two measures of language development, the Preschool Language Scale (PLS) and the Oxford Communicative Development Inventory (CDI). The results show that those infants who were less efficient in auditory speech processing at the age of 6-9 months had lower receptive language scores at 14-16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audio-visually incongruent stimuli at 6-9 months were both significantly associated with language development at 14-16 months. These findings add to the understanding of individual differences in neural signatures of audiovisual processing and associated looking behaviour in infants.

  6. Test-retest reliability of speech-evoked auditory brainstem response in healthy children at a low sensation level.

    Science.gov (United States)

    Zakaria, Mohd Normani; Jalaei, Bahram

    2017-11-01

Auditory brainstem responses evoked by complex stimuli such as speech syllables have been studied in normal subjects and subjects with compromised auditory functions. The stability of speech-evoked auditory brainstem response (speech-ABR) when tested over time has been reported, but the literature is limited. The present study was carried out to determine the test-retest reliability of speech-ABR in healthy children at a low sensation level. Seventeen healthy children (6 boys, 11 girls) aged from 5 to 9 years (mean = 6.8 ± 3.3 years) were tested in two sessions separated by a 3-month period. The stimulus used was a 40-ms syllable /da/ presented at 30 dB sensation level. As revealed by paired t-test and intra-class correlation (ICC) analyses, peak latencies, peak amplitudes and composite onset measures of speech-ABR were found to be highly replicable. Compared to other parameters, higher ICC values were noted for peak latencies of speech-ABR. The present study was the first to report the test-retest reliability of speech-ABR recorded at low stimulation levels in healthy children. Due to its good stability, it can be used as an objective indicator for assessing the effectiveness of auditory rehabilitation in hearing-impaired children in future studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Auditory evoked potentials to abrupt pitch and timbre change of complex tones: electrophysiological evidence of 'streaming'?

    Science.gov (United States)

    Jones, S J; Longe, O; Vaz Pato, M

    1998-03-01

    Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.

  8. Atypical auditory refractory periods in children from lower socio-economic status backgrounds: ERP evidence for a role of selective attention.

    Science.gov (United States)

    Stevens, Courtney; Paulsen, David; Yasen, Alia; Neville, Helen

    2015-02-01

    Previous neuroimaging studies indicate that lower socio-economic status (SES) is associated with reduced effects of selective attention on auditory processing. Here, we investigated whether lower SES is also associated with differences in a stimulus-driven aspect of auditory processing: the neural refractory period, or reduced amplitude response at faster rates of stimulus presentation. Thirty-two children aged 3 to 8 years participated, and were divided into two SES groups based on maternal education. Event-related brain potentials were recorded to probe stimuli presented at interstimulus intervals (ISIs) of 200, 500, or 1000 ms. These probes were superimposed on story narratives when attended and ignored, permitting a simultaneous experimental manipulation of selective attention. Results indicated that group differences in refractory periods differed as a function of attention condition. Children from higher SES backgrounds showed full neural recovery by 500 ms for attended stimuli, but required at least 1000 ms for unattended stimuli. In contrast, children from lower SES backgrounds showed similar refractory effects to attended and unattended stimuli, with full neural recovery by 500 ms. Thus, in higher SES children only, one functional consequence of selective attention is attenuation of the response to unattended stimuli, particularly at rapid ISIs, altering basic properties of the auditory refractory period. Together, these data indicate that differences in selective attention impact basic aspects of auditory processing in children from lower SES backgrounds. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Bilateral theta-burst magnetic stimulation influence on event-related brain potentials.

    Science.gov (United States)

    Pinto, Nuno; Duarte, Marta; Gonçalves, Helena; Silva, Ricardo; Gama, Jorge; Pato, Maria Vaz

    2018-01-01

Theta-burst stimulation (TBS) can be a non-invasive technique to modulate cognitive functions, with promising therapeutic potential, but with some contradictory results. Event-related potentials are used as a marker of brain deterioration and can be used to evaluate TBS-related cognitive performance, but their use remains scant. This study aimed to examine bilateral inhibitory and excitatory TBS effects upon the neurocognitive performance of young healthy volunteers, using auditory P300 results. In a double-blind sham-controlled study, 51 healthy volunteers were randomly assigned to five different groups: two submitted to either excitatory (iTBS) or inhibitory (cTBS) stimulation over the left dorsolateral prefrontal cortex (DLPFC), two others actively stimulated over the right DLPFC, and finally a sham stimulation group. An oddball-based auditory P300 was performed just before a single session of iTBS, cTBS or sham stimulation and repeated immediately after. P300 mean latency comparison between the pre- and post-TBS stimulation stages revealed significantly faster post-stimulation latencies only when iTBS was performed on the left hemisphere (p = 0.003). Right and left hemisphere cTBS significantly delayed P300 latency (right: p = 0.026; left: p < 0.001). Multiple comparisons for N200 showed slower latencies after iTBS over the right hemisphere. No significant difference was found in amplitude variation. TBS appears to effectively influence the neural networks involved in P300 formation, but the effects seem distinct for iTBS vs cTBS and for the right or the left hemisphere. P300 evoked potentials can be an effective and practical tool to evaluate transcranial magnetic stimulation-related outcomes.

  10. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  11. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    Science.gov (United States)

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information are integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.

  12. The auditory enhancement effect is not reflected in the 80-Hz auditory steady-state response.

    Science.gov (United States)

    Carcagno, Samuele; Plack, Christopher J; Portron, Arthur; Semal, Catherine; Demany, Laurent

    2014-08-01

    The perceptual salience of a target tone presented in a multitone background is increased by the presentation of a precursor sound consisting of the multitone background alone. It has been proposed that this "enhancement" phenomenon results from an effective amplification of the neural response to the target tone. In this study, we tested this hypothesis in humans, by comparing the auditory steady-state response (ASSR) to a target tone that was enhanced by a precursor sound with the ASSR to a target tone that was not enhanced. In order to record neural responses originating in the brainstem, the ASSR was elicited by amplitude modulating the target tone at a frequency close to 80 Hz. The results did not show evidence of an amplified neural response to enhanced tones. In a control condition, we measured the ASSR to a target tone that, instead of being perceptually enhanced by a precursor sound, was acoustically increased in level. This level increase matched the magnitude of enhancement estimated psychophysically with a forward masking paradigm in a previous experimental phase. We found that the ASSR to the tone acoustically increased in level was significantly greater than the ASSR to the tone enhanced by the precursor sound. Overall, our results suggest that the enhancement effect cannot be explained by an amplified neural response at the level of the brainstem. However, an alternative possibility is that brainstem neurons with enhanced responses do not contribute to the scalp-recorded ASSR.
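The stimulus type used here — a target tone sinusoidally amplitude-modulated near 80 Hz, so that the steady-state response phase-locks to the modulation rate — can be sketched generically as below (sampling rate, duration, and frequencies are illustrative placeholders, not the study's parameters):

```python
import math

def am_tone(carrier_hz, mod_hz, depth=1.0, sr=48000, dur=1.0):
    """Sinusoidally amplitude-modulated tone; an ASSR follows mod_hz."""
    n = int(sr * dur)
    return [
        (1 + depth * math.sin(2 * math.pi * mod_hz * t / sr)) / 2
        * math.sin(2 * math.pi * carrier_hz * t / sr)
        for t in range(n)
    ]

# A hypothetical 1-kHz target tone modulated at 80 Hz, the rate range used
# to record ASSRs of predominantly brainstem origin:
stim = am_tone(1000, 80)
```

Modulation rates near 80 Hz are chosen precisely because, as the abstract notes, the resulting ASSR is dominated by brainstem generators rather than cortical ones.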

  13. Predictive uncertainty in auditory sequence processing

    Directory of Open Access Journals (Sweden)

    Niels Chr. eHansen

    2014-09-01

Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.

  14. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
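Shannon entropy, the uncertainty model used in this work, is easy to state: given a model's probability distribution over possible continuations, H = -Σ p·log₂(p). A minimal sketch (the two distributions are invented for illustration, not taken from the study's variable-order Markov model):

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum(p * log2(p)) over a next-note probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-note distributions a melody model might output:
low_uncertainty = [0.85, 0.05, 0.05, 0.05]   # one continuation dominates
high_uncertainty = [0.25, 0.25, 0.25, 0.25]  # all continuations equally likely

print(round(shannon_entropy(low_uncertainty), 3))
print(shannon_entropy(high_uncertainty))  # uniform case is maximal: log2(4) = 2.0
```

High-entropy melodic contexts of the kind selected as stimuli correspond to flat next-note distributions like the second one; low-entropy contexts correspond to peaked distributions like the first.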

  15. Usage of drip drops as stimuli in an auditory P300 BCI paradigm.

    Science.gov (United States)

    Huang, Minqiang; Jin, Jing; Zhang, Yu; Hu, Dewen; Wang, Xingyu

    2018-02-01

Many auditory BCIs use beeps as stimuli, but beeps sound unnatural and unpleasant to some people. Natural sounds have been shown to make people feel comfortable, decrease fatigue, and improve the performance of auditory BCI systems. Drip drops are natural sounds that make listeners feel relaxed and comfortable. In this work, three kinds of drip-drop sounds were used as stimuli in an auditory BCI system to improve its user-friendliness, and the study explored whether drip drops are suitable stimuli for such a system. The auditory paradigm with drip-drop stimuli (the drip-drop paradigm, DP) was compared with the paradigm using beep stimuli (the beep paradigm, BP) in terms of event-related potential amplitudes, online accuracy, and ratings of likability and difficulty. DP yielded significantly higher online accuracy and information transfer rate than BP (p < 0.05 for each, Wilcoxon signed-rank test). DP also received higher likability ratings (p < 0.05, Wilcoxon signed-rank test), with no significant difference in rated difficulty. The results indicate that drip drops are reliable acoustic stimuli for an auditory BCI system.
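The information transfer rate compared between the two paradigms is conventionally computed with the Wolpaw formula from the number of classes, the classification accuracy, and the selection rate. A sketch with hypothetical numbers (not the study's values):

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw ITR: bits per selection, scaled to bits per minute."""
    p, n = accuracy, n_classes
    if p <= 1.0 / n:          # at or below chance, no information transferred
        return 0.0
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# e.g. a hypothetical 6-class auditory speller at 85% accuracy,
# making 4 selections per minute:
print(round(wolpaw_itr(6, 0.85, 4), 2))
```

Under this formula, a paradigm that raises accuracy (as DP did relative to BP) raises the ITR even at a fixed selection rate.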

  16. Odors Bias Time Perception in Visual and Auditory Modalities.

    Science.gov (United States)

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and these were constrained by sensory modality, the valence of the emotional events, and target duration. Biases in time perception could be accounted for by a framework of

  17. Abnormal auditory mismatch response in tinnitus sufferers with high-frequency hearing loss is associated with subjective distress level

    Directory of Open Access Journals (Sweden)

    Berg Patrick

    2004-03-01

Full Text Available Abstract Background Tinnitus is an auditory sensation frequently following hearing loss. After cochlear injury, deafferented neurons become sensitive to neighbouring intact edge-frequencies, guiding an enhanced central representation of these frequencies. As psychoacoustical data indicate enhanced frequency discrimination ability for edge-frequencies that may be related to a reorganization within the auditory cortex, the aim of the present study was twofold: (1) to search for abnormal auditory mismatch responses in tinnitus sufferers and (2) to relate these to subjective indicators of tinnitus. Results Using the EEG mismatch negativity, we demonstrate abnormalities (N = 15) in tinnitus sufferers that are specific to frequencies located at the audiometrically normal lesion-edge as compared to normal hearing controls (N = 15). Groups also differed with respect to the cortical locations of mismatch responsiveness. Sources in the 90–135 ms latency window were generated in more anterior brain regions in the tinnitus group. Both measures of abnormality correlated with emotional-cognitive distress related to tinnitus (r ≈ .76). While these two physiological variables were uncorrelated in the control group, they were correlated in the tinnitus group (r = .72). Concerning relationships with parameters of hearing loss (depth and slope), slope turned out to be an important variable. Generally, the steeper the hearing loss, the less tinnitus-related distress was reported. The associations between slope and the relevant neurophysiological variables are in agreement with this finding. Conclusions The present study is the first to show near-complete separation of tinnitus sufferers from a normal hearing control group based on neurophysiological variables. The finding of lesion-edge specific effects and associations with slope of hearing loss corroborates the assumption that hearing loss is the basis for tinnitus development. It is likely that some central

  18. Multisensory object perception in infancy: 4-month-olds perceive a mistuned harmonic as a separate auditory and visual object.

    Science.gov (United States)

    Smith, Nicholas A; Folland, Nicole A; Martinez, Diana M; Trainor, Laurel J

    2017-07-01

    Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain, Theunissen, Chevalier, Batty, & Taylor, 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. Copyright © 2017 Elsevier B.V. All rights reserved.
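The mistuned-harmonic stimulus has a simple recipe: sum the harmonics of a fundamental and displace one of them from its exact integer multiple (8% in this study). A generic sketch (the fundamental frequency, harmonic count, sampling rate, and duration are illustrative choices, not the study's stimulus parameters):

```python
import math

def complex_tone(f0, n_harmonics, mistuned=None, shift=0.08, sr=44100, dur=0.5):
    """Equal-amplitude harmonic complex; optionally mistune one harmonic."""
    freqs = []
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == mistuned:
            f *= 1 + shift   # e.g. 8% above the exact harmonic frequency
        freqs.append(f)
    n = int(sr * dur)
    samples = [
        sum(math.sin(2 * math.pi * f * t / sr) for f in freqs)
        for t in range(n)
    ]
    return samples, freqs

samples, freqs = complex_tone(200, 6, mistuned=3)
print(freqs)  # the third harmonic sits near 648 Hz instead of 600 Hz
```

Adults — and, per this study, 4-month-olds — tend to hear the displaced component segregate from the rest of the complex as a second auditory object.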

  19. Binaural interaction in the auditory brainstem response: a normative study.

    Science.gov (United States)

    Van Yper, Lindsey N; Vermeire, Katrien; De Vel, Eddy F J; Battmer, Rolf-Dieter; Dhooge, Ingeborg J M

    2015-04-01

    Binaural interaction can be investigated using auditory evoked potentials. A binaural interaction component can be derived from the auditory brainstem response (ABR-BIC) and is considered evidence for binaural interaction at the level of the brainstem. Although click ABR-BIC has been investigated thoroughly, data on 500 Hz tone-burst (TB) ABR-BICs are scarce. In this study, characteristics of click and 500 Hz TB ABR-BICs are described. Furthermore, reliability of both click and 500 Hz TB ABR-BIC are investigated. Eighteen normal hearing young adults (eight women, ten men) were included. ABRs were recorded in response to clicks and 500 Hz TBs. ABR-BICs were derived by subtracting the binaural response from the sum of the monaural responses measured in opposite ears. Good inter-rater reliability is obtained for both click and 500 Hz TB ABR-BICs. The most reliable peak in click ABR-BIC occurs at a mean latency of 6.06 ms (SD 0.354 ms). Reliable 500 Hz TB ABR-BIC are obtained with a mean latency of 9.47 ms (SD 0.678 ms). Amplitudes are larger for 500 Hz TB ABR-BIC than for clicks. The most reliable peak in click ABR-BIC occurs at the downslope of wave V. Five hundred Hertz TB ABR-BIC is characterized by a broad positivity occurring at the level of wave V. The ABR-BIC is a useful technique to investigate binaural interaction in certain populations. Examples are bilateral hearing aid users, bilateral cochlear implant users and bimodal listeners. The latter refers to the combination of unilateral cochlear implantation and contralateral residual hearing. The majority of these patients have residual hearing in the low frequencies. The current study suggests that 500 Hz TB ABR-BIC may be a suitable technique to assess binaural interaction in this specific population of cochlear implant users. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
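The BIC derivation described above is a pointwise waveform operation: sum the two monaural responses and subtract the binaural response, so any departure from simple superposition survives as a residual. A toy sketch (the waveforms are made-up numbers, not ABR recordings):

```python
def binaural_interaction_component(left, right, binaural):
    """BIC(t) = [L(t) + R(t)] - B(t), computed sample by sample."""
    return [(l + r) - b for l, r, b in zip(left, right, binaural)]

# If the binaural response were just the sum of the monaural ones, the BIC
# would be flat; a smaller-than-sum binaural peak leaves a residual deflection.
left     = [0.0, 0.2, 0.5, 0.2, 0.0]
right    = [0.0, 0.2, 0.5, 0.2, 0.0]
binaural = [0.0, 0.4, 0.8, 0.4, 0.0]  # attenuated at the peak

print(binaural_interaction_component(left, right, binaural))
```

In practice the monaural and binaural ABRs are averaged waveforms on a common time base, and the residual peak (e.g. near wave V) is what is read as evidence of binaural interaction.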

  20. ERP evaluation of auditory sensory memory systems in adults with intellectual disability.

    Science.gov (United States)

    Ikeda, Kazunari; Hashimoto, Souichi; Hayashi, Akiko; Kanno, Atsushi

    2009-01-01

The auditory sensory memory stage can be functionally divided into two subsystems: a transient-detector system and a permanent feature-detector system (Näätänen, 1992). We assessed these systems in persons with intellectual disability by measuring the event-related potentials (ERPs) N1 and mismatch negativity (MMN), which reflect the two auditory subsystems, respectively. In addition, the P3a (an ERP reflecting a stage after sensory memory) was evaluated. Either synthesized vowels or simple tones were delivered during a passive oddball paradigm to adults with and without intellectual disability. ERPs were recorded from midline scalp sites (Fz, Cz, and Pz). Relative to the control group, participants with the disability exhibited greater N1 latency and smaller MMN amplitude. The results for N1 amplitude and MMN latency were basically comparable between the groups. IQ scores in participants with the disability revealed no significant relation with N1 and MMN measures, whereas the IQ scores tended to increase significantly as P3a latency decreased. These outcomes suggest that persons with intellectual disability may have distinct malfunctions of the two detector systems at the auditory sensory-memory stage. Moreover, the processes following sensory memory might be partly related to a determinant of mental development.

  1. A frontal cortex event-related potential driven by the basal forebrain

    Science.gov (United States)

    Nguyen, David P; Lin, Shih-Chieh

    2014-01-01

    Event-related potentials (ERPs) are widely used in both healthy and neuropsychiatric conditions as physiological indices of cognitive functions. Contrary to the common belief that cognitive ERPs are generated by local activity within the cerebral cortex, here we show that an attention-related ERP in the frontal cortex is correlated with, and likely generated by, subcortical inputs from the basal forebrain (BF). In rats performing an auditory oddball task, both the amplitude and timing of the frontal ERP were coupled with BF neuronal activity in single trials. The local field potentials (LFPs) associated with the frontal ERP, concentrated in deep cortical layers corresponding to the zone of BF input, were similarly coupled with BF activity and consistently triggered by BF electrical stimulation within 5–10 msec. These results highlight the important and previously unrecognized role of long-range subcortical inputs from the BF in the generation of cognitive ERPs. DOI: http://dx.doi.org/10.7554/eLife.02148.001 PMID:24714497

  2. [Event-related potentials P₃₀₀ with memory function and psychopathology in first-episode paranoid schizophrenia].

    Science.gov (United States)

    Liu, Wei-bo; Chen, Qiao-zhen; Yin, Hou-min; Zheng, Lei-lei; Yu, Shao-hua; Chen, Yi-ping; Li, Hui-chun

    2011-11-01

    To investigate the variability of the event-related potential P₃₀₀ and its relationship with memory function and psychopathology in patients with first-episode paranoid schizophrenia. Thirty patients with first-episode paranoid schizophrenia (patient group) and twenty healthy subjects (control group) were enrolled in the study. The auditory event-related potential P₃₀₀ at the scalp electrodes Cz and Pz and the Wechsler Memory Scale (WMS) were examined in both groups; the Positive And Negative Syndrome Scale (PANSS) was evaluated in the patient group. In comparison with the control group, patients had longer P₃₀₀ latencies [(390.6 ± 47.6) ms at Cz and (393.3 ± 50.1) ms at Pz] (P [...]). First-episode paranoid schizophrenia involves a memory deficit, which can be evaluated comprehensively by P₃₀₀ and WMS. The longer latency of P₃₀₀ might be associated with increased severity of first-episode paranoid schizophrenia.

  3. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    Science.gov (United States)

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects, and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias toward processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high-voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. New modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory-nerve firing patterns might account for the high-voice superiority effect. Simulations show that both place and temporal auditory-nerve coding schemes predict a high-voice superiority well across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the high-voice superiority observed in human listeners.

  4. Sparse representation of sounds in the unanesthetized auditory cortex.

    Directory of Open Access Journals (Sweden)

    Tomás Hromádka

    2008-01-01

    Full Text Available How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency-modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
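
    The qualitative picture reported here (a lognormal rate distribution where only a few percent of neurons fire above 20 spikes/s) can be illustrated with a small simulation. The distribution parameters below are hypothetical, chosen only to reproduce the sparse-coding picture, not fitted to the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 10,000 firing rates (spikes/s) drawn
# from a lognormal distribution; mean/sigma apply to the underlying
# normal and are illustrative only.
rates = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)

# Fraction of neurons driven above 20 spikes/s at any instant:
# under these parameters only a few percent, i.e. a sparse code.
engaged = float(np.mean(rates > 20.0))

# Under a lognormal, log-rates are Gaussian, so their sample
# skewness is near zero; exponentially distributed rates would
# not show this signature.
log_rates = np.log(rates)
log_skew = float(np.mean((log_rates - log_rates.mean()) ** 3)
                 / log_rates.std() ** 3)
```

    A heavy right tail (a few neurons above 50 spikes/s) coexisting with a mode of near-silent neurons is exactly what distinguishes the lognormal from the exponential description.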

  5. Exploring the use of tactile feedback in an ERP-based auditory BCI.

    Science.gov (United States)

    Schreuder, Martijn; Thurlings, Marieke E; Brouwer, Anne-Marie; Van Erp, Jan B F; Tangermann, Michael

    2012-01-01

    Giving direct, continuous feedback on a brain state is common practice in motor imagery based brain-computer interfaces (BCI), but has not been reported for BCIs based on event-related potentials (ERP), where feedback is only given once after a sequence of stimuli. Potentially, direct feedback could allow the user to adjust his strategy during a running trial to obtain the required response. In order to test the usefulness of such feedback, directionally congruent vibrotactile feedback was given during an online auditory BCI experiment. Users received either no feedback, short feedback pulses or continuous feedback. The feedback conditions showed reduced performance both on a behavioral task and in terms of classification accuracy. Several explanations are discussed that give interesting starting points for further research on this topic.

  6. Maturation of Rapid Auditory Temporal Processing and Subsequent Nonword Repetition Performance in Children

    Science.gov (United States)

    Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.

    2012-01-01

    According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…

  7. A Comparison of Independent Event-Related Desynchronization Responses in Motor-Related Brain Areas to Movement Execution, Movement Imagery, and Movement Observation.

    Science.gov (United States)

    Duann, Jeng-Ren; Chiou, Jin-Chern

    2016-01-01

    Electroencephalographic (EEG) event-related desynchronization (ERD) induced by movement imagery or by observing biological movements performed by someone else has recently been used extensively for brain-computer interface-based applications, such as applications used in stroke rehabilitation training and motor skill learning. However, the ERD responses induced by movement imagery and observation might not be as reliable as the ERD responses induced by movement execution. Given that studies on the reliability of the EEG ERD responses induced by these activities are still lacking, here we conducted an EEG experiment with movement imagery, movement observation, and movement execution, each performed multiple times in a pseudorandomized order in the same experimental runs. Then, independent component analysis (ICA) was applied to the EEG data to find the common motor-related EEG source activity shared by the three motor tasks. Finally, conditional EEG ERD responses associated with the three movement conditions were computed and compared. Among the three motor conditions, the EEG ERD responses induced by motor execution revealed alpha power suppression with the highest strengths and longest durations. The ERD responses of movement imagery and movement observation only partially resembled the ERD pattern of the movement execution condition, with slightly better detectability for the ERD responses associated with movement imagery and faster ERD responses for movement observation. This may indicate different levels of involvement of the same motor-related brain circuits during different movement conditions. In addition, because the resulting conditional EEG ERD responses from the ICA preprocessing came with minimal contamination from unrelated and/or artifactual noisy components, this result can serve as a reference for devising a brain-computer interface using the EEG ERD features of movement imagery or observation.
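
    The ERD responses compared in this abstract are conventionally quantified as a percentage band-power drop relative to a pre-event baseline (Pfurtscheller's ERD%). The helper below is a generic sketch of that measure, not the authors' pipeline; the function name and window indexing are mine:

```python
import numpy as np

def erd_percent(power, baseline_idx, event_idx):
    """Event-related desynchronization as a percentage power drop.

    ERD% = (P_baseline - P_event) / P_baseline * 100, where P_* are
    mean band powers over the baseline and event windows. Positive
    values mean desynchronization, e.g. the alpha power suppression
    described above for movement execution.
    """
    p = np.asarray(power, dtype=float)
    p_base = p[baseline_idx].mean()
    p_event = p[event_idx].mean()
    return (p_base - p_event) / p_base * 100.0
```

    For example, an alpha-band power trace that falls from 2.0 to 1.0 (arbitrary units) between baseline and event windows yields an ERD of 50%.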

  8. Encoding of Sucrose's Palatability in the Nucleus Accumbens Shell and Its Modulation by Exteroceptive Auditory Cues

    Directory of Open Access Journals (Sweden)

    Miguel Villavicencio

    2018-05-01

    Full Text Available Although the palatability of sucrose is the primary reason why it is overconsumed, it is not well understood how it is encoded in the nucleus accumbens shell (NAcSh), a brain region involved in reward, feeding, and sensory/motor transformations. Similarly untouched are issues regarding how an external auditory stimulus affects sucrose palatability and, in the NAcSh, the neuronal correlates of this behavior. To address these questions in behaving rats, we investigated how food-related auditory cues modulate sucrose's palatability. The goals were to determine whether NAcSh neuronal responses would track sucrose's palatability (as measured by the increase in hedonically positive oromotor responses, i.e., lick rate), sucrose concentration, and how the NAcSh processes auditory information. Using brief-access tests, we found that sucrose's palatability was enhanced by exteroceptive auditory cues that signal the start and the end of a reward epoch. With only the start cue the rejection of water was accelerated, and the sucrose/water ratio was enhanced, indicating greater palatability. However, the start cue also fragmented licking patterns and decreased caloric intake. In the presence of both start and stop cues, the animals fed continuously and increased their caloric intake. Analysis of the licking microstructure confirmed that auditory cues (either signaling the start alone or start/stop) enhanced sucrose's oromotor-palatability responses. Recordings of extracellular single-unit activity identified several distinct populations of NAcSh responses that tracked either the sucrose palatability responses or the sucrose concentrations by increasing or decreasing their activity. Another neural population fired synchronously with licking and exhibited an enhancement in their coherence with increasing sucrose concentrations. The populations of NAcSh Palatability-related and Lick-Inactive neurons were the most important for decoding sucrose's palatability.
Only the Lick

  9. Development of auditory event-related potentials in infants prenatally exposed to methadone.

    Science.gov (United States)

    Paul, Jonathan A; Logan, Beth A; Krishnan, Ramesh; Heller, Nicole A; Morrison, Deborah G; Pritham, Ursula A; Tisher, Paul W; Troese, Marcia; Brown, Mark S; Hayes, Marie J

    2014-07-01

    Developmental features of the P2 auditory ERP in a change-detection paradigm were examined in infants prenatally exposed to methadone. Opiate-dependent pregnant women maintained on methadone replacement therapy were recruited during pregnancy (N = 60). Current and historical alcohol and substance use, SES, and psychiatric status were assessed with a maternal interview during the third trimester. Medical records were used to collect information regarding maternal medications, monthly urinalysis, and breathalyzer results to confirm comorbid drug and alcohol exposures. Between birth and 4 months, infant ERP change-detection performance was evaluated on one occasion with the oddball paradigm (.2 probability oddball) using pure-tone stimuli (standard = 1 kHz and oddball = 2 kHz frequency) at midline electrode sites Fz, Cz, and Pz. Infant groups were examined in the following developmental windows: 4-15, 16-32, or 33-120 days PNA. Older groups showed increased P2 amplitude at Fz and effective change-detection performance at P2 not seen in the newborn group. Developmental maturation of amplitude and stimulus discrimination for P2 has been reported in developing infants at all of the ages tested, and the data reported here for the older infants are consistent with typical development. However, it has been previously reported that the P2 amplitude difference is detectable in neonates; therefore, the absence of a difference in P2 amplitude between stimuli in the 4-15 days group may reflect impairment of ERP performance by neonatal abstinence syndrome or prenatal methadone exposure. © 2013 Wiley Periodicals, Inc.

  10. Congenital Deafness Reduces, But Does Not Eliminate Auditory Responsiveness in Cat Extrastriate Visual Cortex.

    Science.gov (United States)

    Land, Rüdiger; Radecke, Jan-Ole; Kral, Andrej

    2018-04-01

    Congenital deafness not only affects the development of the auditory cortex, but also the interrelation between the visual and auditory systems. For example, congenital deafness leads to visual modulation of the deaf auditory cortex in the form of cross-modal plasticity. Here we asked whether congenital deafness additionally affects auditory modulation in the visual cortex. We demonstrate that auditory activity, which is normally present in the lateral suprasylvian visual areas of normal-hearing cats, can also be elicited by electrical activation of the auditory system with cochlear implants. We then show that in adult congenitally deaf cats auditory activity in this region was reduced when tested with cochlear implant stimulation. However, the change in this area was small and auditory activity was not completely abolished despite years of congenital deafness. The results document that congenital deafness leads not only to changes in the auditory cortex but also affects auditory modulation of visual areas. However, the results further show a persistence of fundamental cortical sensory functional organization despite congenital deafness. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Reduced auditory efferent activity in childhood selective mutism.

    Science.gov (United States)

    Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava

    2004-06-01

    Selective mutism (SM) is a psychiatric disorder of childhood characterized by a consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emissions, suppression of transient evoked otoacoustic emissions, and auditory brainstem responses. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission, as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.

  12. The effects of divided attention on auditory priming.

    Science.gov (United States)

    Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W

    2007-09-01

    Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.

  13. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  14. Effects of Temporal Congruity Between Auditory and Visual Stimuli Using Rapid Audio-Visual Serial Presentation.

    Science.gov (United States)

    An, Xingwei; Tang, Jiabei; Liu, Shuang; He, Feng; Qi, Hongzhi; Wan, Baikun; Ming, Dong

    2016-10-01

    Combining visual and auditory stimuli in event-related potential (ERP)-based spellers has gained more attention in recent years. Few of these studies have noted the differences in ERP components and system efficiency caused by shifting the visual and auditory onsets. Here, we aim to study the effect of temporal congruity of auditory and visual stimulus onsets on a bimodal brain-computer interface (BCI) speller. We designed five combined visual and auditory paradigms with different visual-to-auditory delays (-33 to +100 ms). Eleven participants took part in this study. ERPs were acquired and aligned according to visual and auditory stimulus onsets, respectively. ERPs at the Fz, Cz, and PO7 channels were studied through statistical analysis of the different conditions, both for visual-aligned and audio-aligned ERPs. Based on the visual-aligned ERPs, classification accuracy was also analyzed to assess the effects of visual-to-auditory delays. The latencies of ERP components depended mainly on the visual stimulus onset. Auditory stimulus onsets influenced mainly the early-component accuracies, whereas the visual stimulus onset determined the later-component accuracies. The latter, however, played a dominant role in overall classification. This study is important for further studies to achieve better explanations and ultimately determine how to optimize bimodal BCI applications.

  15. Cognitive Training Enhances Auditory Attention Efficiency in Older Adults

    Directory of Open Access Journals (Sweden)

    Jennifer L. O’Brien

    2017-10-01

    Full Text Available Auditory cognitive training (ACT) improves attention in older adults; however, the underlying neurophysiological mechanisms are still unknown. The present study examined the effects of ACT on the P3b event-related potential, reflecting attention allocation (amplitude) and speed of processing (latency) during stimulus categorization, and on the P1-N1-P2 complex, reflecting perceptual processing (amplitude and latency). Participants completed an auditory oddball task before and after 10 weeks of ACT (n = 9) or a no-contact control period (n = 15). Parietal P3b amplitudes to oddball stimuli decreased at post-test in the trained group as compared to those in the control group, and frontal P3b amplitudes showed a similar trend, potentially reflecting more efficient attention allocation after ACT. No advantages for the ACT group were evident for auditory perceptual processing or speed of processing in this small sample. Our results provide preliminary evidence that ACT may enhance the efficiency of attention allocation, which may account for the positive impact of ACT on the everyday functioning of older adults.

  16. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard

    2010-03-01

    A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate with cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing. 2009 Elsevier B.V. All rights reserved.

  17. Auditory steady-state responses in cochlear implant users: Effect of modulation frequency and stimulation artifacts.

    Science.gov (United States)

    Gransier, Robin; Deprez, Hanne; Hofmann, Michael; Moonen, Marc; van Wieringen, Astrid; Wouters, Jan

    2016-05-01

    Previous studies have shown that objective measures based on stimulation with low-rate pulse trains fail to predict the threshold levels of cochlear implant (CI) users for high-rate pulse trains, as used in clinical devices. Electrically evoked auditory steady-state responses (EASSRs) can be elicited by modulated high-rate pulse trains, and can potentially be used to objectively determine threshold levels of CI users. The responsiveness of the auditory pathway of profoundly hearing-impaired CI users to modulation frequencies is, however, not known. In the present study we investigated the responsiveness of the auditory pathway of CI users to a monopolar 500 pulses per second (pps) pulse train modulated between 1 and 100 Hz. EASSRs to forty-three modulation frequencies, elicited at the subject's maximum comfort level, were recorded by means of electroencephalography. Stimulation artifacts were removed by a linear interpolation between a pre- and post-stimulus sample (i.e., blanking). The phase delay across modulation frequencies was used to differentiate between the neural response and a possible residual stimulation artifact after blanking. Stimulation artifacts were longer than the inter-pulse interval of the 500 pps pulse train for recording electrodes ipsilateral to the CI. As a result, the stimulation artifacts could not be removed by linear interpolation for recording electrodes ipsilateral to the CI. However, artifact-free responses could be obtained in all subjects from recording electrodes contralateral to the CI, when subject-specific reference electrodes (Cz or Fpz) were used. EASSRs to modulation frequencies within the 30-50 Hz range resulted in significant responses in all subjects. Only a small number of significant responses could be obtained, during a measurement period of 5 min, that originate from the brain stem (i.e., modulation frequencies in the 80-100 Hz range). This reduced synchronized activity of brain stem
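
    The blanking step described in this abstract (linear interpolation between a pre- and a post-stimulus sample) can be sketched generically. The function name, sample indexing, and example signal below are illustrative, not taken from the study:

```python
import numpy as np

def blank_artifact(signal, start, stop):
    """Remove a stimulation artifact by 'blanking'.

    Samples in [start, stop) are replaced by a straight line drawn
    between the sample just before the artifact (start - 1) and the
    sample just after it (stop), i.e. linear interpolation between a
    pre- and post-stimulus sample as described above. Fails for
    artifacts longer than the inter-pulse interval, since then no
    clean anchor samples exist between pulses.
    """
    out = np.asarray(signal, dtype=float).copy()
    x0, x1 = start - 1, stop              # anchor samples outside the artifact
    out[start:stop] = np.interp(np.arange(start, stop),
                                [x0, x1], [out[x0], out[x1]])
    return out
```

    For example, two corrupted samples flanked by clean neighbors are replaced by the straight line through those neighbors, which is why residual artifact must then be checked separately (here, via the phase delay across modulation frequencies).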

  18. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    Science.gov (United States)

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

    The formation of echoic memory traces has traditionally been inferred from the enhanced responses to its deviations. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250 ms after sound deviation, is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory-trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250 ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little interest. This study intends to investigate the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, where trains of different frequencies and different lengths are presented randomly. Source generators of repetition-enhanced (RE) and repetition-suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95-150 ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230-270 ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, such as the medial parietal cortex and frontal areas. The different timing and location of neural generators involved in RS and RE point to the existence of functionally separate mechanisms devoted to acoustic memory-trace formation at different auditory processing stages of the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Impaired theta phase-resetting underlying auditory N1 suppression in chronic alcoholism.

    Science.gov (United States)

    Fuentemilla, Lluis; Marco-Pallarés, Josep; Gual, Antoni; Escera, Carles; Polo, Maria Dolores; Grau, Carles

    2009-02-18

    It has been suggested that chronic alcoholism may lead to altered neural mechanisms related to inhibitory processes. Here, we studied auditory N1 suppression phenomena (i.e. amplitude reduction with repetitive stimuli) in chronic alcoholic patients as an early-stage information-processing brain function involving inhibition by the analysis of the N1 event-related potential and time-frequency computation (spectral power and phase-resetting). Our results showed enhanced neural theta oscillatory phase-resetting underlying N1 generation in suppressed N1 event-related potential. The present findings suggest that chronic alcoholism alters neural oscillatory synchrony dynamics at very early stages of information processing.

  20. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    Science.gov (United States)

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
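
    The spectral-amplitude measure used in this abstract (amplitude of the brainstem response at harmonics of the stimulus) is typically computed from the magnitude spectrum of the averaged waveform. This is a generic sketch of that analysis; the function name and parameters are assumptions, not the authors' code:

```python
import numpy as np

def harmonic_amplitudes(response, fs, f0, n_harmonics):
    """Spectral amplitudes of an averaged brainstem response at the
    first n_harmonics multiples of the stimulus fundamental f0.

    Computes a single-sided amplitude spectrum with the FFT and
    reads off the bin nearest each harmonic; fs is the sampling
    rate in Hz.
    """
    x = np.asarray(response, dtype=float)
    spectrum = np.abs(np.fft.rfft(x)) / len(x) * 2.0   # single-sided amplitude
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    amps = []
    for k in range(1, n_harmonics + 1):
        idx = int(np.argmin(np.abs(freqs - k * f0)))
        amps.append(spectrum[idx])
    return np.array(amps)
```

    Reduced amplitudes at the higher harmonics, as reported here for amusics, would show up directly as smaller values toward the end of the returned array.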

  1. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    -speech modulation is not directly related to limited auditory-motor adaptation; and in AWS, DAF paradoxically tends to normalize their otherwise limited pre-speech auditory modulation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Training leads to increased auditory brain-computer interface performance of end-users with motor impairments.

    Science.gov (United States)

    Halder, S; Käthner, I; Kübler, A

    2016-02-01

    Auditory brain-computer interfaces are an assistive technology that can restore communication for motor impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users that may lose or have lost gaze control. We attempted to show that motor impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom with additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues, can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training and specifically end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
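Information transfer rates like those reported above are conventionally computed with the Wolpaw formula, which combines the number of selectable classes, the selection accuracy, and the selection speed. A sketch under that assumption (the speller's actual class count is not given in this record, so the values below are illustrative):

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw information transfer rate (bits/min), the standard BCI metric."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        bits_per_selection = math.log2(n)
    else:
        bits_per_selection = (math.log2(n) + p * math.log2(p)
                              + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits_per_selection * selections_per_min

# The ">1800%" figure reads the session ratio as a percentage:
# 3.08 / 0.17 is roughly an 18.1-fold increase, i.e. about 1812%.
```

With a hypothetical 36-class speller at 92% accuracy, each selection carries somewhat less than the log2(36) ≈ 5.17 bits of a perfect selection.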

  3. The pattern of auditory brainstem response wave V maturation in cochlear-implanted children.

    Science.gov (United States)

    Thai-Van, Hung; Cozma, Sebastian; Boutitie, Florent; Disant, François; Truy, Eric; Collet, Lionel

    2007-03-01

    Maturation of acoustically evoked brainstem responses (ABR) in hearing children is not complete at birth but rather continues over the first two years of life. In particular, it has been established that the decrease in ABR wave V latency can be modeled as the sum of two decaying exponential functions with respective time-constants of 4 and 50 weeks [Eggermont, J.J., Salamy, A., 1988a. Maturational time-course for the ABR in preterm and full term infants. Hear Res 33, 35-47; Eggermont, J.J., Salamy, A., 1988b. Development of ABR parameters in a preterm and a term born population. Ear Hear 9, 283-9]. Here, we investigated the maturation of electrically evoked auditory brainstem responses (EABR) in 55 deaf children who recovered hearing after cochlear implantation, and proposed a predictive model of EABR maturation depending on the onset of deafness. The pattern of EABR maturation over the first 2 years of cochlear implant use was compared with the normal pattern of ABR maturation in hearing children. Changes in EABR wave V latency over the 2 years following cochlear implant connection were analyzed in two groups of children. The first group (n=41) consisted of children with early-onset of deafness (mostly congenital), and the second (n=14) of children who had become profoundly deaf after 1 year of age. The modeling of changes in EABR wave V latency with time was based on the mean values from each of the two groups, allowing comparison of the rates of EABR maturation between groups. Differences between EABRs elicited at the basal and apical ends of the implant electrode array were also tested. There was no influence of age at implantation on the rate of wave V latency change. The main factor for EABR changes was the time in sound. Indeed, significant maturation was observed over the first 2 years of implant use only in the group with early-onset deafness. In this group maturation of wave V progressed as in the ABR model of [Eggermont, J.J., Salamy, A., 1988a
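The two-exponential latency model cited from Eggermont and Salamy can be sketched as follows. Only the 4- and 50-week time constants come from the abstract; the asymptote and amplitude values below are illustrative placeholders, not the published fits:

```python
import numpy as np

def wave_v_latency(age_weeks, l_adult=5.6, a_fast=1.5, a_slow=1.2,
                   tau_fast=4.0, tau_slow=50.0):
    """ABR wave V latency (ms) as a sum of two decaying exponentials.

    Time constants of 4 and 50 weeks follow the cited model; l_adult,
    a_fast and a_slow are hypothetical values for illustration only.
    """
    t = np.asarray(age_weeks, dtype=float)
    return l_adult + a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)
```

The fast component dominates over the first months, the slow component over the first two years, after which latency settles at the adult asymptote — matching the observation that maturation continues over roughly the first two years.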

  4. Irrelevant Auditory and Visual Events Induce a Visual Attentional Blink

    NARCIS (Netherlands)

    Van der Burg, Erik; Nieuwenstein, Mark R.; Theeuwes, Jan; Olivers, Christian N. L.

    2013-01-01

    In the present study we investigated whether a task-irrelevant distractor can induce a visual attentional blink pattern. Participants were asked to detect only a visual target letter (A, B, or C) and to ignore the preceding auditory, visual, or audiovisual distractor. An attentional blink was

  5. The effect of automatic blink correction on auditory evoked potentials.

    Science.gov (United States)

    Korpela, Jussi; Vigário, Ricardo; Huotilainen, Minna

    2012-01-01

The effects of blink correction on auditory event-related potential (ERP) waveforms are assessed. Two blink correction strategies are compared. ICA-SSP combines independent component analysis (ICA) with signal space projection (SSP) and ICA-EMD uses empirical mode decomposition (EMD) to improve the performance of the standard ICA method. Five volunteer subjects performed an auditory oddball task. The resulting ERPs are used to compare the two blink correction methods to each other and against blink rejection. The results suggest that both methods qualitatively preserve the ERP waveform but that they underestimate some of the peak amplitudes. ICA-EMD performs slightly better than ICA-SSP. In conclusion, the use of blink correction is justified, especially if blink rejection leads to severe data loss.
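The SSP half of ICA-SSP rests on estimating a blink spatial topography and projecting it out of channel space. A minimal numpy sketch of that idea (not the paper's implementation; a rank-1 projector built from averaged blink epochs):

```python
import numpy as np

def ssp_blink_projector(blink_epochs):
    """Build a rank-1 SSP operator from averaged blink epochs.

    blink_epochs : array, shape (n_blinks, n_channels, n_samples)
    Returns P, shape (n_channels, n_channels); apply as clean = P @ data.
    """
    avg = blink_epochs.mean(axis=0)                 # average blink, (n_ch, n_samp)
    u, _, _ = np.linalg.svd(avg, full_matrices=False)
    topo = u[:, :1]                                 # dominant blink topography
    return np.eye(avg.shape[0]) - topo @ topo.T     # project the topography out
```

Because the projection also removes any genuine brain activity lying along the blink topography, some peak amplitudes are necessarily underestimated, which is consistent with the attenuation the authors report.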

  6. The auditory brainstem response in two lizard species

    DEFF Research Database (Denmark)

    Brittan-Powell, Elizabeth F; Christensen-Dalsgaard, Jakob; Tang, Yezhong

    2010-01-01

    Although lizards have highly sensitive ears, it is difficult to condition them to sound, making standard psychophysical assays of hearing sensitivity impractical. This paper describes non-invasive measurements of the auditory brainstem response (ABR) in both Tokay geckos (Gekko gecko; nocturnal...... animals, known for their loud vocalizations) and the green anole (Anolis carolinensis, diurnal, non-vocal animals). Hearing sensitivity was measured in 5 geckos and 7 anoles. The lizards were sedated with isoflurane, and ABRs were measured at levels of 1 and 3% isoflurane. The typical ABR waveform......). Above 5 kHz, however, anoles were more than 20 dB more sensitive than geckos and showed a wider range of sensitivity (1-7 kHz). Generally, thresholds from ABR audiograms were comparable to those of small birds. Best hearing sensitivity, however, extended over a larger frequency range in lizards than...

  7. Evolutionary conservation and neuronal mechanisms of auditory perceptual restoration.

    Science.gov (United States)

    Petkov, Christopher I; Sutter, Mitchell L

    2011-01-01

    Auditory perceptual 'restoration' occurs when the auditory system restores an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago using it to model a noisy environmental scene with competing sounds. It has become clear that not only humans experience auditory restoration: restoration has been broadly conserved in many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The aggregate of data resulting from multiple approaches across species has begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, supportive of multiple mechanisms working within a species. Yet a general principle has emerged that responses correlated with restoration mimic the response that would have been given to the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of 'auditory scene analysis' to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings. © 2010 Elsevier B.V. All rights reserved.

  8. Auditory beat stimulation and its effects on cognition and mood states.

    Science.gov (United States)

    Chaieb, Leila; Wilpert, Elke Caroline; Reber, Thomas P; Fell, Juergen

    2015-01-01

Auditory beat stimulation may be a promising new tool for the manipulation of cognitive processes and the modulation of mood states. Here, we aim to review the literature examining the most current applications of auditory beat stimulation and its targets. We give a brief overview of research on auditory steady-state responses and their relationship to auditory beat stimulation (ABS). We summarize relevant studies investigating the neurophysiological changes related to ABS and how they impact the design of appropriate stimulation protocols. Focusing on binaural-beat stimulation, we then discuss the role of monaural- and binaural-beat frequencies in cognition and mood states, in addition to their efficacy in targeting disease symptoms. We aim to highlight important points concerning stimulation parameters and try to address why there are often contradictory findings with regard to the outcomes of ABS.

  9. Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI

    Directory of Open Access Journals (Sweden)

    Martijn eSchreuder

    2011-10-01

Representing an intuitive spelling interface for brain-computer interfaces (BCI) in the auditory domain is not straightforward. In consequence, all existing approaches based on event-related potentials (ERP) rely at least partially on a visual representation of the interface. This online study introduces an auditory spelling interface that eliminates the necessity for such a visualization. In up to two sessions, a group of healthy subjects (N=21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multiclass Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 characters/minute (7.55 bits/minute) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). For the first time, the presented work shows that an auditory BCI can reach performances similar to state-of-the-art visual BCIs based on covert attention. These results represent an important step towards a purely auditory BCI.
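Dynamic stopping, in general form, means accumulating classifier evidence over stimulus repetitions and committing to a selection as soon as one class clearly leads, rather than always using a fixed number of repetitions. A generic sketch of that idea (the paper's actual stopping criterion is not described in this record; the margin rule below is a hypothetical stand-in):

```python
import numpy as np

def dynamic_stop(scores_per_round, margin, max_rounds=10):
    """Select a symbol as soon as the accumulated evidence for the best
    class leads the runner-up by `margin`; otherwise use all rounds.

    scores_per_round : (n_rounds, n_classes) classifier scores per repetition.
    Returns (selected_class_index, rounds_used).
    """
    acc = np.zeros(scores_per_round.shape[1])
    rounds = 0
    for s in scores_per_round[:max_rounds]:
        acc += s
        rounds += 1
        ranked = np.sort(acc)
        if ranked[-1] - ranked[-2] >= margin:
            break                     # confident enough: stop early
    return int(np.argmax(acc)), rounds
```

Stopping early on easy selections is what raises the average characters per minute without sacrificing accuracy on hard ones.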

  10. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi eToida

    2016-01-01

Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.

  11. Attention to affective pictures in closed head injury: event-related brain potentials and cardiac responses.

    Science.gov (United States)

    Solbakk, Anne-Kristin; Reinvang, Ivar; Svebak, Sven; Nielsen, Christopher S; Sundet, Kjetil

    2005-02-01

    We examined whether closed head injury patients show altered patterns of selective attention to stimulus categories that naturally evoke differential responses in healthy people. Self-reported rating and electrophysiological (event-related potentials [ERPs], heart rate [HR]) responses to affective pictures were studied in patients with mild head injury (n = 20; CT/MRI negative), in patients with predominantly frontal brain lesions (n = 12; CT/MRI confirmed), and in healthy controls (n = 20). Affective valence similarly modulated HR and ERP responses in all groups, but group differences occurred that were independent of picture valence. The attenuation of P3-slow wave amplitudes in the mild head injury group indicates a reduction in the engagement of attentional resources to the task. In contrast, the general enhancement of ERP amplitudes at occipital sites in the group with primarily frontal brain injury may reflect disinhibition of input at sensory receptive areas, possibly due to a deficit in top-down modulation performed by anterior control systems.

  12. Auditory Brainstem Responses and EMFs Generated by Mobile Phones.

    Science.gov (United States)

    Khullar, Shilpa; Sood, Archana; Sood, Sanjay

    2013-12-01

There has been a manifold increase in the number of mobile phone users throughout the world, with the current number of users exceeding 2 billion. However, this advancement in technology, like many others, is accompanied by a progressive increase in the frequency and intensity of electromagnetic waves, without consideration of the health consequences. The aim of our study was to advance our understanding of the potential adverse effects of GSM mobile phones on auditory brainstem responses (ABRs). Sixty subjects were selected for the study and divided into three groups of 20 each based on their usage of mobile phones. Their ABRs were recorded and analysed for latency of waves I-V as well as interpeak latencies I-III, I-V and III-V (in ms). Results revealed no significant difference in the ABR parameters between group A (control group) and group B (subjects using mobile phones for a maximum of 30 min/day for 5 years). However, the latency of waves was significantly prolonged in group C (subjects using mobile phones for 10 years for a maximum of 30 min/day) as compared to the control group. Based on our findings, we concluded that long-term exposure to mobile phones may affect conduction in the peripheral portion of the auditory pathway. However, more research needs to be done to study the long-term effects of mobile phones, particularly of newer technologies like smartphones and 3G.

  13. The p300 event-related potential technique for libido assessment in women with hypoactive sexual desire disorder.

    Science.gov (United States)

    Vardi, Yoram; Sprecher, Elliot; Gruenwald, Ilan; Yarnitsky, David; Gartman, Irena; Granovsky, Yelena

    2009-06-01

There is a need for an objective technique to assess the degree of hypoactive sexual desire disorder (HSDD). Recently, we described such a methodology (the event-related potential [ERP] technique) based on recording of p300 electroencephalography (EEG) waves elicited by auditory stimuli during synchronous exposure to erotic films. Our aims were to compare the sexual interest of sexually healthy women to that of females with sexual dysfunction (FSD) using ERP, and to explore whether FSD women with and without HSDD would respond differently to two different types of erotic stimuli: films containing (I) or not containing (NI) sexual intercourse scenes. Twenty-two women with FSD, of whom nine had HSDD only, and 30 sexually healthy women were assessed by the Female Sexual Functioning Index. The ERP methodology was performed applying erotic NI or I films. The main outcome measure was the percent of auditory p300 amplitude reduction (PR) in response to erotic stimuli, compared within and between all three groups for each film type. PRs to each film type were similar in sexually healthy women (60.6% +/- 40.3 [NI] and 51.7% +/- 32.3 [I]), while in women with FSD, reduction was greater when viewing the NI vs. I erotic films (71.4% +/- 41.0 vs. 37.7% +/- 45.7; P = 0.0099). This difference was mainly due to the greater PR of the subgroup with HSDD in response to NI vs. I films (77.7% +/- 46.7 vs. 17.0% +/- 50.3) than in the FSD women without HSDD or the sexually healthy women (67.5% +/- 38.7 vs. 50.4% +/- 39.4, respectively), P = 0.0084. For comparisons, we used a mixed-model one-way analysis of variance. Differences in neurophysiological response patterns between sexually healthy vs. sexually dysfunctional females may point to a specific inverse discrimination ability for sexually relevant information in the subgroup of women with HSDD. These findings suggest that the p300 ERP technique could be used as an objective quantitative tool for libido assessment in sexually dysfunctional women.

  14. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, neither auditory nor visual cortex tDCS produced any measurable effect on auditory TRE. Our study revealed the differing nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Neurophysiological correlates of response inhibition predict relapse in detoxified alcoholic patients: some preliminary evidence from event-related potentials

    Directory of Open Access Journals (Sweden)

    Petit G

    2014-06-01

Géraldine Petit, Agnieszka Cimochowska, Charles Kornreich, Catherine Hanak, Paul Verbanck, Salvatore Campanella; Laboratory of Psychological Medicine and Addictology, ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Brussels, Belgium. Background: Alcohol dependence is a chronic relapsing disease. The impairment of response inhibition and alcohol-cue reactivity are the main cognitive mechanisms that trigger relapse. Despite the interaction suggested between the two processes, they have long been investigated as two different lines of research. The present study aimed to investigate the interaction between response inhibition and alcohol-cue reactivity and their potential link with relapse. Materials and methods: Event-related potentials were recorded during a variant of a “go/no-go” task. Frequent and rare stimuli (to be inhibited) were superimposed on neutral, nonalcohol-related, and alcohol-related contexts. The task was administered following a 3-week detoxification course. Relapse outcome was measured after 3 months, using self-reported abstinence. There were 27 controls (seven females) and 27 patients (seven females), among whom 13 relapsed during the 3-month follow-up period. The no-go N2, no-go P3, and the “difference” wave (P3d) were examined with the aim of linking neural correlates of response inhibition in alcohol-related contexts to the observed relapse rate. Results: Results showed that 1) at the behavioral level, alcohol-dependent patients made significantly more commission errors than controls (P<0.001), independently of context; 2) through the subtraction no-go P3 minus go P3, this inhibition deficit was neurophysiologically indexed in patients with greater P3d amplitudes (P=0.034); and 3) within the patient group, increased P3d amplitude enabled us to differentiate between future relapsers and nonrelapsers (P=0.026). Conclusion: Our findings suggest that recently detoxified alcoholics are characterized by poorer
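The P3d "difference" wave used above is simply the average no-go ERP minus the average go ERP, with amplitude then read off in a P3 latency window. A minimal numpy sketch (the window and sampling values below are illustrative, not the study's exact parameters):

```python
import numpy as np

def difference_wave(nogo_epochs, go_epochs):
    """P3d 'difference' wave: average no-go ERP minus average go ERP.
    Both inputs: (n_trials, n_samples) for a single electrode."""
    return nogo_epochs.mean(axis=0) - go_epochs.mean(axis=0)

def peak_amplitude(wave, sfreq, tmin, window=(0.30, 0.60)):
    """Maximum amplitude within a latency window (seconds) -- here a
    typical P3 range; the study's actual window is not given in the record."""
    i0 = int((window[0] - tmin) * sfreq)
    i1 = int((window[1] - tmin) * sfreq)
    return wave[i0:i1].max()
```

Subtracting the go response isolates inhibition-related activity from processes common to both trial types, which is why the P3d rather than the raw no-go P3 was linked to relapse.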

  16. Auditory-vocal mirroring in songbirds.

    Science.gov (United States)

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role on song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  17. Auditory evoked functions in ground crew working in high noise environment of Mumbai airport.

    Science.gov (United States)

    Thakur, L; Anand, J P; Banerjee, P K

    2004-10-01

The continuous exposure to the relatively high level of noise in the surroundings of an airport is likely to affect the central pathway of the auditory system as well as the cognitive functions of the people working in that environment. The brainstem auditory evoked responses (BAER), mid latency response (MLR) and P300 response of the ground crew employees working in Mumbai airport were studied to evaluate the effects of continuous exposure to the high level of noise of the surroundings of the airport on these responses. BAER, P300 and MLR were recorded using a Nicolet Compact-4 (USA) instrument. Audiometry was also monitored with a GSI-16 audiometer. There was a significant increase in the peak III latency of the BAER in the subjects exposed to noise compared to controls, with no change in their P300 values. The exposed group showed hearing loss at different frequencies. The exposure to the high level of noise caused a considerable decline in auditory conduction up to the level of the brainstem, with no significant change in conduction in the midbrain, subcortical areas, auditory cortex and associated areas. There was also no significant change in cognitive function as measured by the P300 response.

  18. Non-linear laws of echoic memory and auditory change detection in humans

    OpenAIRE

    Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke

    2010-01-01

    Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two to...

  19. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli with only few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Construction and Updating of Event Models in Auditory Event Processing

    Science.gov (United States)

    Huff, Markus; Maurer, Annika E.; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-01-01

    Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose changes in the sensory information to trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event…

  1. Maturation of auditory neural processes in autism spectrum disorder - A longitudinal MEG study.

    Science.gov (United States)

    Port, Russell G; Edgar, J Christopher; Ku, Matthew; Bloy, Luke; Murray, Rebecca; Blaskey, Lisa; Levy, Susan E; Roberts, Timothy P L

    2016-01-01

    Individuals with autism spectrum disorder (ASD) show atypical brain activity, perhaps due to delayed maturation. Previous studies examining the maturation of auditory electrophysiological activity have been limited due to their use of cross-sectional designs. The present study took a first step in examining magnetoencephalography (MEG) evidence of abnormal auditory response maturation in ASD via the use of a longitudinal design. Initially recruited for a previous study, 27 children with ASD and nine typically developing (TD) children, aged 6- to 11-years-old, were re-recruited two to five years later. At both timepoints, MEG data were obtained while participants passively listened to sinusoidal pure-tones. Bilateral primary/secondary auditory cortex time domain (100 ms evoked response latency (M100)) and spectrotemporal measures (gamma-band power and inter-trial coherence (ITC)) were examined. MEG measures were also qualitatively examined for five children who exhibited "optimal outcome", participants who were initially on spectrum, but no longer met diagnostic criteria at follow-up. M100 latencies were delayed in ASD versus TD at the initial exam (~ 19 ms) and at follow-up (~ 18 ms). At both exams, M100 latencies were associated with clinical ASD severity. In addition, gamma-band evoked power and ITC were reduced in ASD versus TD. M100 latency and gamma-band maturation rates did not differ between ASD and TD. Of note, the cohort of five children that demonstrated "optimal outcome" additionally exhibited M100 latency and gamma-band activity mean values in-between TD and ASD at both timepoints. Though justifying only qualitative interpretation, these "optimal outcome" related data are presented here to motivate future studies. Children with ASD showed perturbed auditory cortex neural activity, as evidenced by M100 latency delays as well as reduced transient gamma-band activity. Despite evidence for maturation of these responses in ASD, the neural abnormalities

  2. MR and genetics in schizophrenia: Focus on auditory hallucinations

    International Nuclear Information System (INIS)

    Aguilar, Eduardo Jesus; Sanjuan, Julio; Garcia-Marti, Gracian; Lull, Juan Jose; Robles, Montserrat

    2008-01-01

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significant decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented

  4. Relations between Rainfall and Postfire Debris-Flow- and Flood-Event Magnitudes for Emergency-Response Planning, San Gabriel Mountains, Southern California, USA

    Science.gov (United States)

    Cannon, Susan; Collins, Larry; Boldt, Eric; Staley, Dennis

    2010-05-01

    .46 defines the rainfall conditions above which Magnitude III events can be expected. Rainfall trigger-event magnitude relations are linked with potential emergency-response actions in the form of an emergency-response decision chart. The chart leads a user through steps to 1) determine potential event magnitudes, and 2) identify possible evacuation and resource-deployment levels as a function of either individual storm forecasts or measured precipitation during storms. The ability to use this information in the planning and response decision-making process may result in significant financial savings and increased safety for both the public and emergency responders.
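
    The decision-chart logic described above maps a storm's rainfall to a magnitude class by checking it against intensity-duration thresholds. The sketch below illustrates that generic logic; the power-law coefficients are hypothetical placeholders, not the thresholds actually derived for the San Gabriel Mountains in the report.

```python
# Generic intensity-duration threshold check for postfire debris-flow warning.
# The coefficients (a, b) below are HYPOTHETICAL placeholders, not the values
# derived for the San Gabriel Mountains in the study.

def exceeds_threshold(intensity_mm_h, duration_h, a, b):
    """Power-law threshold I = a * D**b: True if the storm plots above it."""
    return intensity_mm_h > a * duration_h ** b

def magnitude_class(intensity_mm_h, duration_h):
    """Map a storm to a coarse magnitude class (I-III) using nested
    hypothetical thresholds, mirroring the decision-chart structure."""
    if exceeds_threshold(intensity_mm_h, duration_h, a=20.0, b=-0.5):
        return "III"
    if exceeds_threshold(intensity_mm_h, duration_h, a=10.0, b=-0.5):
        return "II"
    return "I"

print(magnitude_class(25.0, 1.0))  # intense short storm
print(magnitude_class(5.0, 2.0))   # weak storm
```

In practice the storm input would come from either a forecast or rain-gauge measurements during the storm, as the chart allows.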

  5. An Event Related Field Study of Rapid Grammatical Plasticity in Adult Second-Language Learners

    Science.gov (United States)

    Bastarrika, Ainhoa; Davidson, Douglas J.

    2017-01-01

    The present study used magnetoencephalography (MEG) to investigate how Spanish adult learners of Basque respond to morphosyntactic violations after a short period of training on a small fragment of Basque grammar. Participants (n = 17) were exposed to violation and control phrases in three phases (pretest, training, generalization-test). In each phase participants listened to short Basque phrases and judged whether they were correct or incorrect. During the pretest and generalization-test, participants did not receive any feedback. During the training blocks feedback was provided after each response. We also ran two Spanish control blocks before and after training. We analyzed the event-related magnetic field (ERF) recorded in response to a critical word during all three phases. In the pretest, classification was below chance and we found no electrophysiological differences between violation and control stimuli. Then participants were explicitly taught a Basque grammar rule. From the first training block, participants were able to correctly classify control and violation stimuli, and an evoked violation response was present. Although the timing of the electrophysiological responses matched participants' L1 effect, the effect size was smaller for L2 and the topographical distribution differed from the L1. While the L1 effect was bilaterally distributed over the auditory sensors, the L2 effect was present at right frontal sensors. During training blocks two and three, the violation-control effect size increased and the topography evolved toward a more L1-like pattern. Moreover, this pattern was maintained in the generalization test. We conclude that rapid changes in neuronal responses can be observed in adult learners of a simple morphosyntactic rule, and that native-like responses can be achieved at least for small fragments of a second language. PMID:28174530

  6. Mining known attack patterns from security-related events

    Directory of Open Access Journals (Sweden)

    Nicandro Scarabeo

    2015-10-01

    Managed Security Services (MSS) have become an essential asset for companies to protect their infrastructure from hacking attempts such as unauthorized behaviour, denial of service (DoS), malware propagation, and anomalies. A proliferation of attacks has created the need to install more network probes and to collect more security-related events in order to assure the best coverage, which is necessary for generating incident responses. The increase in the volume of data to analyse has created a demand for specific tools that automatically correlate events and gather them into pre-defined attack scenarios. Motivated by Above Security, a company specialized in the sector, and by the National Research Council Canada (NRC), we propose a new data mining system that employs text mining techniques to dynamically relate security-related events in order to reduce analysis time, increase the quality of the reports, and automatically build correlated scenarios.
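
    The abstract says the system relates security-related events with text mining but gives no algorithmic detail. One generic technique for relating short event descriptions is TF-IDF weighting compared by cosine similarity, sketched below; the event strings and the particular weighting scheme are illustrative assumptions, not the paper's actual system.

```python
import math
from collections import Counter

# Toy event descriptions (illustrative assumptions, not from the paper).
events = [
    "denial of service attack against web server",
    "dos flood detected on web server",
    "malware propagation via email attachment",
]

def tf_idf(docs):
    """Plain TF-IDF vectors over whitespace tokens (smoothed IDF)."""
    tokenized = [doc.split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    return [
        {t: c / len(toks) * math.log((1 + n) / (1 + df[t]))
         for t, c in Counter(toks).items()}
        for toks in tokenized
    ]

def cosine(u, v):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

vecs = tf_idf(events)
# The two DoS-related events should score as more similar to each other
# than either is to the malware event.
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))
```

A real MSS pipeline would of course add tokenization, normalization, and clustering on top of such a similarity measure; the point here is only how textual similarity can group related events into candidate scenarios.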

  7. Dynamics of large-scale cortical interactions at high gamma frequencies during word production: event related causality (ERC) analysis of human electrocorticography (ECoG).

    Science.gov (United States)

    Korzeniewska, Anna; Franaszczuk, Piotr J; Crainiceanu, Ciprian M; Kuś, Rafał; Crone, Nathan E

    2011-06-15

    Intracranial EEG studies in humans have shown that functional brain activation in a variety of functional-anatomic domains of human cortex is associated with an increase in power at a broad range of high gamma (>60 Hz) frequencies. Although these electrophysiological responses are highly specific for the location and timing of cortical processing and in animal recordings are highly correlated with increased population firing rates, there has been little direct empirical evidence for causal interactions between different recording sites at high gamma frequencies. Such causal interactions are hypothesized to occur during cognitive tasks that activate multiple brain regions. To determine whether such causal interactions occur at high gamma frequencies and to investigate their functional significance, we used event-related causality (ERC) analysis to estimate the dynamics, directionality, and magnitude of event-related causal interactions using subdural electrocorticography (ECoG) recorded during two word production tasks: picture naming and auditory word repetition. A clinical subject who had normal hearing but was skilled in American Signed Language (ASL) provided a unique opportunity to test our hypothesis with reference to a predictable pattern of causal interactions, i.e. that language cortex interacts with different areas of sensorimotor cortex during spoken vs. signed responses. Our ERC analyses confirmed this prediction. During word production with spoken responses, perisylvian language sites had prominent causal interactions with mouth/tongue areas of motor cortex, and when responses were gestured in sign language, the most prominent interactions involved hand and arm areas of motor cortex. Furthermore, we found that the sites from which the most numerous and prominent causal interactions originated, i.e. sites with a pattern of ERC "divergence", were also sites where high gamma power increases were most prominent and where electrocortical stimulation mapping

  8. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine whether particular sound types are more favorable for memory performance. Experiment 1 suggests that memory performance with vocalization sound types (particularly monkey vocalizations) is significantly better than with non-vocalization sound types, and that male monkeys outperform female monkeys overall. Experiment 2, controlling for the number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, to species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds is more effective for auditory memory. 2009 Elsevier B.V.

  9. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in the visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for the visual and auditory modalities. The analysis of the interaction effect showed that participants produced durations longer than the actual duration in the short-interval condition, but shorter durations in the long-interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, constrained by sensory modality, the valence of the emotional events, and target duration. Biases in time perception could be accounted for by a

  10. Auditory interfaces: The human perceiver

    Science.gov (United States)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  11. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi eTateno

    2013-11-01

    To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone bursts and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.
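
    The abstract does not specify the inner-hair-cell stage of the computer model. A common minimal textbook approximation of that stage is half-wave rectification followed by low-pass filtering of the mechanical input; the sketch below implements only that generic approximation under assumed parameters, not the authors' actual model.

```python
import numpy as np

fs = 20000  # assumed sampling rate (Hz); illustrative only

def ihc_model(pressure, fs, cutoff_hz=1000.0):
    """Minimal inner-hair-cell stage: half-wave rectification followed by a
    one-pole low-pass filter. A generic textbook sketch, not the authors'
    computer model."""
    rectified = np.maximum(pressure, 0.0)
    # one-pole low-pass smoothing coefficient for the given cutoff
    alpha = 1.0 / (1.0 + fs / (2 * np.pi * cutoff_hz))
    out = np.empty_like(rectified)
    acc = 0.0
    for i, x in enumerate(rectified):
        acc += alpha * (x - acc)
        out[i] = acc
    return out

# toy input: a 10-ms, 4-kHz pure-tone burst
t = np.arange(0, 0.01, 1 / fs)
tone = np.sin(2 * np.pi * 4000 * t)
receptor = ihc_model(tone, fs)
print(round(float(receptor.mean()), 3))
```

The rectification-plus-smoothing captures the envelope-following behavior of hair-cell receptor potentials at high frequencies, which is the property the model needs before the synapse and nerve-fiber stages.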

  12. Hearing with Two Ears: Evidence for Cortical Binaural Interaction during Auditory Processing.

    Science.gov (United States)

    Henkin, Yael; Yaar-Soffer, Yifat; Givon, Lihi; Hildesheimer, Minka

    2015-04-01

    Integration of information presented to the two ears has been shown to manifest in binaural interaction components (BICs) that occur along the ascending auditory pathways. In humans, BICs have been studied predominantly at the brainstem and thalamocortical levels; however, understanding of higher, cortically driven mechanisms of binaural hearing is limited. The aim was to explore whether BICs are evident in auditory event-related potentials (AERPs) during the advanced perceptual and postperceptual stages of cortical processing. The AERPs N1, P3, and a late negative component (LNC) were recorded from multiple electrode sites while participants performed an oddball discrimination task that consisted of natural speech syllables (/ka/ vs. /ta/) differing in place of articulation. Participants were instructed to respond to the target stimulus (/ta/) while performing the task in three listening conditions: monaural right, monaural left, and binaural. Participants were fifteen young adults (21-32 yr; 6 females) with normal hearing sensitivity. By subtracting the response to target stimuli elicited in the binaural condition from the sum of responses elicited in the monaural right and left conditions, the BIC waveform was derived, and the latencies and amplitudes of its components were measured. The maximal interaction was calculated by dividing BIC amplitude by the summed right and left response amplitudes. In addition, the latencies and amplitudes of the AERPs to target stimuli elicited in the monaural right, monaural left, and binaural listening conditions were measured and subjected to repeated-measures analysis of variance testing the effects of listening condition and laterality. Three consecutive BICs were identified at mean latencies of 129, 406, and 554 msec and were labeled N1-BIC, P3-BIC, and LNC-BIC, respectively. Maximal interaction increased significantly with the progression of auditory processing from perceptual to postperceptual stages and amounted to 51%, 55%, and 75% of the sum of
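
    The BIC derivation and the maximal-interaction ratio described above reduce to simple waveform arithmetic: subtract the binaural response from the sum of the monaural responses, then express the BIC amplitude as a fraction of the summed monaural amplitudes. A minimal sketch with synthetic waveforms (the sampling rate and signals are illustrative assumptions, not the study's data):

```python
import numpy as np

fs = 500  # assumed sampling rate (Hz); illustrative only

def bic_waveform(erp_right, erp_left, erp_binaural):
    """Binaural interaction component: (R + L) - B, per the derivation above."""
    return (erp_right + erp_left) - erp_binaural

def maximal_interaction(bic_amp, right_amp, left_amp):
    """BIC amplitude as a fraction of the summed monaural amplitudes."""
    return bic_amp / (right_amp + left_amp)

# toy example with synthetic "ERPs"
t = np.arange(0, 0.6, 1 / fs)
right = np.sin(2 * np.pi * 2 * t)
left = np.sin(2 * np.pi * 2 * t)
binaural = 1.5 * np.sin(2 * np.pi * 2 * t)  # less than R + L -> nonzero BIC
bic = bic_waveform(right, left, binaural)
print(round(maximal_interaction(bic.max(), right.max(), left.max()), 2))
```

With these toy amplitudes the binaural response is three-quarters of the monaural sum, so the interaction works out to one quarter of the summed amplitudes; the study's 51-75% figures reflect proportionally larger interactions at later processing stages.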

  13. Construction and updating of event models in auditory event processing.

    Science.gov (United States)

    Huff, Markus; Maurer, Annika E; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-02-01

    Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose that changes in the sensory information trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event boundaries. Evidence from reading-time studies (increased reading times with an increasing amount of change) suggests that the updating of event models is incremental. We present results from 5 experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating of recognition memory with normally sighted and blind participants. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in light of current event cognition theories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  14. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences.

    Science.gov (United States)

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  15. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences

    Directory of Open Access Journals (Sweden)

    Stephan eGetzmann

    2014-12-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  16. Witnessing traumatic events and post-traumatic stress disorder: Insights from an animal model.

    Science.gov (United States)

    Patki, Gaurav; Salvi, Ankita; Liu, Hesong; Salim, Samina

    2015-07-23

    It is becoming increasingly recognized that post-traumatic stress disorder (PTSD) can be acquired vicariously from witnessing traumatic events. Recently, we published an animal model called the "Trauma witness model" (TWM), which mimics PTSD-like symptoms in rats from witnessing daily traumatic events (social defeat of a cage mate) [14]. Our TWM does not result in any physical injury. This is a major procedural advantage over the typical intruder paradigm, in which it is difficult to delineate the inflammatory response of tissue injury from the response elicited by emotional distress. Using the TWM paradigm, we previously examined behavioral and cognitive effects in rats [14]; however, the long-term persistence of PTSD-like symptoms, the time course of these events (anxiety- and depression-like behaviors and cognitive deficits), and the contribution of olfactory and auditory stress vs. visual reinforcement were not examined. This study demonstrates that some of the features of PTSD-like symptoms in rats are reversible after a significant time lapse from the witnessing of traumatic events. We have also established that witnessing is critical to the PTSD-like phenotype, which cannot be acquired solely due to auditory or olfactory stresses. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. Perceiving temporal regularity in music: The role of auditory event-related potentials (ERPs) in probing beat perception

    NARCIS (Netherlands)

    Honing, H.; Bouwer, F.L.; Háden, G.P.; Merchant, H.; de Lafuente, V.

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). Next to a review of the recent literature on the perception of temporal regularity in

  18. Stability of auditory discrimination and novelty processing in physiological aging.

    Science.gov (United States)

    Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele

    2013-01-01

    Complex higher-order cognitive functions and their possible changes with aging are central objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN), and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine whether N100, MMN, and P3a parameters are stable in healthy aged subjects compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments, and their ERPs were obtained with auditory stimulation at two different interstimulus intervals during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  19. Contralateral white noise attenuates 40-Hz auditory steady-state fields but not N100m in auditory evoked fields.

    Science.gov (United States)

    Kawase, Tetsuaki; Maki, Atsuko; Kanno, Akitake; Nakasato, Nobukazu; Sato, Mika; Kobayashi, Toshimitsu

    2012-01-16

    The response characteristics of different auditory cortical responses under conventional central masking conditions were examined by comparing the effects of contralateral white noise on the cortical component of 40-Hz auditory steady-state fields (ASSFs) and the N100m component of auditory evoked fields (AEFs) to tone bursts, using a helmet-shaped magnetoencephalography system in 8 healthy volunteers (7 males, mean age 32.6 years). The ASSFs were elicited by monaural 1000 Hz amplitude-modulated tones at 80 dB SPL, with the amplitude modulated at 39 Hz. The AEFs were elicited by monaural 1000 Hz tone bursts of 60 ms duration (rise and fall times of 10 ms, plateau time of 40 ms) at 80 dB SPL. The results indicated that continuous white noise at 70 dB SPL presented to the contralateral ear did not suppress the N100m response in either hemisphere, but significantly reduced the amplitude of the 40-Hz ASSF in both hemispheres, with an asymmetry such that suppression of the 40-Hz ASSF was greater in the right hemisphere. The different effects of contralateral white noise on these two responses may reflect different functional auditory processes in the cortices. Copyright © 2011 Elsevier Inc. All rights reserved.
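
    A 40-Hz steady-state response amplitude is commonly quantified as the spectral magnitude at the modulation frequency of the averaged epoch. The sketch below shows that generic estimate on synthetic data; the sampling rate, epoch length, and waveform are assumptions, and the authors' actual MEG analysis pipeline is not described in the abstract.

```python
import numpy as np

fs = 1000     # assumed sampling rate (Hz)
f_mod = 39    # modulation frequency from the study (39 Hz)
dur = 1.0     # assumed epoch length (s)

t = np.arange(0, dur, 1 / fs)
# synthetic averaged response: a 39-Hz steady-state component plus noise
rng = np.random.default_rng(0)
epoch = 0.5 * np.sin(2 * np.pi * f_mod * t) + 0.05 * rng.standard_normal(t.size)

# single-sided amplitude spectrum; with a 1-s epoch, bin k corresponds to k Hz
spec = np.abs(np.fft.rfft(epoch)) * 2 / t.size
assr_amplitude = spec[int(f_mod * dur)]
print(round(assr_amplitude, 2))
```

Contralateral masking effects like those reported above would then appear as a reduction of this 39-Hz spectral amplitude between the quiet and noise conditions.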

  20. Neurophysiological evidence for context-dependent encoding of sensory input in human auditory cortex.

    Science.gov (United States)

    Sussman, Elyse; Steinschneider, Mitchell

    2006-02-23

    Attention biases the way in which sound information is stored in auditory memory. Little is known, however, about the contribution of stimulus-driven processes in forming and storing coherent sound events. An electrophysiological index of cortical auditory change detection (mismatch negativity [MMN]) was used to assess whether sensory memory representations could be biased toward one organization over another (one or two auditory streams) without attentional control. Results revealed that sound representations held in sensory memory biased the organization of subsequent auditory input. The results demonstrate that context-dependent sound representations modulate stimulus-dependent neural encoding at early stages of auditory cortical processing.

  1. Human-assisted sound event recognition for home service robots.

    Science.gov (United States)

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot to serve the elderly living alone at home. The framework was developed to realize various auditory perception capabilities while enabling a remote human operator to be involved in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated its auditory perception capabilities and human-robot collaboration in sound event recognition.
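
    The abstract states that the robot estimates sound source position but does not say how. A standard approach for a two-microphone array is time-difference-of-arrival (TDOA) from the cross-correlation peak; the sketch below is that generic method under assumed geometry and sampling rate, not necessarily the method used in the paper.

```python
import numpy as np

fs = 16000          # assumed sampling rate (Hz)
mic_distance = 0.2  # assumed microphone spacing (m)
c = 343.0           # speed of sound (m/s)

def tdoa_bearing(sig_a, sig_b):
    """Estimate the arrival angle from the lag maximizing the cross-correlation.
    Positive lag means sig_b arrives later than sig_a. Generic TDOA sketch,
    not the paper's actual localization algorithm."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)   # delay of b relative to a, samples
    delay = lag / fs
    sin_theta = np.clip(delay * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# toy test: microphone b receives the source 5 samples later than microphone a
rng = np.random.default_rng(1)
src = rng.standard_normal(1024)
a = src
b = np.concatenate([np.zeros(5), src[:-5]])
print(round(tdoa_bearing(a, b), 1))
```

A real system would typically use a generalized cross-correlation (e.g. GCC-PHAT) for robustness to reverberation, but the lag-to-angle geometry is the same.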

  2. Auditory mismatch negativity in schizophrenia: topographic evaluation with a high-density recording montage.

    Science.gov (United States)

    Hirayasu, Y; Potts, G F; O'Donnell, B F; Kwon, J S; Arakaki, H; Akdag, S J; Levitt, J J; Shenton, M E; McCarley, R W

    1998-09-01

    The mismatch negativity, a negative component in the auditory event-related potential, is thought to index automatic processes involved in sensory or echoic memory. The authors' goal in this study was to examine the topography of auditory mismatch negativity in schizophrenia with a high-density, 64-channel recording montage. Mismatch negativity topography was evaluated in 23 right-handed male patients with schizophrenia who were receiving medication and in 23 nonschizophrenic comparison subjects who were matched in age, handedness, and parental socioeconomic status. The Positive and Negative Syndrome Scale was used to measure psychiatric symptoms. Mismatch negativity amplitude was reduced in the patients with schizophrenia. They showed a greater left-less-than-right asymmetry than comparison subjects at homotopic electrode pairs near the parietotemporal junction. There were correlations between mismatch negativity amplitude and hallucinations at left frontal electrodes and between mismatch negativity amplitude and passive-apathetic social withdrawal at left and right frontal electrodes. Mismatch negativity was reduced in schizophrenia, especially in the left hemisphere. This finding is consistent with abnormalities of primary or adjacent auditory cortex involved in auditory sensory or echoic memory.
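
    Although the abstract does not spell it out, the mismatch negativity is conventionally derived as the deviant-minus-standard difference waveform, with its amplitude taken at the most negative point in a post-stimulus window. A minimal sketch with synthetic ERPs (the sampling rate, window, and waveforms are all assumptions):

```python
import numpy as np

fs = 250                           # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.4, 1 / fs)   # epoch: -100 to 400 ms around stimulus onset

def mmn_difference(deviant_erp, standard_erp):
    """Deviant-minus-standard difference waveform; the MMN is its negative
    deflection roughly 100-250 ms after stimulus onset."""
    return deviant_erp - standard_erp

def mmn_peak(diff, t, window=(0.1, 0.25)):
    """Amplitude and latency of the most negative point in the window."""
    mask = (t >= window[0]) & (t <= window[1])
    idx = np.argmin(diff[mask])
    return diff[mask][idx], t[mask][idx]

# synthetic ERPs: the deviant carries an extra negativity near 150 ms
standard = np.zeros_like(t)
deviant = -2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))
amp, lat = mmn_peak(mmn_difference(deviant, standard), t)
print(round(amp, 1))
```

The amplitude reductions and hemispheric asymmetries reported above would correspond to group differences in this peak amplitude across electrode sites.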

  3. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory percept known as the "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. The resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears, separated into the higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed has not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced the scale illusion (ILL) and those that mimicked the percept of the scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation produced by the illusion process. The activation differences between the two stimuli in the superior temporal auditory cortex, measured at various tempi of tone presentation, were not explained by adaptation. Instead, excess activation for the ILL stimulus relative to the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Using Complex Auditory-Visual Samples to Produce Emergent Relations in Children with Autism

    Science.gov (United States)

    Groskreutz, Nicole C.; Karsina, Allen; Miguel, Caio F.; Groskreutz, Mark P.

    2010-01-01

    Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually…

  5. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults who showed a strong visual preference to unfamiliar stimuli only. The similar degree of auditory responses in children with hearing loss to those from children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) Describe the pattern of modality preferences reported in young children without hearing loss; (2) Recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) Understand the role of familiarity in modality preferences in children with and without hearing loss.

  6. Mode of recording and modulation frequency effects of auditory steady state response thresholds

    OpenAIRE

    Jalaei, Bahram; Shaabani, Moslem; Zakaria, Mohd Normani

    2017-01-01

Abstract Introduction: The performance of the auditory steady state response (ASSR) in threshold testing when recorded ipsilaterally and contralaterally, as well as at low and high modulation frequencies (MFs), has not been systematically studied. Objective: To verify the influences of mode of recording (ipsilateral vs. contralateral) and modulation frequency (40 Hz vs. 90 Hz) on ASSR thresholds. Methods: Fifteen female and 14 male subjects (aged 18–30 years) with normal hearing bilaterally were ...
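The 40 Hz vs. 90 Hz comparison in this record concerns the stimulus's modulation rate. A minimal sketch of the standard ASSR stimulus, a sinusoidally amplitude-modulated tone, is shown below (carrier frequency, duration, and sampling rate are illustrative choices, not values from the study):

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, dur_s=1.0, fs=44100, depth=1.0):
    """Sinusoidally amplitude-modulated tone, the standard ASSR stimulus.
    depth=1.0 gives 100% modulation; output is normalized to peak <= 1."""
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t) / (1.0 + depth)

# The two modulation rates compared in the study, on an illustrative 1 kHz carrier:
# 40 Hz (cortically dominated ASSR) and 90 Hz (brainstem-dominated ASSR).
stim_40 = am_tone(1000, 40)
stim_90 = am_tone(1000, 90)
```

The key design point is that only the envelope rate changes between conditions; the carrier, and hence the cochlear place being stimulated, stays the same.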

  7. Local field potential correlates of auditory working memory in primate dorsal temporal pole.

    Science.gov (United States)

    Bigelow, James; Ng, Chi-Wing; Poremba, Amy

    2016-06-01

Dorsal temporal pole (dTP) is a cortical region at the rostral end of the superior temporal gyrus that forms part of the ventral auditory object processing pathway. Anatomical connections with frontal and medial temporal areas, as well as a recent single-unit recording study, suggest this area may be an important part of the network underlying auditory working memory (WM). To further elucidate the role of dTP in auditory WM, local field potentials (LFPs) were recorded from the left dTP region of two rhesus macaques during an auditory delayed matching-to-sample (DMS) task. Sample and test sounds were separated by a 5-s retention interval, and a behavioral response was required only if the sounds were identical (match trials). Sensitivity of auditory evoked responses in dTP to behavioral significance and context was further tested by passively presenting the sounds used as auditory WM memoranda both before and after the DMS task. Average evoked potentials (AEPs) for all cue types and phases of the experiment comprised two small-amplitude early onset components (N20, P40), followed by two broad, large-amplitude components occupying the remainder of the stimulus period (N120, P300), after which a final set of components were observed following stimulus offset (N80OFF, P170OFF). During the DMS task, the peak amplitude and/or latency of several of these components depended on whether the sound was presented as the sample or test, and whether the test matched the sample. Significant differences were also observed among the DMS task and passive exposure conditions. Comparing memory-related effects in the LFP signal with those obtained in the spiking data raises the possibility that some memory-related activity in dTP may be locally produced and actively generated. The results highlight the involvement of dTP in auditory stimulus identification and recognition and its sensitivity to the behavioral significance of sounds in different contexts. This article is part of a Special…

  8. The auditory comprehension changes over time after sport-related concussion can indicate multisensory processing dysfunctions.

    Science.gov (United States)

    Białuńska, Anita; Salvatore, Anthony P

    2017-12-01

Although scientific findings and treatment approaches for concussion have changed in recent years, there continue to be challenges in understanding the nature of post-concussion behavior. There is a growing body of evidence that some deficits can be related to impaired auditory processing. The aim was to assess auditory comprehension changes over time following sport-related concussion (SRC) in young athletes. A prospective, repeated-measures mixed design was used. A sample of concussed athletes (n = 137) and a control group of age-matched, non-concussed athletes (n = 143) were administered Subtest VIII of the Computerized-Revised Token Test (C-RTT). The 88 concussed athletes selected for final analysis (no previous history of brain injury, neurological or psychiatric problems, or auditory deficits) were evaluated after injury during three sessions (PC1, PC2, and PC3); controls were tested once. Between- and within-group comparisons using RMANOVA were performed on the C-RTT Efficiency Score (ES). The ES of the SRC athletes improved significantly over consecutive testing sessions (F = 14.7). These findings suggest that auditory comprehension, multisensory integration, and/or motor execution can be compromised after a concussion.

  9. Generation of human auditory steady-state responses (SSRs). II: Addition of responses to individual stimuli.

    Science.gov (United States)

    Santarelli, R; Maurizi, M; Conti, G; Ottaviani, F; Paludetti, G; Pettorossi, V E

    1995-03-01

    In order to investigate the generation of the 40 Hz steady-state response (SSR), auditory potentials evoked by clicks were recorded in 16 healthy subjects in two stimulating conditions. Firstly, repetition rates of 7.9 and 40 Hz were used to obtain individual middle latency responses (MLRs) and 40 Hz-SSRs, respectively. In the second condition, eight click trains were presented at a 40 Hz repetition rate and an inter-train interval of 126 ms. We extracted from the whole train response: (1) the response-segment taking place after the last click of the train (last click response, LCR), (2) a modified LCR (mLCR) obtained by clearing the LCR from the amplitude enhancement due to the overlapping of the responses to the clicks preceding the last within the stimulus train. In comparison to MLRs, the most relevant feature of the evoked activity following the last click of the train (LCRs, mLCRs) was the appearance in the 50-110 ms latency range of one (in 11 subjects) or two (in 2 subjects) additional positive-negative deflections having the same periodicity as that of MLR waves. The grand average (GA) of the 40 Hz-SSRs was compared with three predictions synthesized by superimposing: (1) the GA of MLRs, (2) the GA of LCRs, (3) the GA of mLCRs. Both the MLR and mLCR predictions reproduced the recorded signal in amplitude while the LCR prediction amplitude resulted almost twice that of the 40 Hz-SSR. With regard to the phase, the MLR, LCR and mLCR closely predicted the recorded signal. Our findings confirm the effectiveness of the linear addition mechanism in the generation of the 40 Hz-SSR. However the responses to individual stimuli within the 40 Hz-SSR differ from MLRs because of additional periodic activity. These results suggest that phenomena related to the resonant frequency of the activated system may play a role in the mechanisms which interact to generate the 40 Hz-SSR.
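The additive model tested in this record, in which the 40 Hz SSR is predicted by linearly superimposing time-shifted copies of the response to a single stimulus, can be sketched as follows (the MLR-like waveform and sampling rate are illustrative stand-ins, not the study's data):

```python
import numpy as np

fs = 1000  # Hz, illustrative sampling rate

def superpose(single_response, rate_hz, n_stimuli, fs):
    """Predict a steady-state response under the linear-addition hypothesis:
    add copies of the single-stimulus response, each shifted by one
    stimulus period (25 ms at a 40 Hz repetition rate)."""
    period = int(round(fs / rate_hz))
    out = np.zeros(period * n_stimuli + len(single_response))
    for k in range(n_stimuli):
        out[k * period : k * period + len(single_response)] += single_response
    return out

# Toy MLR-like waveform: a damped 40 Hz oscillation (shape purely illustrative).
t = np.arange(0, 0.1, 1 / fs)
mlr = np.exp(-t / 0.03) * np.sin(2 * np.pi * 40 * t)
predicted_ssr = superpose(mlr, rate_hz=40, n_stimuli=8, fs=fs)
```

Comparing such a synthesized prediction against the recorded 40 Hz SSR, in amplitude and phase, is the logic of the test described above; the study's finding is that the prediction fits only once the overlap of responses within the train is accounted for.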

  10. Functional and structural aspects of tinnitus-related enhancement and suppression of auditory cortex activity.

    Science.gov (United States)

    Diesch, Eugen; Andermann, Martin; Flor, Herta; Rupp, Andre

    2010-05-01

    The steady-state auditory evoked magnetic field was recorded in tinnitus patients and controls, both either musicians or non-musicians, all of them with high-frequency hearing loss. Stimuli were AM-tones with two modulation frequencies and three carrier frequencies matching the "audiometric edge", i.e. the frequency above which hearing loss increases more rapidly, the tinnitus frequency or the frequency 1 1/2 octaves above the audiometric edge in controls, and a frequency 1 1/2 octaves below the audiometric edge. Stimuli equated in carrier frequency, but differing in modulation frequency, were simultaneously presented to the two ears. The modulation frequency-specific components of the dual steady-state response were recovered by bandpass filtering. In both hemispheres, the source amplitude of the response was larger for contralateral than ipsilateral input. In non-musicians with tinnitus, this laterality effect was enhanced in the hemisphere contralateral and reduced in the hemisphere ipsilateral to the tinnitus ear, especially for the tinnitus frequency. The hemisphere-by-input laterality dominance effect was smaller in musicians than in non-musicians. In both patient groups, source amplitude change over time, i.e. amplitude slope, was increasing with tonal frequency for contralateral input and decreasing for ipsilateral input. However, slope was smaller for musicians than non-musicians. In patients, source amplitude was negatively correlated with the MRI-determined volume of the medial partition of Heschl's gyrus. Tinnitus patients show an altered excitatory-inhibitory balance reflecting the downregulation of inhibition and resulting in a steeper dominance hierarchy among simultaneous processes in auditory cortex. Direction and extent of this alteration are modulated by musicality and auditory cortex volume. 2010 Elsevier Inc. All rights reserved.
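The record states that the two modulation-frequency-specific components of the dual steady-state response were recovered by bandpass filtering. A simple FFT-mask bandpass illustrates the idea (the tag frequencies and signal below are illustrative; the study's actual filter design is not specified):

```python
import numpy as np

def bandpass_fft(signal, fs, f_lo, f_hi):
    """Zero-phase band-pass via FFT masking: keep only bins in [f_lo, f_hi].
    A simple stand-in for the band-pass filtering used to separate two
    simultaneously presented modulation-frequency-tagged responses."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0
    return np.fft.irfft(spec, n=len(signal))

# Two steady-state components tagged at 39 and 43 Hz (illustrative values),
# mixed in one recording and separated by band-pass filtering.
fs, t = 500, np.arange(0, 2, 1 / 500)
mixed = np.sin(2 * np.pi * 39 * t) + np.sin(2 * np.pi * 43 * t)
comp_39 = bandpass_fft(mixed, fs, 37, 41)
comp_43 = bandpass_fft(mixed, fs, 41, 45)
```

Because the two modulation frequencies are distinct, each ear's response can be read out from the same sensor signal, which is what makes the dual-stimulation design in this record possible.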

  11. The Impacts of Language Background and Language-Related Disorders in Auditory Processing Assessment

    Science.gov (United States)

    Loo, Jenny Hooi Yin; Bamiou, Doris-Eva; Rosen, Stuart

    2013-01-01

    Purpose: To examine the impact of language background and language-related disorders (LRDs--dyslexia and/or language impairment) on performance in English speech and nonspeech tests of auditory processing (AP) commonly used in the clinic. Method: A clinical database concerning 133 multilingual children (mostly with English as an additional…

  12. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system with music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, that is represented first in the auditory brain.

  13. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    In the present study, a novel multichannel loudspeaker-based virtual auditory environment (VAE) is introduced. The VAE aims at providing a versatile research environment for investigating the auditory signal processing in real environments, i.e., considering multiple sound sources and room...... reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different order Ambisonic techniques to calculate multichannel room impulse responses (mRIR). Auralization is then obtained by the convolution...... the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29 loudspeaker setup using both objective and subjective measurement techniques....
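The auralization step described above, convolving a source signal with multichannel room impulse responses (mRIRs), can be sketched as follows (signal contents and lengths are random stand-ins; only the 29-loudspeaker channel count comes from the text):

```python
import numpy as np

def auralize(dry_source, mrir):
    """Convolve an anechoic ('dry') source signal with a multichannel room
    impulse response, one RIR per loudspeaker channel, yielding the feed
    for each loudspeaker. mrir has shape (n_channels, rir_len)."""
    return np.stack([np.convolve(dry_source, rir) for rir in mrir])

# Illustrative sizes: a 29-channel setup as in the evaluated VAE.
rng = np.random.default_rng(0)
dry = rng.standard_normal(1000)          # stand-in for an anechoic recording
mrir = rng.standard_normal((29, 200)) * 0.01
speaker_feeds = auralize(dry, mrir)
```

In practice the per-channel convolutions would be run as overlap-add FFT convolutions for efficiency, but the linear operation is the same.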

  14. Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2015-03-01

This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of an auditory task, as well as the association between elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. Neural biomarkers for dyslexia, ADHD and ADD in the auditory cortex of children

    Directory of Open Access Journals (Sweden)

    Bettina Serrallach

    2016-07-01

Full Text Available Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography, and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore provide, for the first time, differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method allowed not only a clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.

  16. The Physiological Basis and Clinical Use of the Binaural Interaction Component of the Auditory Brainstem Response

    Science.gov (United States)

    Klump, Georg M.; Tollin, Daniel J.

    2016-01-01

The auditory brainstem response (ABR) is a sound-evoked, non-invasively measured electrical potential representing the sum of neuronal activity in the auditory brainstem and midbrain. ABR peak amplitudes and latencies are widely used in human and animal auditory research and for clinical screening. The binaural interaction component (BIC) of the ABR is the difference between the sum of the monaural ABRs and the ABR obtained with binaural stimulation. The BIC comprises a series of distinct waves, the largest of which (DN1) has been used for evaluating binaural hearing in both normal-hearing and hearing-impaired listeners. Based on data from animal and human studies, we discuss the possible anatomical and physiological bases of the BIC (DN1 in particular). The effects of electrode placement and stimulus characteristics on the binaurally evoked ABR are evaluated. We review how inter-aural time and intensity differences affect the BIC and, analyzing these dependencies, draw conclusions about the mechanism underlying the generation of the BIC. Finally, the utility of the BIC for clinical diagnosis is summarized. PMID:27232077
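The BIC defined above is simple arithmetic on three averaged evoked responses. A minimal sketch (toy waveforms, not real ABR data):

```python
import numpy as np

def binaural_interaction_component(abr_left, abr_right, abr_binaural):
    """BIC = binaural ABR minus the sum of the two monaural ABRs.
    Under purely linear superposition the BIC is zero everywhere; the DN1
    wave is a systematic deviation from that sum."""
    return abr_binaural - (abr_left + abr_right)

# Toy example: if the binaural response were exactly the monaural sum,
# the BIC would vanish.
left = np.array([0.0, 0.2, 0.5, 0.2, 0.0])
right = np.array([0.0, 0.1, 0.4, 0.1, 0.0])
bic = binaural_interaction_component(left, right, left + right)
```

Note the sign convention varies across papers (binaural minus sum, or sum minus binaural); the DN1 label reflects that the largest wave is negative under the convention used here.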

  17. Is the effect of tinnitus on auditory steady-state response amplitude mediated by attention?

    Directory of Open Access Journals (Sweden)

    Eugen eDiesch

    2012-05-01

Full Text Available Objectives: The amplitude of the auditory steady-state response (ASSR) is enhanced in tinnitus. As ASSR amplitude is also enhanced by attention, the effect of tinnitus on ASSR amplitude could be interpreted as an effect of attention mediated by tinnitus. Since attention effects on the N1 are significantly larger than those on the ASSR, if the effect of tinnitus on ASSR amplitude were due to attention, there should be similar amplitude enhancement effects in tinnitus for the N1 component of the auditory evoked response. Methods: MEG recordings of auditory evoked responses which were previously examined for the ASSR (Diesch et al., 2010) were analysed with respect to the N1m component. Like the ASSR previously, the N1m was analysed in the source domain (source space projection). Stimuli were amplitude-modulated tones with one of three carrier frequencies: the tinnitus frequency (or a surrogate frequency 1½ octaves above the audiometric edge frequency in controls), the audiometric edge frequency, and a frequency below the audiometric edge. Results: In the earlier ASSR study (Diesch et al., 2010), the ASSR amplitude in tinnitus patients, but not in controls, was significantly larger in the (surrogate) tinnitus condition than in the edge condition. In the present study, both tinnitus patients and healthy controls show an N1m-amplitude profile identical to that of ASSR amplitudes in healthy controls. N1m amplitudes elicited by tonal frequencies located at the audiometric edge and at the (surrogate) tinnitus frequency are smaller than N1m amplitudes elicited by sub-edge tones and do not differ from each other. Conclusions: There is no N1-amplitude enhancement effect in tinnitus. The enhancement effect of tinnitus on ASSR amplitude cannot be accounted for in terms of attention induced by tinnitus.

  18. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimated how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase-locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory…
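The core measurement in frequency tagging, power at each stream's tag frequency, can be sketched as follows (tag frequencies, amplitudes, and the simulated response are illustrative, not values from the study):

```python
import numpy as np

def tagged_power(signal, fs, tag_hz):
    """Power at a tagging frequency, read from the FFT bin nearest tag_hz.
    This per-frequency power is the quantity compared across attention
    conditions in frequency-tagging designs."""
    spec = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spec[np.argmin(np.abs(freqs - tag_hz))]

# Two competing streams tagged at different modulation rates; attending a
# stream should raise power at its tag frequency. Here the "attended"
# stream is simply simulated with a larger response amplitude.
fs, t = 250, np.arange(0, 4, 1 / 250)
response = 1.0 * np.sin(2 * np.pi * 35 * t) + 0.4 * np.sin(2 * np.pi * 45 * t)
p_attended = tagged_power(response, fs, 35)
p_ignored = tagged_power(response, fs, 45)
```

Choosing a recording length that is an integer number of cycles of both tag frequencies, as here (4 s of 35 Hz and 45 Hz), puts each tag exactly on an FFT bin and avoids spectral leakage between the two streams.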

  19. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

Full Text Available Abstract Background: In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsilateral and contralateral masking by means of magnetoencephalography (MEG). Results: We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contralateral masking. Conclusion: The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in the peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contralateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  20. Psychometric intelligence and P3 of the event-related potentials studied with a 3-stimulus auditory oddball task

    NARCIS (Netherlands)

    Wronka, E.A.; Kaiser, J.; Coenen, A.M.L.

    2013-01-01

The relationship between psychometric intelligence, measured with Raven's Advanced Progressive Matrices (RAPM), and event-related potentials (ERPs) was examined using a 3-stimulus oddball task. Subjects who had scored higher on the RAPM exhibited larger amplitude of the P3a component. Additional analysis using the…