WorldWideScience

Sample records for auditory perception event-related

  1. From sensation to percept: the neural signature of auditory event-related potentials.

    Science.gov (United States)

    Joos, Kathleen; Gilles, Annick; Van de Heyning, Paul; De Ridder, Dirk; Vanneste, Sven

    2014-05-01

    An external auditory stimulus induces an auditory sensation that may lead to a conscious auditory perception. Although the sensory aspect is well understood, it remains unclear how an auditory stimulus results in an individual's conscious percept. To unravel the uncertainties concerning the neural correlates of a conscious auditory percept, event-related potentials may serve as a useful tool. In the current review we mainly wanted to shed light on the perceptual aspects of auditory processing and therefore focused on the auditory late-latency responses. Moreover, there is increasing evidence that perception is an active process in which the brain searches for the information it expects to be present, suggesting that auditory perception requires both bottom-up (i.e., sensory) and top-down (i.e., prediction-driven) processing. Therefore, the auditory evoked potentials will be interpreted in the context of the Bayesian brain model, in which the brain predicts which information it expects and when it will occur. The internal representation of the auditory environment is verified by sensory samples of the environment (P50, N100). When incoming information violates this expectation, it induces the emission of a prediction-error signal (mismatch negativity), activating higher-order neural networks and prompting an update of the prior internal representation of the environment (P300). Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Evaluation of auditory perception development in neonates by event-related potential technique.

    Science.gov (United States)

    Zhang, Qinfen; Li, Hongxin; Zheng, Aibin; Dong, Xuan; Tu, Wenjuan

    2017-08-01

    To investigate auditory perception development in neonates and to relate it to days after birth, left and right hemisphere development, and sex, using the event-related potential (ERP) technique. Sixty full-term neonates (32 males and 28 females) aged 2-28 days were included in this study. An auditory oddball paradigm was used to elicit ERPs. N2 wave latencies and areas were recorded at different days after birth to study the relationship between auditory perception and age, and to compare left and right hemispheres, and males and females. Average ERP waveforms in neonates progressed from relatively irregular, flat-bottomed troughs to relatively regular, steep-sided ripples. A good linear relationship between ERPs and days after birth in neonates was observed: as days after birth increased, N2 latencies gradually and significantly shortened, and N2 areas gradually and significantly increased (both statistically significant). N2 areas were significantly greater, and N2 latencies in the central region were significantly shorter, in the left hemisphere compared with the right, indicative of left-hemisphere dominance (both statistically significant), with N2 measures also pointing to earlier development in females. In the days following birth, the auditory perception ability of neonates gradually increases. This occurs predominantly in the left hemisphere, with auditory perception ability appearing to develop earlier in female neonates than in males. ERP can be used as an objective index to evaluate auditory perception development in neonates. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  3. Perceiving temporal regularity in music: The role of auditory event-related potentials (ERPs) in probing beat perception

    NARCIS (Netherlands)

    Honing, H.; Bouwer, F.L.; Háden, G.P.; Merchant, H.; de Lafuente, V.

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). In addition to a review of the recent literature on the perception of temporal regularity in

  4. Perceiving temporal regularity in music: the role of auditory event-related potentials (ERPs) in probing beat perception.

    Science.gov (United States)

    Honing, Henkjan; Bouwer, Fleur L; Háden, Gábor P

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). In addition to a review of the recent literature on the perception of temporal regularity in music, we will discuss to what extent ERPs, and especially the component called mismatch negativity (MMN), can be instrumental in probing beat perception. We conclude with a discussion of the pitfalls and prospects of using ERPs to probe the perception of a regular beat, in which we present possible constraints on stimulus design and discuss future perspectives.

  5. Auditory perception and attention as reflected by the brain event-related potentials in children with Asperger syndrome.

    Science.gov (United States)

    Lepistö, T; Silokallio, S; Nieminen-von Wendt, T; Alku, P; Näätänen, R; Kujala, T

    2006-10-01

    Language development is delayed and deviant in individuals with autism, but proceeds quite normally in those with Asperger syndrome (AS). We investigated auditory-discrimination and orienting in children with AS using an event-related potential (ERP) paradigm that was previously applied to children with autism. ERPs were measured to pitch, duration, and phonetic changes in vowels and to corresponding changes in non-speech sounds. Active sound discrimination was evaluated with a sound-identification task. The mismatch negativity (MMN), indexing sound-discrimination accuracy, showed right-hemisphere dominance in the AS group, but not in the controls. Furthermore, the children with AS had diminished MMN-amplitudes and decreased hit rates for duration changes. In contrast, their MMN to speech pitch changes was parietally enhanced. The P3a, reflecting involuntary orienting to changes, was diminished in the children with AS for speech pitch and phoneme changes, but not for the corresponding non-speech changes. The children with AS differ from controls with respect to their sound-discrimination and orienting abilities. The results of the children with AS are relatively similar to those earlier obtained from children with autism using the same paradigm, although these clinical groups differ markedly in their language development.

  6. Auditory event-related potentials associated with perceptual reversals of bistable pitch motion.

    Science.gov (United States)

    Davidson, Gray D; Pitts, Michael A

    2014-01-01

    Previous event-related potential (ERP) experiments have consistently identified two components associated with perceptual transitions of bistable visual stimuli, the "reversal negativity" (RN) and the "late positive complex" (LPC). The RN (~200 ms post-stimulus, bilateral occipital-parietal distribution) is thought to reflect transitions between neural representations that form the moment-to-moment contents of conscious perception, while the LPC (~400 ms, central-parietal) is considered an index of post-perceptual processing related to accessing and reporting one's percept. To explore the generality of these components across sensory modalities, the present experiment utilized a novel bistable auditory stimulus. Pairs of complex tones with ambiguous pitch relationships were presented sequentially while subjects reported whether they perceived the tone pairs as ascending or descending in pitch. ERPs elicited by the tones were compared according to whether perceived pitch motion changed direction or remained the same across successive trials. An auditory reversal negativity (aRN) component was evident at ~170 ms post-stimulus over bilateral fronto-central scalp locations. An auditory LPC component (aLPC) was evident at subsequent latencies (~350 ms, fronto-central distribution). These two components may be auditory analogs of the visual RN and LPC, suggesting functionally equivalent but anatomically distinct processes in auditory vs. visual bistable perception.

  7. Classification of passive auditory event-related potentials using discriminant analysis and self-organizing feature maps.

    Science.gov (United States)

    Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M

    2000-01-01

    Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P1, N1, P2, and N2. Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on these values. SOFM yielded better classification results than the DA methods. Subsequently, measurements from another 37 subjects that were unknown to the trained SOFM were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
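    To illustrate the kind of self-organizing feature map classification described above, the sketch below trains a tiny SOM on simulated ERP amplitude vectors and then labels each map node by the majority class of the training samples it wins. It is a minimal, hypothetical Python/NumPy example: the grid size, learning schedule, feature values, and group labels are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical training data: 18 ERP amplitude parameters per child,
# with toy group labels (0 = control, 1 = marked, 2 = severe deficits)
n_train, n_feat = 46, 18
X = rng.standard_normal((n_train, n_feat))
y = rng.integers(0, 3, n_train)

class TinySOM:
    """Minimal self-organizing feature map with a shrinking Gaussian
    neighbourhood; nodes are labelled afterwards by the majority class
    of the training samples they win."""

    def __init__(self, rows=4, cols=4, n_feat=18, seed=0):
        r = np.random.default_rng(seed)
        self.w = r.standard_normal((rows, cols, n_feat))
        self.grid = np.stack(
            np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1
        )

    def winner(self, x):
        d = np.linalg.norm(self.w - x, axis=-1)
        return np.unravel_index(d.argmin(), d.shape)

    def fit(self, X, n_iter=2000, lr0=0.5, sigma0=2.0):
        for it in range(n_iter):
            x = X[rng.integers(len(X))]
            frac = it / n_iter
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            win = np.array(self.winner(x))
            dist2 = ((self.grid - win) ** 2).sum(axis=-1)
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            self.w += lr * h * (x - self.w)

    def label_nodes(self, X, y, n_classes):
        votes = np.zeros((*self.w.shape[:2], n_classes))
        for x, c in zip(X, y):
            votes[self.winner(x)][c] += 1
        self.node_label = votes.argmax(axis=-1)

    def predict(self, X):
        return np.array([self.node_label[self.winner(x)] for x in X])

som = TinySOM(n_feat=n_feat)
som.fit(X)
som.label_nodes(X, y, n_classes=3)
print("agreement with training labels:", (som.predict(X) == y).mean())
```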

  8. Children's Performance on Pseudoword Repetition Depends on Auditory Trace Quality: Evidence from Event-Related Potentials.

    Science.gov (United States)

    Ceponiene, Rita; Service, Elisabet; Kurjenluoma, Sanna; Cheour, Marie; Naatanen, Risto

    1999-01-01

    Compared the mismatch-negativity (MMN) component of auditory event-related brain potentials to explore the relationship between phonological short-term memory and auditory-sensory processing in 7- to 9-year olds scoring the highest and lowest on a pseudoword repetition test. Found that high and low repeaters differed in MMN amplitude to speech…

  9. Event-related potentials to visual, auditory, and bimodal (combined auditory-visual) stimuli.

    Science.gov (United States)

    Isoğlu-Alkaç, Ummühan; Kedzior, Karina; Keskindemirci, Gonca; Ermutlu, Numan; Karamursel, Sacit

    2007-02-01

    The purpose of this study was to investigate the response properties of event-related potentials to unimodal and bimodal stimulation. The amplitudes of N1 and P2 were larger during bimodal evoked potentials (BEPs) than auditory evoked potentials (AEPs) at the anterior sites, and the amplitudes of P1 were larger during BEPs than VEPs, especially at the parieto-occipital locations. Responses to bimodal stimulation had longer latencies than responses to unimodal stimulation. The N1 and P2 components were larger in amplitude and longer in latency during the bimodal paradigm and occurred predominantly at the anterior sites. Therefore, the current bimodal paradigm can be used to investigate the involvement and location of specific neural generators that contribute to higher processing of sensory information. Moreover, this paradigm may be a useful tool to investigate the level of sensory dysfunction in clinical samples.

  10. Auditory event-related responses to diphthongs in different attention conditions

    DEFF Research Database (Denmark)

    Morris, David Jackson; Steinmetzger, Kurt; Tøndering, John

    2016-01-01

    The modulation of auditory event-related potentials (ERP) by attention generally results in larger amplitudes when stimuli are attended. We measured the P1-N1-P2 acoustic change complex elicited with synthetic overt (second formant, F2 = 1000 Hz) and subtle (F2 = 100 Hz) diphthongs, while subjects … (i) attended to the auditory stimuli, (ii) ignored the auditory stimuli and watched a film, and (iii) diverted their attention to a visual discrimination task. Responses elicited by diphthongs where F2 values rose and fell were found to be different and this precluded their combined analysis. … Multivariate analysis of ERP components from the rising F2 changes showed main effects of attention on P2 amplitude and latency, and N1-P2 amplitude. P2 amplitude decreased by 40% between the attend and ignore conditions, and by 60% between the attend and divert conditions. The effect of diphthong magnitude …

  11. A comparative study of event-related coupling patterns during an auditory oddball task in schizophrenia

    Science.gov (United States)

    Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto

    2015-02-01

    Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (EEG) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
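    The coupling measures named in this record can be illustrated with a short sketch. The Python example below (NumPy/SciPy) computes a band-limited phase-locking value (one common within-trial variant) and band-averaged magnitude-squared coherence between two channels, separately for baseline and response windows; the sampling rate, band edges, window lengths, and simulated data are assumptions made for illustration, not the study's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, coherence, filtfilt, hilbert

FS = 500  # sampling rate in Hz (assumed)

def band_filter(x, lo, hi, fs=FS, order=4):
    """Zero-phase band-pass filter applied along the last axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def plv(x, y):
    """Phase-locking value between two equal-length 1-D signals."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

def band_coherence(x, y, lo, hi, fs=FS):
    """Magnitude-squared coherence averaged over one frequency band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=fs // 2)
    band = (f >= lo) & (f <= hi)
    return cxy[band].mean()

# Simulated epochs: trials x channels x samples, for a 1-s pre-stimulus
# baseline and a 1-s post-stimulus response window of a 2-channel recording
rng = np.random.default_rng(0)
baseline = rng.standard_normal((40, 2, FS))
response = rng.standard_normal((40, 2, FS))

theta = (4.0, 8.0)  # theta band in Hz
for name, win in [("baseline", baseline), ("response", response)]:
    filtered = band_filter(win, *theta)
    plvs = [plv(ep[0], ep[1]) for ep in filtered]
    cohs = [band_coherence(ep[0], ep[1], *theta) for ep in win]
    print(f"{name}: mean PLV = {np.mean(plvs):.3f}, "
          f"mean coherence = {np.mean(cohs):.3f}")
```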

  12. Auditory event-related potentials in children with benign epilepsy with centro-temporal spikes.

    Science.gov (United States)

    Tomé, David; Sampaio, Mafalda; Mendes-Ribeiro, José; Barbosa, Fernando; Marques-Teixeira, João

    2014-12-01

    Benign focal epilepsy in childhood with centro-temporal spikes (BECTS) is one of the most common forms of idiopathic epilepsy, with onset from age 3 to 14 years. Although the prognosis for children with BECTS is excellent, some studies have revealed neuropsychological deficits in many domains, including language. Auditory event-related potentials (AERPs) reflect activation of different neuronal populations and are suggested to contribute to the evaluation of auditory discrimination (N1), attention allocation and phonological categorization (N2), and echoic memory (mismatch negativity, MMN). The scarce existing literature on this topic motivated the present study, which aims to investigate and document AERP changes in a group of children with BECTS. AERPs were recorded during the day, to pure and vocal tones in a conventional auditory oddball paradigm, in five children with BECTS (aged 8-12 years; mean = 10 years; all male) and in six gender- and age-matched controls. Results revealed higher AERP amplitudes in the group of children with BECTS, with a slight latency delay that was more pronounced at fronto-central electrodes. Children with BECTS may have abnormal central auditory processing, reflected by electrophysiological measures such as AERPs. Furthermore, AERPs seem to be a good tool for detecting and reliably revealing cortical excitability in children with typical BECTS. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Differences between human auditory event-related potentials (AERPs) measured at 2 and 4 months after birth

    NARCIS (Netherlands)

    van den Heuvel, Marion I.; Otte, Renee A.; Braeken, Marijke A. K. A.; Winkler, Istvan; Kushnerenko, Elena; Van den Bergh, Bea R. H.

    2015-01-01

    Infant auditory event-related potentials (AERPs) show a series of marked changes during the first year of life. These AERP changes indicate important advances in early development. The current study examined AERP differences between 2- and 4-month-old infants. An auditory oddball paradigm was

  14. Intelligence and P3 Components of the Event-Related Potential Elicited during an Auditory Discrimination Task with Masking

    Science.gov (United States)

    De Pascalis, V.; Varriale, V.; Matteoli, A.

    2008-01-01

    The relationship between fluid intelligence (indexed by scores on Raven Progressive Matrices) and auditory discrimination ability was examined by recording event-related potentials from 48 women during the performance of an auditory oddball task with backward masking. High ability (HA) subjects exhibited shorter response times, greater response…

  15. A hierarchy of event-related potential markers of auditory processing in disorders of consciousness

    Directory of Open Access Journals (Sweden)

    Steve Beukema

    2016-01-01

    Functional neuroimaging of covert perceptual and cognitive processes can inform the diagnoses and prognoses of patients with disorders of consciousness, such as the vegetative and minimally conscious states (VS; MCS). Here we report an event-related potential (ERP) paradigm for detecting a hierarchy of auditory processes in a group of healthy individuals and patients with disorders of consciousness. Simple cortical responses to sounds were observed in all 16 patients; 7/16 (44%) patients exhibited markers of the differential processing of speech and noise; and 1 patient produced evidence of the semantic processing of speech (i.e., the N400 effect). In several patients, the level of auditory processing that was evident from ERPs was higher than the abilities that were evident from behavioural assessment, indicating a greater sensitivity of ERPs in some cases. However, there were no differences in auditory processing between VS and MCS patient groups, indicating a lack of diagnostic specificity for this paradigm. Reliably detecting semantic processing by means of the N400 effect in passively listening single subjects is a challenge. Multiple assessment methods are needed in order to fully characterise the abilities of patients with disorders of consciousness.

  16. Auditory selective attention in adolescents with major depression: An event-related potential study.

    Science.gov (United States)

    Greimel, E; Trinkl, M; Bartling, J; Bakos, S; Grossheinrich, N; Schulte-Körne, G

    2015-02-01

    Major depression (MD) is associated with deficits in selective attention. Previous studies in adults with MD using event-related potentials (ERPs) reported abnormalities in the neurophysiological correlates of auditory selective attention. However, it is as yet unclear whether these findings can be generalized to MD in adolescence. Thus, the aim of the present ERP study was to explore the neural mechanisms of auditory selective attention in adolescents with MD. 24 male and female unmedicated adolescents with MD and 21 control subjects were included in the study. ERPs were collected during an auditory oddball paradigm. Depressive adolescents tended to show a longer N100 latency to target and non-target tones. Moreover, MD subjects showed a prolonged latency of the P200 component to targets. Across groups, longer P200 latency was associated with a decreased tendency towards disinhibited behavior as assessed by a behavioral questionnaire. To be able to draw more precise conclusions about differences between the neural bases of selective attention in adolescents vs. adults with MD, future studies should include both age groups and apply the same experimental setting across all subjects. The study provides strong support for abnormalities in the neurophysiological bases of selective attention in adolescents with MD at early stages of auditory information processing. Absent group differences in later ERP components reflecting voluntary attentional processes stand in contrast to results reported in adults with MD and may suggest that adolescents with MD possess mechanisms to compensate for abnormalities in the early stages of selective attention. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Event-related delta, theta, alpha and gamma correlates to auditory oddball processing during Vipassana meditation

    Science.gov (United States)

    Delorme, Arnaud; Polich, John

    2013-01-01

    Long-term Vipassana meditators sat in meditation vs. a control (instructed mind wandering) state for 25 min; electroencephalography (EEG) was recorded and condition order was counterbalanced. For the last 4 min, a three-stimulus auditory oddball series was presented through headphones during both meditation and control periods, with no task imposed. Time-frequency analysis demonstrated that meditation, relative to the control condition, evinced decreased evoked delta (2–4 Hz) power to distracter stimuli concomitantly with a greater event-related reduction of late (500–900 ms) alpha-1 (8–10 Hz) activity, which indexed altered dynamics of attentional engagement to distracters. Additionally, standard stimuli were associated with increased early event-related alpha phase synchrony (inter-trial coherence) and evoked theta (4–8 Hz) phase synchrony, suggesting enhanced processing of the habituated standard background stimuli. Finally, during meditation, there was a greater differential early-evoked gamma power to the different stimulus classes. Correlation analysis indicated that this effect stemmed from a meditation state-related increase in early distracter-evoked gamma power and phase synchrony specific to longer-term expert practitioners. The findings suggest that Vipassana meditation evokes a brain state of enhanced perceptual clarity and decreased automated reactivity. PMID:22648958
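    As a rough illustration of the time-frequency measures mentioned here (evoked power and inter-trial phase coherence), the sketch below convolves simulated single-trial EEG with complex Morlet wavelets and derives both quantities. The sampling rate, frequency range, number of wavelet cycles, and toy data are assumptions, not the parameters used in the study.

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed)

def morlet_kernel(freq, fs=FS, n_cycles=3):
    """Unit-energy complex Morlet wavelet for one frequency."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    k = np.exp(2j * np.pi * freq * t) * np.exp(-(t ** 2) / (2 * sigma_t ** 2))
    return k / np.sqrt(np.sum(np.abs(k) ** 2))

def tf_decompose(epochs, freqs, fs=FS):
    """Complex time-frequency decomposition of (trials x samples) data."""
    out = np.empty((len(freqs), *epochs.shape), dtype=complex)
    for i, f in enumerate(freqs):
        k = morlet_kernel(f, fs)
        for j, trial in enumerate(epochs):
            out[i, j] = np.convolve(trial, k, mode="same")
    return out  # shape: freqs x trials x samples

rng = np.random.default_rng(1)
epochs = rng.standard_normal((60, 2 * FS))  # 60 simulated two-second trials
freqs = np.arange(2, 13)                    # 2-12 Hz: delta, theta, alpha

analytic = tf_decompose(epochs, freqs)
# Evoked power: power of the trial-averaged (phase-locked) activity
evoked_power = np.abs(analytic.mean(axis=1)) ** 2
# Inter-trial coherence: length of the mean unit phasor across trials
itc = np.abs(np.exp(1j * np.angle(analytic)).mean(axis=1))

print(evoked_power.shape, itc.shape)  # (n_freqs, n_samples) each
```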

  18. Nicotine enhances an auditory Event-Related Potential component which is inversely related to habituation.

    Science.gov (United States)

    Veltri, Theresa; Taroyan, Naira; Overton, Paul G

    2017-07-01

    Nicotine is a psychoactive substance that is commonly consumed in the context of music. However, the reason why music and nicotine are co-consumed is uncertain. One possibility is that nicotine affects cognitive processes relevant to aspects of music appreciation in a beneficial way. Here we investigated this possibility using Event-Related Potentials. Participants underwent a simple decision-making task (to maintain attentional focus), responses to which were signalled by auditory stimuli. Unlike previous research looking at the effects of nicotine on auditory processing, we used complex tones that varied in pitch, a fundamental element of music. In addition, unlike most other studies, we tested non-smoking subjects to avoid withdrawal-related complications. We found that nicotine (4.0 mg, administered as gum) increased P2 amplitude in the frontal region. Since a decrease in P2 amplitude and latency is related to habituation processes, and an enhanced ability to disengage from irrelevant stimuli, our findings suggest that nicotine may cause a reduction in habituation, resulting in non-smokers being less able to adapt to repeated stimuli. A corollary of that decrease in adaptation may be that nicotine extends the temporal window during which a listener is able and willing to engage with a piece of music.

  19. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

    We investigated the existence of a cross-modal sensory gating effect reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e., congruent), in comparison with the same event-related potential component elicited with discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We conclude that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.

  20. Auditory stream segregation using bandpass noises: evidence from event-related potentials

    Directory of Open Access Journals (Sweden)

    Yingjiu eNie

    2014-09-01

    The current study measured neural responses to investigate auditory stream segregation of noise stimuli with or without clear spectral contrast. Sequences of alternating A and B noise bursts were presented to elicit stream segregation in normal-hearing listeners. The successive B bursts in each sequence maintained an equal amount of temporal separation, with manipulations introduced on the last stimulus. The last B burst was either delayed for 50% of the sequences or not delayed for the other 50%. The A bursts were jittered in between every two adjacent B bursts. To study the effects of spectral separation on streaming, the A and B bursts were further manipulated by using either bandpass-filtered noises widely spaced in center frequency or broadband noises. Event-related potentials (ERPs) to the last B bursts were analyzed to compare the neural responses to the delay vs. no-delay trials in both passive and attentive listening conditions. In the passive listening condition, a trend for a possible late mismatch negativity (MMN) or late discriminative negativity (LDN) response was observed only when the A and B bursts were spectrally separate, suggesting that spectral separation in the A and B burst sequences could be conducive to stream segregation at the pre-attentive level. In the attentive condition, a P300 response was consistently elicited regardless of whether there was spectral separation between the A and B bursts, indicating the facilitative role of voluntary attention in stream segregation. The results suggest that reliable ERP measures can be used as indirect indicators for auditory stream segregation in conditions of weak spectral contrast. These findings have important implications for cochlear implant (CI) studies: as spectral information available through a CI device or simulation is substantially degraded, it may require more attention to achieve stream segregation.

  1. Event-related brain potential correlates of human auditory sensory memory-trace formation.

    Science.gov (United States)

    Haenschel, Corinna; Vernon, David J; Dwivedi, Prabuddh; Gruzelier, John H; Baldeweg, Torsten

    2005-11-09

    The event-related potential (ERP) component mismatch negativity (MMN) is a neural marker of human echoic memory. MMN is elicited by deviant sounds embedded in a stream of frequent standards, reflecting the deviation from an inferred memory trace of the standard stimulus. The strength of this memory trace is thought to be proportional to the number of repetitions of the standard tone, visible as the progressive enhancement of MMN with number of repetitions (MMN memory-trace effect). However, no direct ERP correlates of the formation of echoic memory traces are currently known. This study set out to investigate changes in ERPs to different numbers of repetitions of standards, delivered in a roving-stimulus paradigm in which the frequency of the standard stimulus changed randomly between stimulus trains. Normal healthy volunteers (n = 40) were engaged in two experimental conditions: during passive listening and while actively discriminating changes in tone frequency. As predicted, MMN increased with increasing number of standards. However, this MMN memory-trace effect was caused mainly by enhancement with stimulus repetition of a slow positive wave from 50 to 250 ms poststimulus in the standard ERP, which is termed here "repetition positivity" (RP). This RP was recorded from frontocentral electrodes when participants were passively listening to or actively discriminating changes in tone frequency. RP may represent a human ERP correlate of rapid and stimulus-specific adaptation, a candidate neuronal mechanism underlying sensory memory formation in the auditory cortex.

  2. EEG Channel Selection Using Particle Swarm Optimization for the Classification of Auditory Event-Related Potentials

    Directory of Open Access Journals (Sweden)

    Alejandro Gonzalez

    2014-01-01

    Brain-machine interfaces (BMI) rely on the accurate classification of event-related potentials (ERPs), and their performance greatly depends on the appropriate selection of classifier parameters and features from dense-array electroencephalography (EEG) signals. Moreover, in order to achieve a portable and more compact BMI for practical applications, it is also desirable to use a system capable of accurate classification using information from as few EEG channels as possible. In the present work, we propose a method for classifying P300 ERPs using a combination of Fisher Discriminant Analysis (FDA) and a multiobjective hybrid real-binary Particle Swarm Optimization (MHPSO) algorithm. Specifically, the algorithm searches for the set of EEG channels and classifier parameters that simultaneously maximize the classification accuracy and minimize the number of used channels. The performance of the method is assessed through offline analyses on datasets of auditory ERPs from sound discrimination experiments. The proposed method achieved a higher classification accuracy than that achieved by traditional methods while also using fewer channels. It was also found that the number of channels used for classification can be significantly reduced without greatly compromising the classification accuracy.
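    A greatly simplified sketch of the channel-selection idea is given below: a plain binary particle swarm searches over channel masks, scoring each mask by cross-validated linear discriminant analysis accuracy minus a small per-channel penalty (standing in for the paper's multiobjective hybrid real-binary PSO, which additionally tunes classifier parameters). The swarm settings, penalty weight, and simulated data are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Toy data: trials x channels x features per channel (e.g. ERP samples)
N_TRIALS, N_CH, N_FEAT = 200, 16, 20
X = rng.standard_normal((N_TRIALS, N_CH, N_FEAT))
y = rng.integers(0, 2, N_TRIALS)  # target vs. non-target labels

def fitness(mask):
    """Cross-validated LDA accuracy on the selected channels, minus a
    small per-channel penalty to favour compact channel sets."""
    if mask.sum() == 0:
        return 0.0
    feats = X[:, mask.astype(bool), :].reshape(N_TRIALS, -1)
    acc = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()
    return acc - 0.005 * mask.sum()

# Minimal binary PSO over channel-selection masks
N_PARTICLES, N_ITER = 20, 30
pos = rng.integers(0, 2, (N_PARTICLES, N_CH)).astype(float)
vel = rng.uniform(-1, 1, (N_PARTICLES, N_CH))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(N_ITER):
    r1, r2 = rng.random((2, N_PARTICLES, N_CH))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Sigmoid transfer: each velocity becomes the probability of selecting a channel
    pos = (rng.random((N_PARTICLES, N_CH)) < 1 / (1 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected channels:", np.flatnonzero(gbest))
print(f"best penalized accuracy: {pbest_fit.max():.3f}")
```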

  3. Subclinical alexithymia modulates early audio-visual perceptive and attentional event-related potentials

    Directory of Open Access Journals (Sweden)

    Dyna eDelle-Vigne

    2014-03-01

    Introduction: Previous studies have highlighted the advantage of audio-visual oddball tasks (instead of unimodal ones) for electrophysiologically indexing subclinical behavioral differences. Since alexithymia is highly prevalent in the general population, we investigated whether the use of various bimodal tasks could elicit emotional effects in low- versus high-alexithymic scorers. Methods: Fifty students (33 females) were split into groups based on low and high scores on the Toronto Alexithymia Scale. During event-related potential recordings, they were exposed to three kinds of audio-visual oddball tasks: neutral (geometrical forms and beeps), animal (a dog and a rooster with their respective calls), or emotional (faces and voices) stimuli. In each condition, participants were asked to quickly detect deviant events occurring amongst a train of frequent matching stimuli (e.g., push a button when a sad face-voice pair appeared amongst a train of neutral face-voice pairs). P100, N100, and P300 components were analyzed: the P100 refers to visual perceptive processing, the N100 to auditory processing, and the P300 relates to response-related stages. Results: High-alexithymic scorers presented a particular pattern of results when processing the emotional stimulations, reflected in early ERP components by increased P100 and N100 amplitudes in the emotional oddball tasks. Conclusions: Our findings suggest that high-alexithymic scorers require heightened early attentional resources when confronted with emotional stimuli.

  4. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli comprising both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirm that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than to information from either side.

  5. Saturation of auditory short-term memory causes a plateau in the sustained anterior negativity event-related potential.

    Science.gov (United States)

    Alunni-Menichini, Kristelle; Guimond, Synthia; Bermudez, Patrick; Nolden, Sophie; Lefebvre, Christine; Jolicoeur, Pierre

    2014-12-10

    The maintenance of information in auditory short-term memory (ASTM) is accompanied by a sustained anterior negativity (SAN) in the event-related potential measured during the retention interval of simple auditory memory tasks. Previous work on ASTM showed that the amplitude of the SAN increases in negativity as the number of maintained items increases. The aim of the current study was to measure the SAN and observe its behavior beyond the point of saturation of auditory short-term memory. We used atonal pure tones in sequences of 2, 4, 6, or 8 tones. Our results showed that the amplitude of the SAN increased in negativity from 2 to 4 items and then levelled off from 4 to 8 items. Behavioral results suggested that the average span in the task was slightly below 3, which was consistent with the observed plateau in the electrophysiological results. Furthermore, the amplitude of the SAN predicted individual differences in auditory memory capacity. The results support the hypothesis that the SAN is an electrophysiological index of brain activity specifically related to the maintenance of auditory information in ASTM. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Auditory sensory memory in 2-year-old children: an event-related potential study.

    Science.gov (United States)

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-03-26

    Auditory sensory memory is assumed to play an important role in cognitive development, but little is known about it in young children. The aim of this study was to estimate the duration of auditory sensory memory in 2-year-old children. We recorded the mismatch negativity in response to tone stimuli presented with different interstimulus intervals. Our findings suggest that in 2-year-old children the memory representation of the standard tone remains in the sensory memory store for at least 1 s but for less than 2 s. Recording the mismatch negativity with stimuli presented at various interstimulus intervals seems to be a useful method for studying the relationship between auditory sensory memory and normal and disturbed cognitive development.

  7. Cortical Auditory Disorders: A Case of Non-Verbal Disturbances Assessed with Event-Related Brain Potentials

    Directory of Open Access Journals (Sweden)

    Sönke Johannes

    1998-01-01

    In the auditory modality, there has been considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19–30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.

  8. Cortical auditory disorders: a case of non-verbal disturbances assessed with event-related brain potentials.

    Science.gov (United States)

    Johannes, Sönke; Jöbges, Michael E.; Dengler, Reinhard; Münte, Thomas F.

    1998-01-01

    In the auditory modality, there has been considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.

  9. Hearing Shapes: Event-related Potentials Reveal the Time Course of Auditory-Visual Sensory Substitution.

    Science.gov (United States)

    Graulty, Christian; Papaioannou, Orestis; Bauer, Phoebe; Pitts, Michael A; Canseco-Gonzalez, Enriqueta

    2018-04-01

    In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150-210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420-480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
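    The "simple rules" referred to here, in the spirit of the Meijer algorithm, scan an image column by column over time while mapping row height to tone frequency and pixel brightness to tone amplitude. The toy soundscape generator below follows that general recipe; the scan duration, frequency range, and image size are illustrative assumptions rather than the published parameters of the algorithm.

```python
import numpy as np

FS = 22050                   # audio sampling rate in Hz (assumed)
DURATION = 1.0               # seconds per left-to-right scan (assumed)
F_LO, F_HI = 500.0, 5000.0   # frequencies for bottom/top rows (assumed)

def image_to_soundscape(image):
    """Convert a 2-D grayscale image (rows x cols, values in [0, 1]) to audio.

    Columns are played left to right over DURATION seconds; each row is a
    sine tone whose frequency increases with height and whose amplitude is
    the pixel brightness.
    """
    n_rows, n_cols = image.shape
    samples_per_col = int(FS * DURATION / n_cols)
    freqs = np.geomspace(F_HI, F_LO, n_rows)      # row 0 (top) = highest pitch
    t = np.arange(samples_per_col) / FS
    tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])  # rows x samples
    cols = [(image[:, c:c + 1] * tones).sum(axis=0) for c in range(n_cols)]
    audio = np.concatenate(cols)
    return audio / (np.max(np.abs(audio)) + 1e-12)

# Toy image: a bright diagonal line on a dark background
img = np.zeros((32, 32))
np.fill_diagonal(img, 1.0)
soundscape = image_to_soundscape(img)
print(soundscape.shape)  # roughly one second of mono audio samples
```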

  10. Differences between human auditory event-related potentials (AERPs) measured at 2 and 4 months after birth.

    Science.gov (United States)

    van den Heuvel, Marion I; Otte, Renée A; Braeken, Marijke A K A; Winkler, István; Kushnerenko, Elena; Van den Bergh, Bea R H

    2015-07-01

    Infant auditory event-related potentials (AERPs) show a series of marked changes during the first year of life. These AERP changes indicate important advances in early development. The current study examined AERP differences between 2- and 4-month-old infants. An auditory oddball paradigm was delivered to infants with a frequent repetitive tone and three rare auditory events. The three rare events were a shorter-than-regular inter-stimulus interval (ISI deviant), white-noise segments, and environmental sounds. The results suggest that the N250 infantile AERP component emerges during this period in response to white noise but not to environmental sounds, possibly indicating a developmental step towards separating acoustic deviance from contextual novelty. The scalp distribution of the AERP response to both the white noise and the environmental sounds shifted towards frontal areas, and AERP peak latencies were overall shorter in infants at 4 than at 2 months of age. These observations indicate improvements in the speed of sound processing and maturation of the frontal attentional network in infants during this period. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. The origin and nature of categorical perception of colour: Evidence from event-related brain potentials.

    OpenAIRE

    Clifford, Alexandra.

    2009-01-01

    Categorical perception (CP) of colour is demonstrated by faster or more accurate discrimination of colours that cross a category boundary, compared to equivalently spaced colours from the same colour category. Despite a plethora of behavioural research exploring the origin and nature of colour CP, the processes involved in the effect are still unresolved. This thesis investigates the time course and underlying mechanisms of colour CP by using the Event-Related Potential (ERP) technique. This ...

  12. Automatic detection of lexical change: an auditory event-related potential study.

    Science.gov (United States)

    Muller-Gass, Alexandra; Roye, Anja; Kirmse, Ursula; Saupe, Katja; Jacobsen, Thomas; Schröger, Erich

    2007-10-29

    We investigated the detection of rare task-irrelevant changes in the lexical status of speech stimuli. Participants performed a nonlinguistic task on word and pseudoword stimuli that occurred, in separate conditions, rarely or frequently. Task performance was poorer for pseudowords than for words, suggesting unintentional lexical analysis. Furthermore, rare word and pseudoword changes had a similar effect on the event-related potentials, starting as early as 165 ms. This is the first demonstration of the automatic detection of a change in lexical status that is not based on a co-occurring acoustic change. We propose that, following lexical analysis of the incoming stimuli, a mental representation of the lexical regularity is formed and used as a template against which lexical change can be detected.

  13. Human event-related brain potentials to auditory periodic noise stimuli.

    Science.gov (United States)

    Kaernbach, C; Schröger, E; Gunter, T C

    1998-02-06

    Periodic noise is perceived as different from ordinary non-repeating noise due to the involvement of echoic memory. Since this stimulus does not contain simple physical cues (such as onsets or spectral shape) that might obscure sensory memory interpretations, it is a valuable tool for studying sensory memory functions. We demonstrated for the first time that the processing of periodic noise can be tapped by event-related brain potentials (ERPs). Human subjects received repeating segments of noise embedded in non-repeating noise. They were instructed to detect the periodicity inherent to the stimulation. We observed a central negativity time-locked to the periodic segment that correlated with the subjects' behavioral performance in periodicity detection. It is argued that the ERP result indicates an enhancement of sensory-specific processing.

  14. Neural network approach in multichannel auditory event-related potential analysis.

    Science.gov (United States)

    Wu, F Y; Slater, J D; Ramsay, R E

    1994-04-01

    Although there are presently no clearly defined criteria for the assessment of P300 event-related potential (ERP) abnormality, statistical analysis strongly indicates that such criteria exist for classifying control subjects and patients with diseases resulting in neuropsychological impairment, such as multiple sclerosis (MS). We have demonstrated the feasibility of artificial neural network (ANN) methods in classifying ERP waveforms measured at a single channel (Cz) from control subjects and MS patients. In this paper, we report the results of multichannel ERP analysis and a modified network analysis methodology to enhance automation of the classification-rule extraction process. The proposed methodology significantly reduces the work of statistical analysis. It also helps to standardize the criteria for P300 ERP assessment and facilitates computer-aided analysis of neuropsychological functions.
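    A present-day analogue of the ANN classification described here can be sketched with scikit-learn: a small multilayer perceptron is cross-validated on a matrix of per-subject, multichannel P300 features. The feature layout, network size, and labels below are hypothetical placeholders, not the authors' network or data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical feature matrix: one row per subject, columns are P300
# amplitude and latency values measured at several channels (e.g. Fz, Cz, Pz)
n_subjects, n_features = 80, 12
X = rng.standard_normal((n_subjects, n_features))
y = rng.integers(0, 2, n_subjects)  # 0 = control, 1 = patient (toy labels)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```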

  15. Person perception precedes theory of mind: an event related potential analysis.

    Science.gov (United States)

    Wang, Y W; Lin, C D; Yuan, B; Huang, L; Zhang, W X; Shen, D L

    2010-09-29

    Prior to developing an understanding of another person's mental state, an ability termed "theory of mind" (ToM), a perception of that person's appearance and actions is required. However, the relationship between this "person perception" and ToM is unclear. To investigate the time course of ToM and person perception, event-related potentials (ERPs) were recorded while 17 normal adults received three kinds of visual stimuli: cartoons involving people (person perception cartoons), cartoons involving people and also requiring ToM for comprehension (ToM cartoons), and scene cartoons. We hypothesized that the respective patterns of brain activation would differ under these three stimuli at different stages in time. Our findings supported this proposal: the peak amplitudes of P200 for scene cartoons were significantly lower than for person perception or ToM cartoons, while there were no significant differences between the latter two for P200. During the 1000-1300 ms epoch, the mean amplitudes of the late positive components (LPC) for person perception were more positive than for scene representation, while the mean amplitudes of the LPC for ToM were more positive than for person perception. The present study provides preliminary evidence of the neural dynamics that underlie the dissociation between person perception and ToM. Copyright 2010 IBRO. Published by Elsevier Ltd. All rights reserved.

  16. An Auditory Go/No-Go Study of Event-Related Potentials in Children with Fetal Alcohol Spectrum Disorders

    DEFF Research Database (Denmark)

    Steinmann, Tobias P.; Andrew, Colin M.; Thomsen, Carsten E.

    2011-01-01

    In this study, event-related potentials (ERPs) were used to investigate the effects of prenatal alcohol exposure on response inhibition identified during task performance. ERPs were recorded during an auditory Go/No-Go task in two groups of children with a mean age of 12:8 years (range 11 to 14:7 years): one diagnosed with fetal alcohol syndrome (FAS) or partial FAS (FAS/PFAS; n = 12), and a control group of children of the same age whose mothers abstained from alcohol or drank minimally during pregnancy (n = 11). The children were instructed to push a button in response to the Go stimulus … trials, suggesting a less efficient early classification of the stimulus. P3 showed larger amplitudes to No-Go vs. Go in both groups. The study has provided new evidence for inhibition deficits in FAS/PFAS subjects identified by ERPs.

  17. Development of auditory event-related potentials in infants prenatally exposed to methadone.

    Science.gov (United States)

    Paul, Jonathan A; Logan, Beth A; Krishnan, Ramesh; Heller, Nicole A; Morrison, Deborah G; Pritham, Ursula A; Tisher, Paul W; Troese, Marcia; Brown, Mark S; Hayes, Marie J

    2014-07-01

    Developmental features of the P2 auditory ERP in a change-detection paradigm were examined in infants prenatally exposed to methadone. Opiate-dependent pregnant women maintained on methadone replacement therapy were recruited during pregnancy (N = 60). Current and historical alcohol and substance use, SES, and psychiatric status were assessed with a maternal interview during the third trimester. Medical records were used to collect information regarding maternal medications, and monthly urinalysis and breathalyzer tests were used to confirm comorbid drug and alcohol exposures. Between birth and 4 months, infant ERP change-detection performance was evaluated on one occasion with the oddball paradigm (.2 probability oddball) using pure-tone stimuli (standard = 1 kHz and oddball = 2 kHz frequency) at midline electrode sites Fz, Cz, and Pz. Infant groups were examined in the following developmental windows: 4-15, 16-32, or 33-120 days PNA. Older groups showed increased P2 amplitude at Fz and effective change-detection performance at P2 not seen in the newborn group. Developmental maturation of amplitude and stimulus discrimination for P2 has been reported in developing infants at all of the ages tested, and the data reported here for the older infants are consistent with typical development. However, it has been previously reported that the P2 amplitude difference is detectable in neonates; therefore, the absence of a difference in P2 amplitude between stimuli in the 4-15 days group may represent ERP performance impaired by neonatal abstinence syndrome or prenatal methadone exposure. © 2013 Wiley Periodicals, Inc.

  18. Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children.

    Science.gov (United States)

    Badcock, Nicholas A; Preece, Kathryn A; de Wit, Bianca; Glenn, Katharine; Fieder, Nora; Thie, Johnson; McArthur, Genevieve

    2015-01-01

    Background. Previous work has demonstrated that a commercial gaming electroencephalography (EEG) system, Emotiv EPOC, can be adjusted to provide valid auditory event-related potentials (ERPs) in adults that are comparable to ERPs recorded by a research-grade EEG system, Neuroscan. The aim of the current study was to determine if the same was true for children. Method. An adapted Emotiv EPOC system and Neuroscan system were used to make simultaneous EEG recordings in nineteen 6- to 12-year-old children under "passive" and "active" listening conditions. In the passive condition, children were instructed to watch a silent DVD and ignore 566 standard (1,000 Hz) and 100 deviant (1,200 Hz) tones. In the active condition, they listened to the same stimuli, and were asked to count the number of 'high' (i.e., deviant) tones. Results. Intraclass correlations (ICCs) indicated that the ERP morphology recorded with the two systems was very similar for the P1, N1, P2, N2, and P3 ERP peaks (r = .82 to .95) in both passive and active conditions, and less so, though still strong, for mismatch negativity ERP component (MMN; r = .67 to .74). There were few differences between peak amplitude and latency estimates for the two systems. Conclusions. An adapted EPOC EEG system can be used to index children's late auditory ERP peaks (i.e., P1, N1, P2, N2, P3) and their MMN ERP component.
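    The intraclass correlations used to compare the two systems' waveform morphology can be computed along the lines of the sketch below, which implements a two-way random, single-measure ICC(2,1) (Shrout & Fleiss, 1979) over the time points of two simulated average ERPs. Treating time points as targets and the two systems as raters, as well as the simulated waveforms themselves, are assumptions made for illustration; the exact ICC variant used in the study is not specified in this record.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random, single-measure ICC(2,1) after Shrout & Fleiss (1979).

    `ratings` has shape (n_targets, k_raters); here the targets are ERP time
    points and the two "raters" are the two EEG systems.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Simulated average ERPs from the two systems (-100 to 500 ms epoch)
rng = np.random.default_rng(5)
t = np.linspace(-0.1, 0.5, 301)
true_erp = 4e-6 * np.sin(2 * np.pi * 3 * t) * (t > 0)
neuroscan = true_erp + 0.5e-6 * rng.standard_normal(t.size)
epoc = 0.9 * true_erp + 0.8e-6 * rng.standard_normal(t.size)

print(f"ICC(2,1) = {icc_2_1(np.column_stack([neuroscan, epoc])):.2f}")
```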

  19. Changes of auditory event-related potentials in ovariectomized rats injected with d-galactose: Protective role of rosmarinic acid.

    Science.gov (United States)

    Kantar-Gok, Deniz; Hidisoglu, Enis; Er, Hakan; Acun, Alev Duygu; Olgar, Yusuf; Yargıcoglu, Piraye

    2017-09-01

    Rosmarinic acid (RA), which has multiple bioactive properties, might be a useful agent for protecting the central nervous system against age-related alterations. In this context, the purpose of the present study was to investigate possible protective effects of RA on the mismatch negativity (MMN) component of auditory event-related potentials (AERPs), as an indicator of auditory discrimination and echoic memory, in ovariectomized (OVX) rats injected with d-galactose, combined with neurochemical and histological analyses. Ninety female Wistar rats were randomly divided into six groups: sham control (S); RA-treated (R); OVX (O); OVX+RA-treated (OR); OVX+d-galactose-treated (OD); OVX+d-galactose+RA-treated (ODR). Eight weeks later, MMN responses were recorded using the oddball condition. An amplitude reduction of some components of the AERPs was observed due to ovariectomy with or without d-galactose administration, and these reduction patterns differed across electrode locations. MMN amplitudes were significantly lower over temporal and right frontal locations in the O and OD groups versus the S and R groups, which was accompanied by increased thiobarbituric acid reactive substances (TBARS) and 4-hydroxy-2-nonenal (4-HNE) levels. RA treatment significantly increased AERP/MMN amplitudes and lowered the TBARS/4-HNE levels in the OR and ODR groups versus the O and OD groups, respectively. Our findings support the potential benefit of RA in the prevention of auditory distortion related to estrogen deficiency and d-galactose administration, at least partly through antioxidant actions. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children

    Directory of Open Access Journals (Sweden)

    Nicholas A. Badcock

    2015-04-01

    Background. Previous work has demonstrated that a commercial gaming electroencephalography (EEG) system, Emotiv EPOC, can be adjusted to provide valid auditory event-related potentials (ERPs) in adults that are comparable to ERPs recorded by a research-grade EEG system, Neuroscan. The aim of the current study was to determine if the same was true for children. Method. An adapted Emotiv EPOC system and Neuroscan system were used to make simultaneous EEG recordings in nineteen 6- to 12-year-old children under "passive" and "active" listening conditions. In the passive condition, children were instructed to watch a silent DVD and ignore 566 standard (1,000 Hz) and 100 deviant (1,200 Hz) tones. In the active condition, they listened to the same stimuli, and were asked to count the number of 'high' (i.e., deviant) tones. Results. Intraclass correlations (ICCs) indicated that the ERP morphology recorded with the two systems was very similar for the P1, N1, P2, N2, and P3 ERP peaks (r = .82 to .95) in both passive and active conditions, and less so, though still strong, for the mismatch negativity ERP component (MMN; r = .67 to .74). There were few differences between peak amplitude and latency estimates for the two systems. Conclusions. An adapted EPOC EEG system can be used to index children's late auditory ERP peaks (i.e., P1, N1, P2, N2, P3) and their MMN ERP component.

  1. Development of a Method to Compensate for Signal Quality Variations in Repeated Auditory Event-Related Potential Recordings

    Science.gov (United States)

    Paukkunen, Antti K. O.; Leminen, Miika M.; Sepponen, Raimo

    2010-01-01

    Reliable measurements are mandatory in clinically relevant auditory event-related potential (AERP)-based tools and applications. Comparability of results deteriorates when the residual measurement error varies between recordings. A method is studied that allows the length of the recording session to be optimized according to the concurrent quality of the recorded data. In this way, the sufficiency of the trials can be better guaranteed, which keeps the remaining measurement error under control. The suggested method is based on monitoring the signal-to-noise ratio (SNR) and the remaining measurement error, which are compared to predefined threshold values. The SNR test is well defined, but the criterion for the measurement error test still requires further empirical testing in practice. According to the results, the reproducibility of average AERPs in repeated experiments is improved compared with a case where the number of recorded trials is constant. The test-retest reliability is not significantly changed on average, but the between-subject variation in its value is reduced by 33–35%. Optimizing the number of trials also prevents unnecessarily long recordings, which is of practical interest especially in the clinical context. The efficiency of the method may be further increased by implementing online tools that improve data consistency. PMID:20407635
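
    The adaptive-stopping idea above can be illustrated with a small sketch: keep an estimate of the signal-to-noise ratio of the running ERP average and stop collecting trials once it clears a threshold. This is not the authors' implementation; the plus-minus noise estimate, the threshold value, and the trial limits below are assumptions chosen for illustration.

        import numpy as np

        def snr_estimate(trials):
            # Crude SNR of the running ERP average (trials: n_trials x n_samples).
            # Noise is estimated from the plus-minus average, which cancels the
            # evoked response and leaves residual noise at the level of the average.
            avg = trials.mean(axis=0)
            signs = np.where(np.arange(len(trials)) % 2 == 0, 1.0, -1.0)
            noise_var = (signs[:, None] * trials).mean(axis=0).var()
            signal_var = max(avg.var() - noise_var, 0.0)
            return signal_var / noise_var if noise_var > 0 else np.inf

        def record_until_reliable(trial_stream, snr_threshold=1.0,
                                  min_trials=50, max_trials=600):
            # Accept trials from an iterator until the SNR estimate clears the
            # threshold (or a hard ceiling is reached), instead of using a fixed count.
            collected = []
            for trial in trial_stream:
                collected.append(np.asarray(trial, dtype=float))
                n = len(collected)
                if n >= max_trials or (n >= min_trials and
                                       snr_estimate(np.vstack(collected)) >= snr_threshold):
                    break
            return np.vstack(collected)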

  2. Acute nicotine fails to alter event-related potential or behavioral performance indices of auditory distraction in cigarette smokers.

    Science.gov (United States)

    Knott, Verner J; Scherling, Carole S; Blais, Crystal M; Camarda, Jordan; Fisher, Derek J; Millar, Anne; McIntosh, Judy F

    2006-04-01

    Behavioral studies have shown that nicotine enhances performance in sustained attention tasks, but they have not shown convincing support for the effects of nicotine on tasks requiring selective attention or attentional control under conditions of distraction. We investigated distractibility in 14 smokers (7 females) with event-related brain potentials (ERPs) and behavioral performance measures extracted from an auditory discrimination task requiring a choice reaction time response to short- and long-duration tones, both with and without embedded deviants. Nicotine gum (4 mg), administered in a randomized, double-blind, placebo-controlled crossover design, failed to counter deviant-elicited behavioral distraction (i.e., slower reaction times and increased response errors), and it did not influence the distracter-elicited mismatch negativity, the P300a, or the reorienting negativity ERP components reflecting acoustic change detection, involuntary attentional switching, and attentional reorienting, respectively. Results are discussed in relation to a stimulus-filter model of smoking and in relation to future research directions.

  3. Effects of acute nicotine on event-related potential and performance indices of auditory distraction in nonsmokers.

    Science.gov (United States)

    Knott, Verner J; Bolton, Kiley; Heenan, Adam; Shah, Dhrasti; Fisher, Derek J; Villeneuve, Crystal

    2009-05-01

    Although nicotine has been purported to enhance attentional processes, this has been evidenced mostly in tasks of sustained attention, and its effects on selective attention and attentional control under conditions of distraction are less convincing. This study investigated the effects of nicotine on distractibility in 21 (11 males) nonsmokers with event-related potentials (ERPs) and behavioral performance measures extracted from an auditory discrimination task requiring a choice reaction time response to short- and long-duration tones, with and without embedded deviants. Administered in a randomized, double-blind, placebo-controlled crossover design, nicotine gum (6 mg) failed to counter deviant-elicited behavioral distraction characterized by longer reaction times and increased response errors. Of the deviant-elicited ERP components, nicotine did not alter the P3a-indexed attentional switching to the deviant, but in females, it tended to diminish the automatic processing of the deviant as shown by a smaller mismatch negativity component, and it attenuated attentional reorienting following deviant-elicited distraction, as reflected by a reduced reorienting negativity ERP component. Results are discussed in relation to attentional models of nicotine and with respect to future research directions.

  4. Arousal and attention re-orienting in autism spectrum disorders: evidence from auditory event-related potentials

    Directory of Open Access Journals (Sweden)

    Elena V Orekhova

    2014-02-01

    Full Text Available The extended phenotype of autism spectrum disorders (ASD) includes a combination of arousal regulation problems, sensory modulation difficulties, and an attention re-orienting deficit. Slow and inefficient re-orienting to stimuli that appear outside of the attended sensory stream is thought to be especially detrimental for social functioning. Event-related potentials (ERPs) and magnetic fields (ERFs) may help to reveal which processing stages underlying the brain response to unattended but salient sensory events are affected in individuals with ASD. Previous research focusing on two sequential stages of the brain response - automatic detection of physical changes in the auditory stream, indexed by the mismatch negativity (MMN), and evaluation of stimulus novelty, indexed by the P3a component - found in individuals with ASD either increased, decreased or normal processing of deviance and novelty. The review examines these apparently conflicting results, notes gaps in previous findings, and suggests a potentially unifying hypothesis relating the dampened responses to unattended sensory events to a deficit in a rapid arousal process. Specifically, ‘sensory gating’ studies focused on pre-attentive arousal consistently demonstrated that the brain response to unattended and temporally novel sounds in ASD is already affected at around 100 ms after stimulus onset. We hypothesize that abnormalities in nicotinic cholinergic arousal pathways, previously reported in individuals with ASD, may contribute to these ERP/ERF aberrations and result in an attention re-orienting deficit. Such cholinergic dysfunction may be present in individuals with ASD early in life and can influence both sensory processing and attention re-orienting behavior. Identification of early neurophysiological biomarkers of cholinergic deficit would help to detect at-risk infants who could potentially benefit from particular types of therapies or interventions.

  5. Demodulation Processes in Auditory Perception

    National Research Council Canada - National Science Library

    Feth, Lawrence

    1997-01-01

    The long range goal of this project was the understanding of human auditory processing of information conveyed by complex, time varying signals such as speech, music or important environmental sounds...

  6. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    Science.gov (United States)

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  7. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    Science.gov (United States)

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  8. Hippocampal P3-Like Auditory Event-Related Potentials are Disrupted in a Rat Model of Cholinergic Degeneration in Alzheimer's Disease: Reversal by Donepezil Treatment

    DEFF Research Database (Denmark)

    Laursen, Bettina; Mørk, Arne; Kristiansen, Uffe

    2014-01-01

    P300 (P3) event-related potentials (ERPs) have been suggested to be an endogenous marker of cognitive function and auditory oddball paradigms are frequently used to evaluate P3 ERPs in clinical settings. Deficits in P3 amplitude and latency reflect some of the neurological dysfunctions related...... cholinergic degeneration induced by SAP. SAP-lesioned rats may constitute a suitable model to test the efficacy of pro-cognitive substances in an applied experimental setup....

  9. Laterality of basic auditory perception.

    Science.gov (United States)

    Sininger, Yvonne S; Bhatara, Anjali

    2012-01-01

    Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.

  10. The Relative Importance of Spatial Versus Temporal Structure in the Perception of Biological Motion: An Event-Related Potential Study

    Science.gov (United States)

    Hirai, Masahiro; Hiraki, Kazuo

    2006-01-01

    We investigated how the spatiotemporal structure of animations of biological motion (BM) affects brain activity. We measured event-related potentials (ERPs) during the perception of BM under four conditions: normal spatial and temporal structure; scrambled spatial and normal temporal structure; normal spatial and scrambled temporal structure; and…

  11. Psychometric intelligence and P3 of the event-related potentials studied with a 3-stimulus auditory oddball task

    NARCIS (Netherlands)

    Wronka, E.A.; Kaiser, J.; Coenen, A.M.L.

    2013-01-01

    Relationship between psychometric intelligence measured with Raven's Advanced Progressive Matrices (RAPM) and event-related potentials (ERP) was examined using 3-stimulus oddball task. Subjects who had scored higher on RAPM exhibited larger amplitude of P3a component. Additional analysis using the

  12. Individual Differences in Auditory Sentence Comprehension in Children: An Exploratory Event-Related Functional Magnetic Resonance Imaging Investigation

    Science.gov (United States)

    Yeatman, Jason D.; Ben-Shachar, Michal; Glover, Gary H.; Feldman, Heidi M.

    2010-01-01

    The purpose of this study was to explore changes in activation of the cortical network that serves auditory sentence comprehension in children in response to increasing demands of complex sentences. A further goal is to study how individual differences in children's receptive language abilities are associated with such changes in cortical…

  13. Basic Auditory Processing Deficits in Dyslexia: Systematic Review of the Behavioral and Event-Related Potential/Field Evidence

    Science.gov (United States)

    Hämäläinen, Jarmo A.; Salminen, Hanne K.; Leppänen, Paavo H. T.

    2013-01-01

    A review of research that uses behavioral, electroencephalographic, and/or magnetoencephalographic methods to investigate auditory processing deficits in individuals with dyslexia is presented. Findings show that measures of frequency, rise time, and duration discrimination as well as amplitude modulation and frequency modulation detection were…

  14. A Basic Study on P300 Event-Related Potentials Evoked by Simultaneous Presentation of Visual and Auditory Stimuli for the Communication Interface

    Directory of Open Access Journals (Sweden)

    Masami Hashimoto

    2011-10-01

    Full Text Available We have been engaged in the development of a brain-computer interface (BCI) based on the cognitive P300 event-related potentials (ERPs) evoked by simultaneous presentation of visual and auditory stimuli, in order to assist communication in persons with severe physical limitations. The purpose of presenting these stimuli simultaneously is to give the user more choices as commands. First, we extracted P300 ERPs with either a visual or an auditory oddball paradigm and measured the amplitude and latency of the P300 ERPs. Second, visual and auditory stimuli were presented simultaneously, and we measured the P300 ERPs while varying the combinations of these stimuli. In this report, we used 3 colors as visual stimuli and 3 types of MIDI sounds as auditory stimuli. Two types of simultaneous presentation were examined. One used random combinations. The other, called group stimulation, combined one color, such as red, with one MIDI sound, such as piano, to form a group; three such groups were made and presented to users in random order. We evaluated the feasibility of a BCI using these stimuli from the amplitudes and latencies of the P300 ERPs.
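
    The two presentation schemes described above (random combinations versus fixed color-sound groups) are easy to make concrete. The sketch below builds trial schedules for both; the particular colors, MIDI timbres, and trial counts are placeholders rather than the authors' exact parameters.

        import random

        COLORS = ["red", "green", "blue"]        # placeholder visual stimuli
        SOUNDS = ["piano", "organ", "strings"]   # placeholder MIDI timbres

        def random_combination_schedule(n_trials, seed=0):
            # Each trial pairs an independently drawn color and sound.
            rng = random.Random(seed)
            return [(rng.choice(COLORS), rng.choice(SOUNDS)) for _ in range(n_trials)]

        def group_schedule(n_trials, seed=0):
            # Fixed color-sound groups (e.g. red + piano); one group is drawn per trial,
            # so each of the three groups appears with probability 1/3.
            rng = random.Random(seed)
            groups = list(zip(COLORS, SOUNDS))
            return [rng.choice(groups) for _ in range(n_trials)]

        # In a P300 interface, the stimulus the user attends to plays the role of the
        # rare "target", and its epochs are averaged to extract the P300, much as in
        # a conventional single-modality oddball.
        schedule = group_schedule(300)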

  15. Absence of direction-specific cross-modal visual-auditory adaptation in motion-onset event-related potentials.

    Science.gov (United States)

    Grzeschik, Ramona; Lewald, Jörg; Verhey, Jesko L; Hoffmann, Michael B; Getzmann, Stephan

    2016-01-01

    Adaptation to visual or auditory motion affects within-modality motion processing as reflected by visual or auditory free-field motion-onset evoked potentials (VEPs, AEPs). Here, a visual-auditory motion adaptation paradigm was used to investigate the effect of visual motion adaptation on VEPs and AEPs to leftward motion-onset test stimuli. Effects of visual adaptation to (i) scattered light flashes, and motion in the (ii) same or in the (iii) opposite direction of the test stimulus were compared. For the motion-onset VEPs, i.e. the intra-modal adaptation conditions, direction-specific adaptation was observed--the change-N2 (cN2) and change-P2 (cP2) amplitudes were significantly smaller after motion adaptation in the same than in the opposite direction. For the motion-onset AEPs, i.e. the cross-modal adaptation condition, there was an effect of motion history only in the change-P1 (cP1), and this effect was not direction-specific--cP1 was smaller after scatter than after motion adaptation to either direction. No effects were found for later components of motion-onset AEPs. While the VEP results provided clear evidence for the existence of a direction-specific effect of motion adaptation within the visual modality, the AEP findings suggested merely a motion-related, but not a direction-specific effect. In conclusion, the adaptation of veridical auditory motion detectors by visual motion is not reflected by the AEPs of the present study. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as a person walking: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.
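
    The cadence argument above can be made concrete: a walker produces roughly isochronous impact sounds, whereas a bouncing ball produces impacts whose intervals shrink from bounce to bounce. The sketch below generates both onset patterns; it assumes idealized physics (a fixed coefficient of restitution) and invented parameter values, not the stimuli actually used in these experiments.

        import numpy as np

        def walker_onsets(cadence_hz=2.0, n_steps=8, jitter_sd=0.01, seed=0):
            # Roughly isochronous footstep onsets (seconds) with a little timing jitter.
            rng = np.random.default_rng(seed)
            intervals = 1.0 / cadence_hz + rng.normal(0.0, jitter_sd, n_steps)
            return np.cumsum(intervals)

        def bounce_onsets(first_interval=0.6, restitution=0.8, n_bounces=8):
            # Each bounce interval shrinks by the coefficient of restitution, so the
            # pattern accelerates, unlike the steady cadence of a walker.
            intervals = first_interval * restitution ** np.arange(n_bounces)
            return np.cumsum(intervals)

        print(np.diff(walker_onsets()))   # near-constant inter-onset intervals
        print(np.diff(bounce_onsets()))   # geometrically decreasing intervals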

  17. The relationship between event-related potentials, stress perception and personality type in patients with multiple sclerosis without cognitive impairment: A pilot study.

    Science.gov (United States)

    Waliszewska-Prosół, Marta; Nowakowska-Kotas, Marta; Kotas, Roman; Bańkowski, Tomasz; Pokryszko-Dragan, Anna; Podemski, Ryszard

    2018-06-08

    The clinical course of multiple sclerosis (MS) can vary significantly among patients and is affected by exogenous and endogenous factors. Among these, stress and personality type have been gaining more attention. The aim of this study was to investigate the parameters of event-related potentials (ERPs) with regard to stress perception and personality type, as well as cognitive performance in MS patients. The study group consisted of 30 MS patients and 26 healthy controls. Auditory ERPs were recorded in both groups, including an analysis of P300 and N200 response parameters. The Perceived Stress Scale (PSS) was used in the MS group to measure the perception of stress. The Type D Scale (DS14) was used to determine the features of Type D personality, characterized by social inhibition and negative affectivity. The score on the PSS corresponded with a moderate or high level of stress perception in 63% of MS patients, while 23% of patients presented with a Type D personality. P300 latencies were significantly longer (p = 0.001), N200 amplitudes were significantly higher (p = 0.004), and N200 latencies were longer in MS patients than in the controls. Strong positive correlations were found between N200 and P300 amplitudes, as well as between the DS14 and PSS results. Most MS patients experience moderate to severe stress. ERP abnormalities were found in MS patients who did not have overt cognitive impairment and showed correlations with stress levels and negative affectivity. Event-related potentials may be useful in assessing the influence of stress and emotions on the course of MS.

  18. Comparison of effects of valsartan and amlodipine on cognitive functions and auditory p300 event-related potentials in elderly hypertensive patients.

    Science.gov (United States)

    Katada, Eiichi; Uematsu, Norihiko; Takuma, Yuko; Matsukawa, Noriyuki

    2014-01-01

    We compared the antihypertensive effect of valsartan (VAL) and amlodipine (AML) treatments in elderly hypertensive patients by examining the long-term changes in cognitive function and auditory P300 event-related potentials. We enrolled 20 outpatients (12 men and 8 women, aged 56 to 81 years) with mild to moderate essential hypertension. The subjects were randomly allocated to receive either 80 mg VAL once a day (10 patients) or 5 mg AML once a day (10 patients). Neuropsychological assessment and auditory P300 event-related potentials were obtained before initiation of VAL or AML treatment and after 6 months of treatment. Neuropsychological function was assessed with the Mini-Mental State Examination, verbal fluency, word-list memory, word-list recall, word-list recognition, and Trails B tests. Both groups showed significantly reduced blood pressure after 6 months of treatment, and the intergroup difference was not significant. The mean baseline Mini-Mental State Examination scores of the VAL and AML groups were not significantly different. Amlodipine treatment did not significantly affect any test score, but VAL treatment significantly increased the word-list memory and word-list recall test scores. Valsartan, and not AML, significantly reduced the mean P300 latency after 6 months. These results suggest that VAL exerts a positive effect on cognitive functions, independent of its antihypertensive effect.

  19. Do event-related potentials to infrequent decrements in duration of auditory stimuli demonstrate a memory trace in man?

    Science.gov (United States)

    Näätänen, R; Paavilainen, P; Reinikainen, K

    1989-12-15

    Sequences of identical acoustic stimuli were presented to normal subjects reading a book while event-related brain potentials (ERP) elicited by these stimuli were recorded. Occasional irrelevant decreases and increases in stimulus duration elicited an ERP component called the mismatch negativity (MMN). This component was larger over the right hemisphere irrespective of the ear stimulated. These data implicate memory representations which develop automatically and represent the physical features of the repetitive stimulus accurately. Further, when an input does not match with such a trace the MMN is generated. The memory traces involved appear to be those of the acoustic sensory memory, the 'echoic' memory.

  20. Do event-related potentials reveal the mechanism of the auditory sensory memory in the human brain?

    Science.gov (United States)

    Näätänen, R; Paavilainen, P; Alho, K; Reinikainen, K; Sams, M

    1989-03-27

    Event-related brain potentials (ERP) to task-irrelevant tone pips presented at short intervals were recorded from the scalp of normal human subjects. Infrequent decrements in stimulus intensity elicited the mismatch negativity (MMN) which was larger in amplitude and shorter in latency the softer the deviant stimulus was. The results obtained imply memory representations which develop automatically and accurately represent the physical features of the repetitive stimulus. These memory traces appear to be those of the acoustic sensory memory, the 'echoic' memory. When an input does not match with such a trace the MMN is generated.
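
    In both of the reports above, the MMN is obtained as a deviant-minus-standard difference wave. A minimal sketch of that computation is given below; the time window and the use of the window minimum as the amplitude summary are common conventions rather than details taken from these papers.

        import numpy as np

        def mismatch_negativity(standard_epochs, deviant_epochs, times,
                                window=(0.10, 0.25)):
            # epochs: (n_trials, n_samples) arrays from one electrode; times in seconds.
            # The MMN is the deviant-minus-standard difference wave; because it is a
            # negativity, its amplitude is summarized here as the window minimum.
            difference = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
            mask = (times >= window[0]) & (times <= window[1])
            return difference, difference[mask].min()

        # usage: difference_wave, mmn_amplitude = mismatch_negativity(std, dev, times)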

  1. The role of auditory transient and deviance processing in distraction of task performance: a combined behavioral and event-related brain potential study

    Directory of Open Access Journals (Sweden)

    Stefan Berti

    2013-07-01

    Full Text Available Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday life experience. Different types of changes can be distracting, including the sudden onset of a transient sound and a slight deviation from otherwise regular auditory background stimulation. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanism, and it has been demonstrated that this mechanism is capable of distracting ongoing task performance. In contrast, it remains open whether transient detection – which does not rely on predictive coding mechanisms – can also trigger behavioral distraction. In the present study, the effect of rare auditory changes on visual task performance is tested in an auditory-visual cross-modal distraction paradigm. The rare changes are either embedded within a continuous standard stimulation (triggering deviance detection) or presented within an otherwise silent situation (triggering transient detection). In the event-related brain potentials, deviants elicited the mismatch negativity (MMN) while transients elicited an enhanced N1 component, mirroring pre-attentive change detection in both conditions but on the basis of different neuro-cognitive processes. These sensory components are followed by attention-related ERP components including the P3a and the reorienting negativity (RON). This demonstrates that both types of changes trigger switches of attention. Finally, distraction of task performance is also observable, but the impact of deviants is greater than that of transients. These findings suggest different routes of distraction, allowing for the automatic processing of a wide range of potentially relevant changes in the environment as a pre-requisite for adaptive behavior.

  2. Effects of twenty-minute 3G mobile phone irradiation on event related potential components and early gamma synchronization in auditory oddball paradigm.

    Science.gov (United States)

    Stefanics, G; Thuróczy, G; Kellényi, L; Hernádi, I

    2008-11-19

    We investigated the potential effects of 20 min of irradiation from a new-generation Universal Mobile Telecommunication System (UMTS) 3G mobile phone on human event-related potentials (ERPs) in an auditory oddball paradigm. In a double-blind task design, subjects were exposed to either genuine or sham irradiation in two separate sessions. Before and after irradiation, subjects were presented with a random series of 50 ms tone bursts (frequent standards: 1 kHz, P=0.8; rare deviants: 1.5 kHz, P=0.2) at a mean repetition interval of 1500 ms while the electroencephalogram (EEG) was recorded. The subjects' task was to silently count the targets. The amplitude and latency of the N100, N200, P200 and P300 components for targets and standards were analyzed in 29 subjects. We found no significant effects of electromagnetic field (EMF) irradiation on the amplitude or latency of these ERP components. To study possible effects of EMF on attentional processes, we applied a wavelet-based time-frequency method to analyze the early gamma component of brain responses to auditory stimuli. We found that the early evoked gamma activity was insensitive to UMTS RF exposure. Our results support the notion that a single 20 min exposure to a new-generation 3G mobile phone does not induce measurable changes in the latency or amplitude of ERP components or in oscillatory gamma-band activity in an auditory oddball paradigm.
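
    The early evoked gamma analysis mentioned above rests on a wavelet time-frequency decomposition. The sketch below shows one standard way to compute Morlet-wavelet power from an averaged auditory response; the sampling rate, number of cycles, and 30-60 Hz range are illustrative assumptions, not the parameters used in the study.

        import numpy as np

        def morlet_power(signal, fs, freqs, n_cycles=7):
            # Time-frequency power by convolving a 1-D trace (e.g. the average of the
            # auditory trials) with complex Morlet wavelets, one per frequency.
            power = np.empty((len(freqs), len(signal)))
            for i, f in enumerate(freqs):
                sigma_t = n_cycles / (2 * np.pi * f)             # wavelet width in time
                t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
                wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
                wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy wavelet
                power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
            return power

        # e.g. evoked (phase-locked) gamma in the first ~100 ms after tone onset:
        # power = morlet_power(erp_average, fs=1000, freqs=np.arange(30, 61, 2))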

  3. Attention, awareness, and the perception of auditory scenes

    Directory of Open Access Journals (Sweden)

    Joel S Snyder

    2012-02-01

    Full Text Available Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. Studies have also shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should also allow scientists to distinguish precisely between the contributions of different higher-level factors.

  4. Tracking the time course of word-frequency effects in auditory word recognition with event-related potentials.

    Science.gov (United States)

    Dufour, Sophie; Brunellière, Angèle; Frauenfelder, Ulrich H

    2013-04-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to reflect mechanisms involved in word identification, was also examined. The ERP data showed a clear frequency effect as early as 350 ms from word onset on the P350, followed by a later effect at word offset on the late N400. A neighborhood density effect was also found at an early stage of spoken-word processing on the PMN, and at word offset on the late N400. Overall, our ERP differences for word frequency suggest that frequency affects the core processes of word identification starting from the initial phase of lexical activation and including target word selection. They thus rule out any interpretation of the word frequency effect that is limited to a purely decisional locus after word identification has been completed. Copyright © 2012 Cognitive Science Society, Inc.

  5. Early Top-Down Influences on Bistable Perception Revealed by Event-Related Potentials

    Science.gov (United States)

    Pitts, Michael A.; Gavin, William J.; Nerger, Janice L.

    2008-01-01

    A longstanding debate exists in the literature concerning bottom-up vs. top-down influences on bistable perception. Recently, a technique has been developed to measure early changes in brain activity (via ERPs) related to perceptual reversals (Kornmeier & Bach, 2004). An ERP component, the reversal negativity (RN) has been identified, and is…

  6. REHABILITATION OF PATIENTS WITH ENCEPHALOPATHY CAUSED BY ACUTE CHEMICAL AGENTS POISONING. P300 OF AUDITORY EVENT RELATED POTENTIALS AND ELECTROENCEPHALOGRAPHY

    Directory of Open Access Journals (Sweden)

    I. U. Berezina

    2014-01-01

    Full Text Available RELEVANCE. Patients with encephalopathy due to acute poisoning by chemical agents show changes in brain functioning and cognitive impairment during the rehabilitation program. These changes require adjustment of the diagnostic protocol and treatment. AIM. The aim of this study was to estimate the changes in electroencephalography (EEG) and in the P3 component of the event-related potential (P300 ERP) observed in patients with encephalopathy due to acute poisoning by chemical agents during the rehabilitation stage. MATERIAL AND METHODS. The study included 25 patients (age 37 (32; 51)) poisoned with different kinds of neurotoxic substances (drugs, ethanol), complicated by toxic and hypoxic encephalopathy. They received treatment for encephalopathy with intravenous mexidol, mesodiencephalic modulation (MDM) and hyperbaric oxygen therapy (HBOT). In all patients, EEG (electroencephalograph from the “MBN” company, Russia) and P300 ERP (“Neuron-Spectrum-5/EP”, “Neurosoft”, Russia) were recorded according to the international recommendations for clinical neurophysiology. Neuropsychological testing was used for the assessment of cognitive functions. RESULTS. All subjects showed disturbances in the initial electroencephalograms. On follow-up EEG recordings, patients in the main group, who had received the treatment (mexidol, MDM, HBOT), showed EEG improvement more often (11 patients) than the controls (1 patient), and EEG deterioration was less frequent in the main group than in the control group. Six patients of the main group and 3 controls had no EEG changes on the follow-up recordings. All controls and 17 patients of the main group had various cognitive disturbances. After the treatment, 15 patients of the main group had improved on neuropsychological tests (MMSE, Munsterberg test, Schulte table, Number Connecting Test). They also had a decrease in the N200, P300 peak latency and an increase in the N200, P300

  7. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment

    Directory of Open Access Journals (Sweden)

    Joe Bathelt

    2017-10-01

    Full Text Available Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control group, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities.

  8. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment.

    Science.gov (United States)

    Bathelt, Joe; Dale, Naomi; de Haan, Michelle

    2017-10-01

    Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control group, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  9. Adult age differences in visual search from perception to response: Evidence from event-related potentials

    DEFF Research Database (Denmark)

    Wiegand, Iris

    task, in which the singleton target-defining feature (color/shape) varied independently from the response-defining feature (orientation). Slower responses in older participants were associated with age differences in all analyzed ERP components (PCN, SPCN and LRPs), indicating that slowing originated...... at multiple stages from perception to response. Furthermore, we explored the implicit influence of recently encountered information in terms of intertrial effects. ERPs could disentangle that, while automatic processes of perceptual-dimension priming and response priming across trials were preserved, older...

  10. Separate representation of stimulus frequency, intensity, and duration in auditory sensory memory: an event-related potential and dipole-model analysis.

    Science.gov (United States)

    Giard, M H; Lavikahen, J; Reinikainen, K; Perrin, F; Bertrand, O; Pernier, J; Näätänen, R

    1995-01-01

    The present study analyzed the neural correlates of acoustic stimulus representation in echoic sensory memory. The neural traces of auditory sensory memory were indirectly studied by using the mismatch negativity (MMN), an event-related potential component elicited by a change in a repetitive sound. The MMN is assumed to reflect change detection in a comparison process between the sensory input from a deviant stimulus and the neural representation of repetitive stimuli in echoic memory. The scalp topographies of the MMNs elicited by pure tones deviating from standard tones by either frequency, intensity, or duration varied according to the type of stimulus deviance, indicating that the MMNs for different attributes originate, at least in part, from distinct neural populations in the auditory cortex. This result was supported by dipole-model analysis. If the MMN generator process occurs where the stimulus information is stored, these findings strongly suggest that the frequency, intensity, and duration of acoustic stimuli have a separate neural representation in sensory memory.

  11. Differential activity in left inferior frontal gyrus for pseudowords and real words: an event-related fMRI study on auditory lexical decision.

    Science.gov (United States)

    Xiao, Zhuangwei; Zhang, John X; Wang, Xiaoyi; Wu, Renhua; Hu, Xiaoping; Weng, Xuchu; Tan, Li Hai

    2005-06-01

    After Newman and Twieg and others, we used a fast event-related functional magnetic resonance imaging (fMRI) design and contrasted the lexical processing of pseudowords and real words. Participants carried out an auditory lexical decision task on a list of randomly intermixed real and pseudo Chinese two-character (or two-syllable) words. The pseudowords were constructed by recombining constituent characters of the real words to control for sublexical code properties. Processing of pseudowords and real words activated a highly comparable network of brain regions, including bilateral inferior frontal gyrus, superior, middle temporal gyrus, calcarine and lingual gyrus, and left supramarginal gyrus. Mirroring a behavioral lexical effect, left inferior frontal gyrus (IFG) was significantly more activated for pseudowords than for real words. This result disconfirms a popular view that this area plays a role in grapheme-to-phoneme conversion, as such a conversion process was unnecessary in our task with auditory stimulus presentation. An alternative view was supported that attributes increased activity in left IFG for pseudowords to general processes in decision making, specifically in making positive versus negative responses. Activation in left supramarginal gyrus was of a much larger volume for real words than for pseudowords, suggesting a role of this region in the representation of phonological or semantic information for two-character Chinese words at the lexical level.

  12. Phantom auditory perception (tinnitus): mechanisms of generation and perception.

    Science.gov (United States)

    Jastreboff, P J

    1990-08-01

    Phantom auditory perception--tinnitus--is a symptom of many pathologies. Although there are a number of theories postulating certain mechanisms of its generation, none have been proven yet. This paper analyses the phenomenon of tinnitus from the point of view of general neurophysiology. Existing theories and their extrapolation are presented, together with some new potential mechanisms of tinnitus generation, encompassing the involvement of calcium and calcium channels in cochlear function, with implications for malfunction and aging of the auditory and vestibular systems. It is hypothesized that most tinnitus results from the perception of abnormal activity, defined as activity which cannot be induced by any combination of external sounds. Moreover, it is hypothesized that signal recognition and classification circuits, working on holographic or neuronal network-like representation, are involved in the perception of tinnitus and are subject to plastic modification. Furthermore, it is proposed that all levels of the nervous system, to varying degrees, are involved in tinnitus manifestation. These concepts are used to unravel the inexplicable, unique features of tinnitus and its masking. Some clinical implications of these theories are suggested.

  13. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.
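
    The duple/triple distinction and the channel-splitting manipulation above translate directly into accent patterns. The sketch below is only an illustration of that design logic; the beat counts and the random assignment rule are assumptions, not the authors' stimulus parameters.

        import numpy as np

        def meter_accents(meter="duple", n_beats=12):
            # 1 marks a metrically important (accented) beat, 0 an unaccented one:
            # duple = strong-weak (march), triple = strong-weak-weak (waltz).
            period = 2 if meter == "duple" else 3
            return np.array([1 if i % period == 0 else 0 for i in range(n_beats)])

        def split_between_channels(n_beats, seed=0):
            # Randomly assign each note to the auditory (0) or tactile (1) channel.
            # If all accented notes land in one channel, that channel alone carries the
            # meter; spreading them across channels weakens the unimodal cue, so the
            # meter can only be recovered by integrating the two modalities.
            rng = np.random.default_rng(seed)
            return rng.integers(0, 2, size=n_beats)

        accents = meter_accents("triple")
        channels = split_between_channels(len(accents))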

  14. Hippocampal P3-like auditory event-related potentials are disrupted in a rat model of cholinergic degeneration in Alzheimer's disease: reversal by donepezil treatment.

    Science.gov (United States)

    Laursen, Bettina; Mørk, Arne; Kristiansen, Uffe; Bastlund, Jesper Frank

    2014-01-01

    P300 (P3) event-related potentials (ERPs) have been suggested to be an endogenous marker of cognitive function and auditory oddball paradigms are frequently used to evaluate P3 ERPs in clinical settings. Deficits in P3 amplitude and latency reflect some of the neurological dysfunctions related to several psychiatric and neurological diseases, e.g., Alzheimer's disease (AD). However, only a very limited number of rodent studies have addressed the back-translational validity of the P3-like ERPs as suitable markers of cognition. Thus, the potential of rodent P3-like ERPs to predict pro-cognitive effects in humans remains to be fully validated. The current study characterizes P3-like ERPs in the 192-IgG-SAP (SAP) rat model of the cholinergic degeneration associated with AD. Following training in a combined auditory oddball and lever-press setup, rats were subjected to bilateral intracerebroventricular infusion of 1.25 μg SAP or PBS (sham lesion) and recording electrodes were implanted in hippocampal CA1. Relative to sham-lesioned rats, SAP-lesioned rats had significantly reduced amplitude of P3-like ERPs. P3 amplitude was significantly increased in SAP-treated rats following pre-treatment with 1 mg/kg donepezil. Infusion of SAP reduced the hippocampal choline acetyltransferase activity by 75%. Behaviorally defined cognitive performance was comparable between treatment groups. The present study suggests that AD-like deficits in P3-like ERPs may be mimicked by the basal forebrain cholinergic degeneration induced by SAP. SAP-lesioned rats may constitute a suitable model to test the efficacy of pro-cognitive substances in an applied experimental setup.

  15. Feature Assignment in Perception of Auditory Figure

    Science.gov (United States)

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  16. Event-related potentials reveal linguistic suppression effect but not enhancement effect on categorical perception of color.

    Science.gov (United States)

    Lu, Aitao; Yang, Ling; Yu, Yanping; Zhang, Meichao; Shao, Yulan; Zhang, Honghong

    2014-08-01

    The present study used the event-related potential technique to investigate the nature of linguistic effect on color perception. Four types of stimuli based on hue differences between a target color and a preceding color were used: zero hue step within-category color (0-WC); one hue step within-category color (1-WC); one hue step between-category color (1-BC); and two hue step between-category color (2-BC). The ERP results showed no significant effect of stimulus type in the 100-200 ms time window. However, in the 200-350 ms time window, ERP responses to 1-WC target color overlapped with that to 0-WC target color for right visual field (RVF) but not left visual field (LVF) presentation. For the 1-BC condition, ERP amplitudes were comparable in the two visual fields, both being significantly different from the 0-WC condition. The 2-BC condition showed the same pattern as the 1-BC condition. These results suggest that the categorical perception of color in RVF is due to linguistic suppression on within-category color discrimination but not between-category color enhancement, and that the effect is independent of early perceptual processes. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  17. Differential activity in left inferior frontal gyrus for pseudo and real words: an event-related functional MRI study on auditory lexical decision

    International Nuclear Information System (INIS)

    Xiao Zhuangwei; Xu Weixiong; Zhang Xuexin; Wang Xiaoyi; Weng Xuchu; Wu Renhua; Wu Xiaoping

    2006-01-01

    Objective: To study lexical processing of pseudo words and real words by using a fast event-related functional MRI (ER-fMRI) design. Methods: Participants did an auditory lexical decision task on a list of pseudo-randomly intermixed real and pseudo Chinese two-character (or two-syllable) words. Pseudo words were constructed by recombining constituent characters of the real words to control for sublexical code properties. Results: The behavioral performance of fourteen participants indicated that response to pseudowords was significantly slower and less accurate than to real words (mean error rate: 9.9% versus 3.9%, mean reaction time: 1618 ms versus 1143 ms). Processing of pseudo words and real words activated a highly comparable network of brain regions, including bilateral inferior frontal gyrus, superior, middle temporal gyrus, calcarine and lingual gyrus, and left supramarginal gyrus. Mirroring a behavioral lexical effect, left inferior frontal gyrus (IFG) was significantly more activated for pseudo words than for real words. Conclusion: The results indicate that the involvement of the left inferior frontal gyrus in judging pseudo words and real words is not related to grapheme-to-phoneme conversion, but rather to making positive versus negative responses in decision making. (authors)

  18. Modulation frequency as a cue for auditory speed perception.

    Science.gov (United States)

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
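
    The AM-frequency cue described above is easy to synthesize: a noise "rattle" whose envelope is modulated at a chosen rate. The sketch below generates such stimuli; the sampling rate, modulation depth, and the specific rates are arbitrary choices, and the comments only restate the study's hypothesis about how they should be heard.

        import numpy as np

        def rattling_sound(am_freq_hz, duration_s=1.0, fs=44100, seed=0):
            # White-noise carrier whose amplitude envelope is modulated at am_freq_hz.
            rng = np.random.default_rng(seed)
            t = np.arange(int(duration_s * fs)) / fs
            carrier = rng.standard_normal(t.size)
            envelope = 0.5 * (1.0 + np.sin(2 * np.pi * am_freq_hz * t))   # 0..1 modulation
            return carrier * envelope

        slow = rattling_sound(am_freq_hz=10)   # expected, per the study, to sound slower
        fast = rattling_sound(am_freq_hz=40)   # expected to sound faster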

  19. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  20. The effect of phasic auditory alerting on visual perception

    DEFF Research Database (Denmark)

    Petersen, Anders; Petersen, Annemarie Hilkjær; Bundesen, Claus

    2017-01-01

    /no-alerting design with a pure accuracy-based single-letter recognition task. Computational modeling based on Bundesen’s Theory of Visual Attention was used to examine the effect of phasic alertness on visual processing speed and threshold of conscious perception. Results show that phasic auditory alertness affects...

  1. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  2. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu Yue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a

  3. Odors Bias Time Perception in Visual and Auditory Modalities.

    Science.gov (United States)

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  4. Clinical study on the value of combining neuropsychological tests with auditory event-related potential P300 for cognitive assessment in elderly patients with cerebral small vessel disease

    Directory of Open Access Journals (Sweden)

    Xiao-ling ZHAO

    2016-11-01

    Full Text Available Objective To investigate the value of combining neuropsychological tests with auditory event-related potential (ERP) P300 for cognitive assessment in elderly patients with cerebral small vessel disease (cSVD).  Methods A total of 183 elderly patients with cSVD were enrolled in this study. They were divided into 3 groups according to brain MRI: a lacunar infarct (LACI) group (N = 62), a white matter hyperintensity (WMH) group (N = 60) and a LACI + WMH group (N = 61). A total of 50 persons with normal brain MRI were selected as the control group. The Montreal Cognitive Assessment (MoCA, Chinese version) was used to evaluate cognitive function, and the amplitude and latency of P300 were measured in each group.  Results Compared with the control group, the MoCA total scores in the LACI, WMH and LACI + WMH groups were significantly lower (P = 0.042, 0.015, 0.000), and the score in the LACI + WMH group was significantly lower than that in the LACI and WMH groups (P = 0.001, 0.042). Among the eight cognitive domains of the MoCA scale, visuospatial and executive function (P = 0.006, 0.041, 0.035), delayed memory (P = 0.006, 0.012, 0.048), language (P = 0.001, 0.032, 0.047) and calculation (P = 0.009, 0.001, 0.003) in the LACI + WMH group were significantly lower than those in the control, LACI and WMH groups. Delayed memory in the LACI group was significantly lower than in the control group (P = 0.037). Delayed memory (P = 0.005) and language (P = 0.047) in the WMH group were significantly lower than in the control group. Compared with the control group, the amplitudes of P300 (P = 0.025, 0.033, 0.000) in the LACI, WMH and LACI + WMH groups were significantly decreased, and the latencies (P = 0.018, 0.000, 0.000) were significantly prolonged. The amplitude of P300 in the LACI + WMH group was significantly lower than that in the LACI and WMH groups (P = 0.041, 0.018), and the latency was significantly prolonged (P = 0.000, 0.022).  Conclusions Elderly patients with cSVD all suffer from different degrees of cognitive impairment

  5. A Neural Circuit for Auditory Dominance over Visual Perception.

    Science.gov (United States)

    Song, You-Hyang; Kim, Jae-Hyun; Jeong, Hye-Won; Choi, Ilsong; Jeong, Daun; Kim, Kwansoo; Lee, Seung-Hee

    2017-02-22

    When conflicts occur during integration of visual and auditory information, one modality often dominates the other, but the underlying neural circuit mechanism remains unclear. Using auditory-visual discrimination tasks for head-fixed mice, we found that audition dominates vision in a process mediated by interaction between inputs from the primary visual (VC) and auditory (AC) cortices in the posterior parietal cortex (PTLp). Co-activation of the VC and AC suppresses VC-induced PTLp responses, leaving AC-induced responses. Furthermore, parvalbumin-positive (PV+) interneurons in the PTLp mainly receive AC inputs, and muscimol inactivation of the PTLp or optogenetic inhibition of its PV+ neurons abolishes auditory dominance in the resolution of cross-modal sensory conflicts without affecting either sensory perception. Conversely, optogenetic activation of PV+ neurons in the PTLp enhances the auditory dominance. Thus, our results demonstrate that AC input-specific feedforward inhibition of VC inputs in the PTLp is responsible for the auditory dominance during cross-modal integration. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Neural Correlates of Realistic and Unrealistic Auditory Space Perception

    Directory of Open Access Journals (Sweden)

    Akiko Callan

    2011-10-01

    Full Text Available Binaural recordings can simulate externalized auditory space perception over headphones. However, if the orientation of the recorder's head and the orientation of the listener's head are incongruent, the simulated auditory space is not realistic. For example, if a person lying flat on a bed listens to an environmental sound that was recorded by microphones inserted in the ears of a person who was in an upright position, the sound simulates an auditory space rotated 90 degrees relative to the real-world horizontal axis. Our question is whether brain activation patterns differ between the unrealistic auditory space (ie, the orientation of the listener's head and the orientation of the recorder's head are incongruent) and the realistic auditory space (ie, the orientations are congruent). River sounds that were binaurally recorded either in a supine or in an upright body position served as auditory stimuli. During fMRI experiments, participants listened to the stimuli and pressed one of two buttons indicating the direction of the water flow (horizontal/vertical). Behavioral results indicated that participants could not differentiate between the congruent and the incongruent conditions. However, neuroimaging results showed that the congruent condition activated the planum temporale significantly more than the incongruent condition.

  7. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  8. The effects of interstimulus interval on event-related indices of attention: an auditory selective attention test of perceptual load theory.

    Science.gov (United States)

    Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter

    2008-03-01

    We examined the impact of perceptual load by manipulating interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300 ms), medium (600 ms) and slow (900 ms). Behavioral (accuracy and RT) and electrophysiological measures (Nd, P3b) were observed. In both studies, participants evidenced poorer accuracy during the fast ISI condition than the slow, suggesting that ISI impacted task difficulty. However, none of the three measures of processing examined, Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, or false alarms to unattended deviants, were impacted by ISI in the manner predicted by perceptual load theory. The prediction based on perceptual load theory, that there would be more processing of irrelevant stimuli under conditions of low as compared to high perceptual load, was not supported in these auditory studies. Task difficulty/perceptual load impacts the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.

  9. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Science.gov (United States)

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  10. The effect of phasic auditory alerting on visual perception.

    Science.gov (United States)

    Petersen, Anders; Petersen, Annemarie Hilkjær; Bundesen, Claus; Vangkilde, Signe; Habekost, Thomas

    2017-08-01

    Phasic alertness refers to a short-lived change in the preparatory state of the cognitive system following an alerting signal. In the present study, we examined the effect of phasic auditory alerting on distinct perceptual processes, unconfounded by motor components. We combined an alerting/no-alerting design with a pure accuracy-based single-letter recognition task. Computational modeling based on Bundesen's Theory of Visual Attention was used to examine the effect of phasic alertness on visual processing speed and threshold of conscious perception. Results show that phasic auditory alertness affects visual perception by increasing the visual processing speed and lowering the threshold of conscious perception (Experiment 1). By manipulating the intensity of the alerting cue, we further observed a positive relationship between alerting intensity and processing speed, which was not seen for the threshold of conscious perception (Experiment 2). This was replicated in a third experiment, in which pupil size was measured as a physiological marker of alertness. Results revealed that the increase in processing speed was accompanied by an increase in pupil size, substantiating the link between alertness and processing speed (Experiment 3). The implications of these results are discussed in relation to a newly developed mathematical model of the relationship between levels of alertness and the speed with which humans process visual information. Copyright © 2017 Elsevier B.V. All rights reserved.
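
    The TVA-based analysis referred to above estimates two perceptual parameters, visual processing speed and the threshold of conscious perception, from single-letter report accuracy at varying exposure durations. The Python sketch below fits the standard single-stimulus TVA accuracy function to made-up data; it illustrates the general fitting logic under that assumption and is not the authors' analysis code, and the durations, accuracies and starting values are hypothetical.

        # Minimal sketch (assumption: not the study's code or exact procedure).
        # Fits the single-stimulus TVA accuracy function
        #   p(tau) = 1 - exp(-v * (tau - t0))  for tau > t0, else 0
        # where v is visual processing speed (items/s) and t0 is the threshold
        # of conscious perception (longest ineffective exposure duration).
        import numpy as np
        from scipy.optimize import curve_fit

        def tva_accuracy(tau, v, t0):
            # Exponential accrual of report probability above the threshold t0.
            return np.where(tau > t0, 1.0 - np.exp(-v * (tau - t0)), 0.0)

        # Hypothetical exposure durations (s) and proportions correct.
        tau = np.array([0.010, 0.020, 0.040, 0.080, 0.150, 0.300])
        acc = np.array([0.05, 0.28, 0.55, 0.80, 0.93, 0.99])

        params, _ = curve_fit(tva_accuracy, tau, acc,
                              p0=[30.0, 0.01], bounds=([0.0, 0.0], [200.0, 0.2]))
        v_hat, t0_hat = params
        print(f"processing speed v = {v_hat:.1f} items/s, threshold t0 = {t0_hat * 1000:.1f} ms")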

  11. Race perception and gaze direction differently impair visual working memory for faces: An event-related potential study.

    Science.gov (United States)

    Sessa, Paola; Dalmaso, Mario

    2016-01-01

    Humans are remarkably expert at processing and recognizing faces; however, there are factors that moderate this ability. In the present study, we used the event-related potential technique to investigate the influence of both race and gaze direction on visual working memory (i.e., VWM) face representations. In a change detection task, we orthogonally manipulated race (own-race vs. other-race faces) and eye-gaze direction (direct gaze vs. averted gaze). Participants were required to encode the identities of these faces. We quantified the amount of information encoded in VWM by monitoring the amplitude of the sustained posterior contralateral negativity (SPCN) time-locked to the faces. Notably, race and eye-gaze direction differently modulated SPCN amplitude such that other-race faces elicited reduced SPCN amplitudes compared with own-race faces only when displaying a direct gaze. On the other hand, faces displaying averted gaze, independently of their race, elicited increased SPCN amplitudes compared with faces displaying direct gaze. We interpret these findings as denoting that race and eye-gaze direction affect different face processing stages.

  12. Classification of underwater target echoes based on auditory perception characteristics

    Science.gov (United States)

    Li, Xiukun; Meng, Xiangxia; Liu, Hang; Liu, Mingye

    2014-06-01

    In underwater target detection, bottom reverberation shares some properties with the target echo, which greatly degrades detection performance. It is therefore essential to study the differences between the target echo and reverberation. In this paper, motivated by the unique ability of human listeners to distinguish sound sources, the Gammatone filter is taken as the auditory model. Time-frequency perception features and auditory spectral features are then extracted to separate active sonar target echoes from bottom reverberation. For the experimental data, these features cluster tightly within each class and differ substantially between classes, which shows that the method can effectively distinguish between the target echo and reverberation.
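
    The record names a Gammatone filterbank front end with auditory-spectrum-style features. Below is a rough, self-contained Python sketch of that kind of front end: a bank of fourth-order gammatone filters spaced on the ERB scale, with per-band log energies used as features. The channel count, frequency range and toy signals are assumptions for illustration only; this is not the paper's implementation.

        # Minimal sketch of gammatone-filterbank feature extraction (assumed
        # parameters; not the paper's implementation). Each channel's log energy
        # serves as a simple "auditory spectrum" feature for echo/reverberation
        # classification.
        import numpy as np
        from scipy.signal import fftconvolve

        def erb(fc):
            # Equivalent rectangular bandwidth (Glasberg & Moore, 1990).
            return 24.7 * (4.37 * fc / 1000.0 + 1.0)

        def gammatone_ir(fc, fs, duration=0.05, order=4):
            # 4th-order gammatone impulse response at centre frequency fc.
            t = np.arange(int(duration * fs)) / fs
            b = 1.019 * erb(fc)
            return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

        def gammatone_features(x, fs, n_channels=32, fmin=100.0, fmax=8000.0):
            # Centre frequencies spaced uniformly on the ERB-rate scale.
            erb_rate = lambda f: 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)
            inv_erb_rate = lambda e: (10 ** (e / 21.4) - 1.0) * 1000.0 / 4.37
            cfs = inv_erb_rate(np.linspace(erb_rate(fmin), erb_rate(fmax), n_channels))
            feats = []
            for fc in cfs:
                y = fftconvolve(x, gammatone_ir(fc, fs), mode="same")
                feats.append(10 * np.log10(np.mean(y ** 2) + 1e-12))  # band log-energy (dB)
            return np.array(feats)

        if __name__ == "__main__":
            fs = 48000
            t = np.arange(int(0.1 * fs)) / fs
            echo = np.sin(2 * np.pi * 3000 * t) * np.exp(-40 * t)   # toy "target echo"
            reverb = np.random.randn(t.size) * np.exp(-10 * t)      # toy "reverberation"
            print(gammatone_features(echo, fs) - gammatone_features(reverb, fs))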

  13. The influence of auditory and visual information on the perception of crispy food

    NARCIS (Netherlands)

    Pocztaruk, R.D.; Abbink, J.H.; Wijk, de R.A.; Frasca, L.C.D.; Gaviao, M.B.D.; Bilt, van de A.

    2011-01-01

    The influence of auditory and/or visual information on the perception of crispy food and on the physiology of chewing was investigated. Participants chewed biscuits of three different levels of crispness under four experimental conditions: no masking, auditory masking, visual masking, and auditory

  14. Event-related potentials in auditory backward recognition masking: a new way to study the neurophysiological basis of sensory memory in humans.

    Science.gov (United States)

    Winkler, I; Näätänen, R

    1992-06-22

    Task-irrelevant pairs of short tones were presented to healthy human subjects while electric potentials were recorded from their scalp ('event-related brain potential', ERP). Infrequent increments in the frequency of the first tone of the repetitive tone-pair elicited an extra ERP component termed 'mismatch negativity' (MMN) when the silent interval between the first and second tone of the pair ('inter-tone interval') was long (150, 300, or 400 ms) but not when this interval was short (20 or 50 ms). This effect did not depend on whether the two tones of the tone-pair were presented to the same or to different ears. The present inter-tone interval effect is consistent with the effects of backward-masking on recognition performance in audition, suggesting that the MMN reflects the neurophysiological basis of echoic memory.
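
    To make the paradigm's structure concrete, the sketch below generates a tone-pair oddball sequence of the kind described: repeated pairs of short tones separated by a silent inter-tone interval, with infrequent increments in the frequency of the first tone. The specific frequencies, tone duration, deviant probability and pair-onset asynchrony are illustrative assumptions, not the published stimulus parameters.

        # Sketch of a backward-masking tone-pair oddball sequence (standard vs.
        # deviant first tone, variable inter-tone interval). Frequencies, durations
        # and deviant probability below are assumptions for illustration.
        import numpy as np

        FS = 44100

        def tone(freq, dur, fs=FS):
            t = np.arange(int(dur * fs)) / fs
            return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)  # ramped tone burst

        def tone_pair(first_freq, iti, second_freq=1000.0, dur=0.025, fs=FS):
            # First tone, silent inter-tone interval, then the second (masking) tone.
            gap = np.zeros(int(iti * fs))
            return np.concatenate([tone(first_freq, dur), gap, tone(second_freq, dur)])

        def oddball_sequence(n_pairs=200, p_deviant=0.15, iti=0.150,
                             std_freq=600.0, dev_freq=650.0, soa=0.8, fs=FS):
            rng = np.random.default_rng(0)
            chunks, labels = [], []
            for _ in range(n_pairs):
                deviant = rng.random() < p_deviant       # infrequent frequency increment
                first = dev_freq if deviant else std_freq
                pair = tone_pair(first, iti)
                pad = np.zeros(max(0, int(soa * fs) - pair.size))  # silence up to next pair
                chunks.append(np.concatenate([pair, pad]))
                labels.append(int(deviant))
            return np.concatenate(chunks), np.array(labels)

        seq, labels = oddball_sequence()
        print(f"{labels.sum()} deviant pairs out of {labels.size}; {seq.size / FS:.1f} s of audio")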

  15. Relating binaural pitch perception to the individual listener's auditory profile.

    Science.gov (United States)

    Santurette, Sébastien; Dau, Torsten

    2012-04-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception was linked to a specific deficit, the auditory profiles of the individual listeners were characterized using measures of loudness perception, cognitive ability, binaural processing, temporal fine structure processing, and frequency selectivity, in addition to common audiometric measures. Two of the listeners were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization.
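
    This record does not state which binaural-pitch stimulus was used; a common choice in such studies is the Huggins pitch, in which both ears receive the same broadband noise except for an interaural phase transition over a narrow band around the target frequency, producing a pitch percept that is absent monaurally. The Python sketch below generates such a stimulus under that assumption; the centre frequency and transition bandwidth are illustrative.

        # Sketch of a Huggins-pitch stimulus, one common binaural-pitch type
        # (an assumption; the study's exact stimuli are not given in this record).
        # Both ears get the same noise except for an interaural phase transition
        # in a narrow band around f0, which evokes a faint pitch at f0.
        import numpy as np

        def huggins_pitch(f0=600.0, bw_ratio=0.16, dur=1.0, fs=44100, seed=0):
            rng = np.random.default_rng(seed)
            n = int(dur * fs)
            noise = rng.standard_normal(n)
            spec = np.fft.rfft(noise)
            freqs = np.fft.rfftfreq(n, 1.0 / fs)
            lo, hi = f0 * (1 - bw_ratio / 2), f0 * (1 + bw_ratio / 2)
            band = (freqs >= lo) & (freqs <= hi)
            # Linear 0 -> 2*pi phase transition across the band, right ear only.
            phase = np.zeros_like(freqs)
            phase[band] = 2 * np.pi * (freqs[band] - lo) / (hi - lo)
            left = np.fft.irfft(spec, n)
            right = np.fft.irfft(spec * np.exp(1j * phase), n)
            stereo = np.stack([left, right], axis=1)
            return 0.1 * stereo / np.max(np.abs(stereo))   # scale to avoid clipping

        stim = huggins_pitch()
        print(stim.shape)   # (samples, 2): identical monaural spectra, dichotic phase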

  16. Multisensory object perception in infancy: 4-month-olds perceive a mistuned harmonic as a separate auditory and visual object.

    Science.gov (United States)

    Smith, Nicholas A; Folland, Nicole A; Martinez, Diana M; Trainor, Laurel J

    2017-07-01

    Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain, Theunissen, Chevalier, Batty, & Taylor, 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Brain dynamics that correlate with effects of learning on auditory distance perception

    Directory of Open Access Journals (Sweden)

    Matthew G. Wisniewski

    2014-12-01

    Full Text Available Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners’ accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.
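
    The spectral measures described (theta ERS, alpha ERD, upper-alpha/low-beta ERS) are baseline-normalized event-related spectral perturbations computed on independent-component activations. The sketch below shows one minimal way to compute such a time-frequency measure with Morlet wavelets in plain numpy; the array shapes, wavelet settings and toy data are assumptions, and the study itself used an ICA-based EEG pipeline rather than this code.

        # Minimal sketch of an event-related spectral perturbation (ERSP) computation
        # on epoched single-trial data (e.g., independent-component activations).
        # Array shapes and parameters are illustrative assumptions, not the paper's.
        import numpy as np

        def morlet(freq, fs, n_cycles=6):
            # Complex Morlet wavelet with a Gaussian envelope of n_cycles cycles.
            sigma_t = n_cycles / (2 * np.pi * freq)
            t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
            wav = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
            return wav / np.sqrt(np.sum(np.abs(wav) ** 2))

        def ersp(epochs, fs, freqs, baseline):
            # epochs: (n_trials, n_samples); baseline: (start, end) sample indices.
            n_trials, n_samples = epochs.shape
            power = np.zeros((len(freqs), n_samples))
            for fi, f in enumerate(freqs):
                w = morlet(f, fs)
                for trial in epochs:
                    analytic = np.convolve(trial, w, mode="same")
                    power[fi] += np.abs(analytic) ** 2
            power /= n_trials
            base = power[:, baseline[0]:baseline[1]].mean(axis=1, keepdims=True)
            return 10 * np.log10(power / base)   # dB change from baseline (ERS > 0, ERD < 0)

        fs = 250
        epochs = np.random.randn(40, 2 * fs)     # 40 trials, 2-s epochs (toy data)
        freqs = np.arange(4, 17)                 # 4-16 Hz covers theta/alpha/low beta
        tf = ersp(epochs, fs, freqs, baseline=(0, fs // 2))
        print(tf.shape)                          # (n_freqs, n_samples)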

  18. Pleasant and unpleasant odour-face combinations influence face and odour perception: An event-related potential study.

    Science.gov (United States)

    Cook, Stephanie; Kokmotou, Katerina; Soto, Vicente; Fallon, Nicholas; Tyson-Carr, John; Thomas, Anna; Giesbrecht, Timo; Field, Matt; Stancak, Andrej

    2017-08-30

    Odours alter evaluations of concurrent visual stimuli. However, neural mechanisms underlying the effects of congruent and incongruent odours on facial expression perception are not clear. Moreover, the influence of emotional faces on odour perception is not established. We investigated the effects of one pleasant and one unpleasant odour paired with happy and disgusted faces, on subjective ratings and ERP responses to faces. Participants rated the pleasantness of happy and disgusted faces that appeared during 3s pleasant or unpleasant odour pulses, or without odour. Odour pleasantness and intensity ratings were recorded in each trial. EEG was recorded continuously using a 128-channel system. Happy and disgusted faces paired with pleasant and unpleasant odour were rated as more or less pleasant, respectively, compared to the same faces presented in the other odour conditions. Odours were rated as more pleasant when paired with happy faces, and unpleasant odour was rated more intense when paired with disgusted faces. Unpleasant odour paired with disgusted faces also decreased inspiration. Odour-face interactions were evident in the N200 and N400 components. Our results reveal bi-directional effects of odours and faces, and suggest that odour-face interactions may be represented in ERP components. Pairings of unpleasant odour and disgusted faces resulted in stronger hedonic ratings, ERP changes, increased odour intensity ratings and respiratory adjustment. This finding likely represents heightened adaptive responses to multimodal unpleasant stimuli, prompting appropriate behaviour in the presence of danger. Copyright © 2017. Published by Elsevier B.V.

  19. The relationship of phonological ability, speech perception and auditory perception in adults with dyslexia.

    Directory of Open Access Journals (Sweden)

    Jeremy eLaw

    2014-07-01

    Full Text Available This study investigated whether auditory, speech perception and phonological skills are tightly interrelated or independently contributing to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal reading adults. Phonological skills were tested by the typical threefold tasks, i.e. rapid automatic naming, verbal short term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) and an amplitude rise time (RT); an intensity discrimination task (ID) was included as a non-dynamic control task. Speech perception was assessed by means of sentences and words in noise tasks. Group analysis revealed significant group differences in auditory tasks (i.e. RT and ID) and in phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading but this relation was mediated by phonological processing and not by speech in noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet each individual dyslexic reader does not display a clear pattern of deficiencies across the levels of processing skills. Although our results support phonological and slow-rate dynamic auditory deficits which relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors are expressed and interact with environmental and higher-order cognitive influences.

  20. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    Science.gov (United States)

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  1. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptual problems of dyslexics. A contentious research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in literate adults and children; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Modulation of Illusory Auditory Perception by Transcranial Electrical Stimulation

    Directory of Open Access Journals (Sweden)

    Giulia Prete

    2017-06-01

    Full Text Available The aim of the present study was to test whether transcranial electrical stimulation can modulate illusory perception in the auditory domain. In two separate experiments we applied transcranial Direct Current Stimulation (anodal/cathodal tDCS, 2 mA; N = 60) and high-frequency transcranial Random Noise Stimulation (hf-tRNS, 1.5 mA, offset 0; N = 45) on the temporal cortex during the presentation of the stimuli eliciting the Deutsch's illusion. The illusion arises when two sine tones spaced one octave apart (400 and 800 Hz) are presented dichotically in alternation, one in the left and the other in the right ear, so that when the right ear receives the high tone, the left ear receives the low tone, and vice versa. The majority of the population perceives one high-pitched tone in one ear alternating with one low-pitched tone in the other ear. The results revealed that neither anodal nor cathodal tDCS applied over the left/right temporal cortex modulated the perception of the illusion, whereas hf-tRNS applied bilaterally on the temporal cortex reduced the number of times the sequence of sounds is perceived as the Deutsch's illusion with respect to the sham control condition. The stimulation time before the beginning of the task (5 or 15 min) did not influence the perceptual outcome. In accordance with previous findings, we conclude that hf-tRNS can modulate auditory perception more efficiently than tDCS.
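
    The dichotic sequence that elicits Deutsch's illusion is described precisely enough to sketch: 400-Hz and 800-Hz sine tones alternate between the ears so that one ear always receives the high tone while the other receives the low tone. The Python sketch below builds such a stereo sequence; the 250-ms tone duration and the ramp length are assumptions, since this record does not give them.

        # Sketch of the dichotic sequence behind Deutsch's octave illusion:
        # 400 Hz and 800 Hz sine tones alternate between the ears, so that whenever
        # one ear gets the high tone the other gets the low tone. The 250-ms tone
        # duration and 10-ms ramps are assumptions (not stated in this record).
        import numpy as np

        def deutsch_illusion(n_cycles=20, tone_dur=0.25, fs=44100,
                             f_low=400.0, f_high=800.0):
            t = np.arange(int(tone_dur * fs)) / fs
            ramp = np.hanning(int(0.01 * fs))          # 10-ms on/off ramps
            env = np.ones(t.size)
            env[:ramp.size // 2] = ramp[:ramp.size // 2]
            env[-(ramp.size // 2):] = ramp[-(ramp.size // 2):]
            low = env * np.sin(2 * np.pi * f_low * t)
            high = env * np.sin(2 * np.pi * f_high * t)
            left, right = [], []
            for i in range(n_cycles):
                if i % 2 == 0:
                    left.append(high); right.append(low)    # left high / right low
                else:
                    left.append(low); right.append(high)    # then swap ears
            stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
            return 0.2 * stereo

        stim = deutsch_illusion()
        print(f"{stim.shape[0] / 44100:.1f} s dichotic sequence, channels = {stim.shape[1]}")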

  3. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] … discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key … properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications…
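
    This record only names the model's lineage (the Dau et al. modulation filterbank model), so the sketch below illustrates, in very reduced form, the envelope-processing idea behind that family of models: half-wave rectification and low-pass smoothing of a peripheral channel, followed by a bank of band-pass modulation filters applied to the envelope. Filter orders, cut-offs and modulation centre frequencies are illustrative assumptions, and major stages of the published model (e.g., adaptation loops and the optimal detector) are omitted.

        # Rough sketch of envelope processing in modulation-filterbank models of
        # this family: half-wave rectification, low-pass "hair-cell" smoothing,
        # then band-pass modulation filters on the envelope. Parameters are
        # illustrative assumptions, not the published values.
        import numpy as np
        from scipy.signal import butter, sosfilt

        def envelope(band_signal, fs, cutoff=1000.0):
            # Half-wave rectification followed by low-pass filtering.
            rectified = np.maximum(band_signal, 0.0)
            sos = butter(2, cutoff, btype="lowpass", fs=fs, output="sos")
            return sosfilt(sos, rectified)

        def modulation_filterbank(env, fs, centre_freqs=(4, 8, 16, 32, 64), q=1.0):
            # One second-order band-pass filter per modulation centre frequency.
            outputs = []
            for fm in centre_freqs:
                low, high = fm * (1 - 0.5 / q), fm * (1 + 0.5 / q)
                sos = butter(2, [low, high], btype="bandpass", fs=fs, output="sos")
                outputs.append(sosfilt(sos, env))
            return np.array(outputs)

        fs = 16000
        t = np.arange(fs) / fs
        # Toy input: 1-kHz carrier, 100% amplitude-modulated at 16 Hz.
        x = (1 + np.sin(2 * np.pi * 16 * t)) * np.sin(2 * np.pi * 1000 * t)
        env = envelope(x, fs)
        mod_out = modulation_filterbank(env, fs)
        print(np.std(mod_out, axis=1))   # the channel centred near 16 Hz responds most strongly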

  4. Evaluating the loudness of phantom auditory perception (tinnitus) in rats.

    Science.gov (United States)

    Jastreboff, P J; Brennan, J F

    1994-01-01

    Using our behavioral paradigm for evaluating tinnitus, the loudness of salicylate-induced tinnitus was evaluated in 144 rats by comparing their behavioral responses induced by different doses of salicylate to those induced by different intensities of a continuous reference tone mimicking tinnitus. Group differences in resistance to extinction were linearly related to salicylate dose and, at moderate intensities, to the reference tone as well. Comparison of regression equations for salicylate versus tone effects permitted estimation of the loudness of salicylate-induced tinnitus. These results extend the animal model of tinnitus and provide evidence that the loudness of phantom auditory perception is expressed through observable behavior, can be evaluated, and its changes detected.
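
    The loudness estimate described rests on equating two regression lines: behavioral extinction scores as a function of salicylate dose, and the same scores as a function of reference-tone intensity. The toy Python sketch below shows that equating step with made-up numbers; it is not the study's data or analysis code.

        # Toy sketch of the regression-equating logic described: fit the behavioural
        # measure against salicylate dose and against reference-tone intensity, then
        # map a dose onto the tone intensity that yields the same predicted score.
        # All numbers below are made-up illustrations, not the study's data.
        import numpy as np

        dose_mg_kg = np.array([0.0, 50.0, 150.0, 300.0])
        score_dose = np.array([1.0, 1.8, 3.1, 4.9])       # hypothetical extinction scores

        tone_db_spl = np.array([20.0, 40.0, 60.0, 80.0])
        score_tone = np.array([1.2, 2.4, 3.6, 4.8])       # hypothetical extinction scores

        # Linear fits: score = a * x + b for each manipulation.
        a_d, b_d = np.polyfit(dose_mg_kg, score_dose, 1)
        a_t, b_t = np.polyfit(tone_db_spl, score_tone, 1)

        def equivalent_tone_level(dose):
            # Tone intensity predicted to produce the same score as the given dose.
            predicted_score = a_d * dose + b_d
            return (predicted_score - b_t) / a_t

        print(f"300 mg/kg salicylate ~ {equivalent_tone_level(300.0):.1f} dB SPL tone (toy numbers)")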

  5. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    Full Text Available We have recently demonstrated that alternating left-right sound sources induce motion perception to static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception, Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in the situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  6. Auditory object perception: A neurobiological model and prospective review.

    Science.gov (United States)

    Brefczynski-Lewis, Julie A; Lewis, James W

    2017-10-01

    Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters the cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic

  7. Auditory deficits in amusia extend beyond poor pitch perception.

    Science.gov (United States)

    Whiteford, Kelly L; Oxenham, Andrew J

    2017-05-01

    Congenital amusia is a music perception disorder believed to reflect a deficit in fine-grained pitch perception and/or short-term or working memory for pitch. Because most measures of pitch perception include memory and segmentation components, it has been difficult to determine the true extent of pitch processing deficits in amusia. It is also unclear whether pitch deficits persist at frequencies beyond the range of musical pitch. To address these questions, experiments were conducted with amusics and matched controls, manipulating both the stimuli and the task demands. First, we assessed pitch discrimination at low (500 Hz and 2000 Hz) and high (8000 Hz) frequencies using a three-interval forced-choice task. Amusics exhibited deficits even at the highest frequency, which lies beyond the existence region of musical pitch. Next, we assessed the extent to which frequency coding deficits persist in one- and two-interval frequency-modulation (FM) and amplitude-modulation (AM) detection tasks at 500 Hz at slow (fm = 4 Hz) and fast (fm = 20 Hz) modulation rates. Amusics still exhibited deficits in one-interval FM detection tasks that should not involve memory or segmentation. Surprisingly, amusics were also impaired on AM detection, which should not involve pitch processing. Finally, direct comparisons between the detection of continuous and discrete FM demonstrated that amusics suffer deficits in both coding and segmenting pitch information. Our results reveal auditory deficits in amusia extending beyond pitch perception that are subtle when controlling for memory and segmentation, and are likely exacerbated in more complex contexts such as musical listening. Copyright © 2017 Elsevier Ltd. All rights reserved.
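
    The FM and AM detection stimuli are specified as a 500-Hz carrier modulated at 4 or 20 Hz. The sketch below generates such test tones; the modulation depths (frequency excursion and AM index) are illustrative assumptions, since thresholds for those depths are precisely what the tasks measure.

        # Sketch of the FM and AM test tones implied by the task description:
        # 500-Hz carrier, modulation rates of 4 or 20 Hz. The modulation depths
        # (FM excursion, AM index) are assumptions for illustration only.
        import numpy as np

        def fm_tone(fc=500.0, fm=4.0, delta_f=10.0, dur=1.0, fs=44100):
            # Sinusoidal frequency modulation: instantaneous frequency is
            # fc + delta_f * sin(2*pi*fm*t), integrated into the phase.
            t = np.arange(int(dur * fs)) / fs
            phase = 2 * np.pi * fc * t - (delta_f / fm) * np.cos(2 * np.pi * fm * t)
            return np.sin(phase)

        def am_tone(fc=500.0, fm=4.0, depth=0.2, dur=1.0, fs=44100):
            # Sinusoidal amplitude modulation with modulation index `depth`.
            t = np.arange(int(dur * fs)) / fs
            return (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

        slow_fm, fast_fm = fm_tone(fm=4.0), fm_tone(fm=20.0)
        slow_am, fast_am = am_tone(fm=4.0), am_tone(fm=20.0)
        print(slow_fm.shape, fast_am.shape)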

  8. Effect of Size Change and Brightness Change of Visual Stimuli on Loudness Perception and Pitch Perception of Auditory Stimuli

    Directory of Open Access Journals (Sweden)

    Syouya Tanabe

    2011-10-01

    Full Text Available People obtain a great deal of information from visual and auditory sensation in daily life. Regarding the effect of visual stimuli on the perception of auditory stimuli, numerous studies have examined phonological perception and sound localization. This study examined the effect of visual stimuli on the perception of the loudness and pitch of auditory stimuli. We used images of figures whose size or brightness changed as visual stimuli, and pure tones whose loudness or pitch changed as auditory stimuli. These visual and auditory stimuli were combined independently to create four types of audio-visual multisensory stimuli for psychophysical experiments. In the experiments, participants judged the change in loudness or pitch of the auditory stimuli while also judging the direction of the size change or the kind of figure presented in the visual stimuli; they therefore could not ignore the visual stimuli while judging the auditory stimuli. As a result, the perception of loudness and pitch was enhanced significantly around the difference limen when the image was getting bigger or brighter, compared with the case in which the image did not change. This indicates that the perception of loudness and pitch is affected by changes in the size and brightness of visual stimuli.

  9. Neural correlates of opposing effects of emotional distraction on perception and episodic memory: An event-related fMRI investigation

    Directory of Open Access Journals (Sweden)

    Andrea Taylor Shafer

    2012-09-01

    Full Text Available A main question in the emotion and memory literature concerns the relationship between the immediate impact of emotional distraction on perception and the long-term impact of emotion on memory. While previous research shows both automatic and resource-mediated mechanisms to be involved in initial emotion processing and memory, it remains unclear what the exact relationship between the immediate and long-term effects is, and how this relationship may change as a function of manipulations at perception favoring the engagement of either more automatic or more mediated mechanisms. Using event-related fMRI, we varied the degree of resource availability for processing task-irrelevant emotional information, to determine how the initial (impairing) impact of emotional distraction related to the long-term (enhancing) impact of emotion on memory. Results showed that a direct relationship between emotional distraction and memory was dependent on automatic mechanisms, as it was found only under conditions of limited resource availability and engagement of amygdala (AMY)-hippocampal (HC) mechanisms in both impairing and enhancing effects. A hemispheric dissociation was also identified in AMY-HC: while both sides were associated with emotional distraction and the left AMY and anterior HC were linked to emotional memory, functional asymmetry was identified only in the posterior HC, with only the left side contributing to emotional memory. Finally, areas dissociating between the two opposing effects included the medial frontal, precentral, superior temporal, and middle occipital gyri (linked to emotional distraction), and the superior parietal cortex (linked to emotional memory). These findings demonstrate that the relationship between emotional distraction and memory is context dependent and that specific brain regions may be more or less susceptible to the direction of emotional modulation (increased or decreased), depending on the task manipulation and processes

  10. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations.

    Science.gov (United States)

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remains unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval times of visually and aurally presented objects shared a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

  11. The Perception of Cooperativeness Without Any Visual or Auditory Communication.

    Science.gov (United States)

    Chang, Dong-Seon; Burger, Franziska; Bülthoff, Heinrich H; de la Rosa, Stephan

    2015-12-01

    Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal.

  12. The Perception of Cooperativeness Without Any Visual or Auditory Communication

    Directory of Open Access Journals (Sweden)

    Dong-Seon Chang

    2015-12-01

    Full Text Available Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal.

  13. Fundamental deficits of auditory perception in Wernicke's aphasia.

    Science.gov (United States)

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
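
    The thresholds here come from criterion-free adaptive procedures. As one concrete example of that class, the sketch below runs a standard two-down/one-up staircase, which converges near the 70.7%-correct point of a simulated listener's psychometric function; the step sizes, stopping rule and simulated listener are illustrative assumptions rather than the study's exact procedure.

        # Sketch of a criterion-free adaptive procedure of the kind referred to:
        # a two-down/one-up staircase. Step sizes, reversal count and the simulated
        # listener below are illustrative assumptions.
        import numpy as np

        def two_down_one_up(listener, start=20.0, step=2.0, min_step=0.5,
                            n_reversals=10, rng=None):
            # Two consecutive correct responses lower the level (harder);
            # one incorrect response raises it (easier).
            rng = rng or np.random.default_rng(0)
            level, n_correct, last_move = start, 0, 0
            reversals = []
            while len(reversals) < n_reversals:
                if listener(level, rng):
                    n_correct += 1
                    move = -1 if n_correct == 2 else 0
                else:
                    move, n_correct = +1, 0
                if move:
                    if move == -1:
                        n_correct = 0
                    if last_move and move != last_move:
                        reversals.append(level)          # direction change = reversal
                        step = max(min_step, step / 2)   # shrink step after each reversal
                    level = max(0.0, level + move * step)
                    last_move = move
            return float(np.mean(reversals[len(reversals) // 2:]))  # mean of late reversals

        def simulated_listener(level, rng, threshold=8.0, slope=0.5):
            # Toy 2AFC psychometric function rising from 50% (chance) to 100% correct.
            p_correct = 0.5 + 0.5 / (1.0 + np.exp(-slope * (level - threshold)))
            return rng.random() < p_correct

        print(f"estimated threshold ~ {two_down_one_up(simulated_listener):.1f}")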

  14. Effects of Auditory Stimuli on Visual Velocity Perception

    Directory of Open Access Journals (Sweden)

    Michiaki Shibata

    2011-10-01

    Full Text Available We investigated the effects of auditory stimuli on the perceived velocity of a moving visual stimulus. Previous studies have reported that the duration of visual events is perceived as being longer for events filled with auditory stimuli than for events not filled with auditory stimuli, ie, the so-called “filled-duration illusion.” In this study, we have shown that auditory stimuli also affect the perceived velocity of a moving visual stimulus. In Experiment 1, a moving comparison stimulus (4.2∼5.8 deg/s) was presented together with filled (or unfilled) white-noise bursts or with no sound. The standard stimulus was a moving visual stimulus (5 deg/s) presented before or after the comparison stimulus. The participants had to judge which stimulus was moving faster. The results showed that the perceived velocity in the auditory-filled condition was lower than that in the auditory-unfilled and no-sound conditions. In Experiment 2, we investigated the effects of auditory stimuli on velocity adaptation. The results showed that the effects of velocity adaptation in the auditory-filled condition were weaker than those in the no-sound condition. These results indicate that auditory stimuli tend to decrease the perceived velocity of a moving visual stimulus.

  15. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2012-01-01

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  16. Auditory feedback affects perception of effort when exercising with a Pulley machine

    DEFF Research Database (Denmark)

    Bordegoni, Monica; Ferrise, Francesco; Grani, Francesco

    2013-01-01

    In this paper we describe an experiment that investigates the role of auditory feedback in affecting the perception of effort when using a physical pulley machine. Specifically, we investigated whether variations in the amplitude and frequency content of the pulley sound affect the perception of effort.

  17. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.

  18. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    Science.gov (United States)

    Wengenroth, Martina; Blatow, Maria; Bendszus, Martin; Schneider, Peter

    2010-08-23

    Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  19. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

    To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors. Eighty-five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated. Statistical analysis considered a significance level of 5%. None of the professors indicated absence of noise. Responses were grouped into Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were the yard and another class, both classified as high intensity, as well as poor acoustics and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory and emotional complaints. There was also no association between referred noise and hearing complaints, although group A showed a higher occurrence of responses regarding noise nuisance, hearing difficulty, dizziness/vertigo, tinnitus, and earache. There was an association between referred noise and voice alterations, with group NA presenting a higher percentage of cases with voice alterations than group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. Hearing complaints were more evident among professors in group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravating factor.

  20. The Phonotactic Influence on the Perception of a Consonant Cluster /pt/ by Native English and Native Polish Listeners: A Behavioral and Event Related Potential (ERP) Study

    Science.gov (United States)

    Wagner, Monica; Shafer, Valerie L.; Martin, Brett; Steinschneider, Mitchell

    2012-01-01

    The effect of exposure to the contextual features of the /pt/ cluster was investigated in native-English and native-Polish listeners using behavioral and event-related potential (ERP) methodology. Both groups experience the /pt/ cluster in their languages, but only the Polish group experiences the cluster in the context of word onset examined in…

  1. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    Science.gov (United States)

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants; article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short-listed based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in the current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as to sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle-latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory band activity may act as

  2. Visual Temporal Acuity Is Related to Auditory Speech Perception Abilities in Cochlear Implant Users.

    Science.gov (United States)

    Jahn, Kelly N; Stevenson, Ryan A; Wallace, Mark T

    Despite significant improvements in speech perception abilities following cochlear implantation, many prelingually deafened cochlear implant (CI) recipients continue to rely heavily on visual information to develop speech and language. Increased reliance on visual cues for understanding spoken language could lead to the development of unique audiovisual integration and visual-only processing abilities in these individuals. Brain imaging studies have demonstrated that good CI performers, as indexed by auditory-only speech perception abilities, have different patterns of visual cortex activation in response to visual and auditory stimuli as compared with poor CI performers. However, no studies have examined whether speech perception performance is related to any type of visual processing abilities following cochlear implantation. The purpose of the present study was to provide a preliminary examination of the relationship between clinical, auditory-only speech perception tests, and visual temporal acuity in prelingually deafened adult CI users. It was hypothesized that prelingually deafened CI users, who exhibit better (i.e., more acute) visual temporal processing abilities would demonstrate better auditory-only speech perception performance than those with poorer visual temporal acuity. Ten prelingually deafened adult CI users were recruited for this study. Participants completed a visual temporal order judgment task to quantify visual temporal acuity. To assess auditory-only speech perception abilities, participants completed the consonant-nucleus-consonant word recognition test and the AzBio sentence recognition test. Results were analyzed using two-tailed partial Pearson correlations, Spearman's rho correlations, and independent samples t tests. Visual temporal acuity was significantly correlated with auditory-only word and sentence recognition abilities. In addition, proficient CI users, as assessed via auditory-only speech perception performance, demonstrated

  3. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. An integrative model of auditory phantom perception: tinnitus as a unified percept of interacting separable subnetworks.

    Science.gov (United States)

    De Ridder, Dirk; Vanneste, Sven; Weisz, Nathan; Londero, Alain; Schlee, Winnie; Elgoyhen, Ana Belen; Langguth, Berthold

    2014-07-01

    Tinnitus is considered to be an auditory phantom phenomenon, a persistent conscious percept of a salient memory trace, externally attributed, in the absence of a sound source. It is perceived as a phenomenologically unified coherent percept, binding multiple separable clinical characteristics, such as its loudness, the sidedness, the type (pure tone, noise), the associated distress and so on. A theoretical pathophysiological framework capable of explaining all these aspects in one model is highly needed. The model must incorporate both the deafferentation-based neurophysiological models and the dysfunctional noise-canceling model, and propose a 'tinnitus core' subnetwork. The tinnitus core can be defined as the minimal set of brain areas that needs to be jointly activated (=subnetwork) for tinnitus to be consciously perceived, devoid of its affective components. The brain areas involved in the other separable characteristics of tinnitus can be retrieved by studies on spontaneous resting-state magnetic and electrical activity in people with tinnitus, evaluated for the specific aspect investigated and controlled for other factors. By combining these functional imaging studies with neuromodulation techniques, some of the correlations are turned into causal relationships. From these, a heuristic pathophysiological framework is constructed, integrating the tinnitus perceptual core with the other tinnitus-related aspects. This phenomenologically unified percept of tinnitus can be considered an emergent property of multiple, parallel, dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. Communication between these different subnetworks is proposed to occur at hubs, brain areas that are involved in multiple subnetworks simultaneously. These hubs can take part in each separable subnetwork at different frequencies. Communication between the subnetworks is proposed to occur at

  5. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Science.gov (United States)

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
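    Two of the distance cues named in this review (sound level and, implicitly, the balance between direct and reverberant energy) follow simple relationships: in the free field the direct sound level falls by about 6 dB per doubling of distance, while the reverberant level stays roughly constant, so the direct-to-reverberant ratio also falls with distance. The sketch below is purely illustrative; the reference level and reverberant level are assumed values, not figures from the review.

```python
import numpy as np

def direct_level_db(distance_m, ref_level_db=70.0, ref_distance_m=1.0):
    """Free-field level of the direct sound: -6 dB per doubling of distance
    (inverse-square law), relative to an assumed level at 1 m."""
    return ref_level_db - 20.0 * np.log10(distance_m / ref_distance_m)

def direct_to_reverberant_db(distance_m, reverb_level_db=55.0, ref_level_db=70.0):
    """Direct-to-reverberant ratio under the simplifying assumption that the
    reverberant level is roughly independent of source distance."""
    return direct_level_db(distance_m, ref_level_db) - reverb_level_db

for d in (1.0, 2.0, 4.0, 8.0):
    print(f"{d:>4.1f} m: direct {direct_level_db(d):5.1f} dB, "
          f"D/R {direct_to_reverberant_db(d):5.1f} dB")
```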

  6. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal-processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing... ...in a diagnostic rhyme test. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated. The front-end was fitted to individual listeners with cochlear hearing loss according to non-speech data, and speech data were obtained in the same listeners...

  7. A loudspeaker-based room auralization system for auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    2009-01-01

    Most research on basic auditory function has been conducted in anechoic or almost anechoic environments. The knowledge derived from these experiments cannot directly be transferred to reverberant environments. In order to investigate the auditory signal processing of reverberant sounds, a loudspeaker-based room auralization (LoRA) system was developed. This system provides a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. This includes measures of basic auditory function (e.g., signal detection, distance perception) and measures of speech intelligibility. A battery of objective tests (e.g., reverberation time, clarity, interaural correlation coefficient) and subjective tests (e.g., speech reception thresholds) is presented that demonstrates the applicability of the LoRA system.

  8. The effect of music on auditory perception in cochlear-implant users and normal-hearing listeners

    NARCIS (Netherlands)

    Fuller, Christina Diechina

    2016-01-01

    Cochlear implants (CIs) are auditory prostheses for severely deaf people that do not benefit from conventional hearing aids. Speech perception is reasonably good with CIs; other signals such as music perception are challenging. First, the perception of music and music related perception in CI users

  9. Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation

    Directory of Open Access Journals (Sweden)

    Enrique A Lopez-Poveda

    2013-07-01

    Full Text Available Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.

  10. Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation

    Science.gov (United States)

    Lopez-Poveda, Enrique A.; Barrios, Pablo

    2013-01-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176
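    The stochastic-undersampling idea described in these two records can be sketched in a few lines: each "afferent" is mimicked by a copy of the band signal in which samples are kept only with a probability that grows with instantaneous intensity, and the copies are then aggregated. This is a loose illustration of the principle, not the authors' vocoder; the band edges, number of samplers, and the probability mapping are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def stochastic_undersample(band_signal, n_samplers=10, rng=None):
    """Aggregate several stochastically subsampled copies of one band signal.
    Each copy keeps a sample with probability proportional to its rectified
    amplitude, loosely mimicking intensity-driven spike probabilities."""
    rng = np.random.default_rng(rng)
    p = np.clip(np.abs(band_signal) / (np.max(np.abs(band_signal)) + 1e-12), 0, 1)
    copies = [band_signal * (rng.random(band_signal.size) < p)
              for _ in range(n_samplers)]
    return np.mean(copies, axis=0)

def toy_vocoder(x, fs, n_bands=10, n_samplers=10, lo=100.0, hi=7000.0):
    """Filter into log-spaced bands, undersample each band, and sum the bands."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    out = np.zeros_like(x, dtype=float)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        out += stochastic_undersample(band, n_samplers)
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)              # illustrative 1-kHz probe tone
processed = toy_vocoder(tone, fs, n_samplers=3)  # fewer samplers = more degradation
print(processed.shape)
```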

  11. The Influence of Presentation Method on Auditory Length Perception

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length...

  12. The influence of presentation method on auditory length perception

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    2005-01-01

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length...

  13. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Full Text Available Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only

  14. Biases in Visual, Auditory, and Audiovisual Perception of Space

    Science.gov (United States)

    Odegaard, Brian; Wozny, David R.; Shams, Ladan

    2015-01-01

    Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the
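    The Bayesian Causal Inference framework referenced in these two records weighs a "common cause" and an "independent causes" hypothesis when combining the visual and auditory position estimates. Below is a compact sketch of the standard model-averaging form of that computation as it is usually written in the literature; all numeric parameter values are placeholders, not the fitted values from this study.

```python
import numpy as np

def bci_estimates(x_v, x_a, sig_v=2.0, sig_a=8.0, sig_p=20.0, mu_p=0.0, p_common=0.5):
    """Bayesian Causal Inference (model averaging) for one audiovisual trial.
    x_v, x_a: noisy internal visual/auditory position samples (deg).
    Returns the final visual and auditory location estimates and p(C=1|x)."""
    vv, va, vp = sig_v**2, sig_a**2, sig_p**2

    # Optimal fused estimate if both signals share one cause (C = 1)
    s_c1 = (x_v / vv + x_a / va + mu_p / vp) / (1 / vv + 1 / va + 1 / vp)
    # Unisensory estimates if the causes are independent (C = 2)
    s_v_c2 = (x_v / vv + mu_p / vp) / (1 / vv + 1 / vp)
    s_a_c2 = (x_a / va + mu_p / vp) / (1 / va + 1 / vp)

    # Likelihood of the samples under each causal structure
    like_c1 = np.exp(-0.5 * ((x_v - x_a) ** 2 * vp + (x_v - mu_p) ** 2 * va
                             + (x_a - mu_p) ** 2 * vv)
                     / (vv * va + vv * vp + va * vp)) \
        / (2 * np.pi * np.sqrt(vv * va + vv * vp + va * vp))
    like_c2 = np.exp(-0.5 * ((x_v - mu_p) ** 2 / (vv + vp)
                             + (x_a - mu_p) ** 2 / (va + vp))) \
        / (2 * np.pi * np.sqrt((vv + vp) * (va + vp)))

    # Posterior probability of a common cause, then model averaging
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    s_hat_v = post_c1 * s_c1 + (1 - post_c1) * s_v_c2
    s_hat_a = post_c1 * s_c1 + (1 - post_c1) * s_a_c2
    return s_hat_v, s_hat_a, post_c1

print(bci_estimates(x_v=10.0, x_a=4.0))  # the more reliable (visual) cue dominates
```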

  15. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    Directory of Open Access Journals (Sweden)

    Martina Wengenroth

    Full Text Available BACKGROUND: Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. METHODOLOGY/PRINCIPAL FINDINGS: Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. CONCLUSIONS/SIGNIFICANCE: There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  16. The perception of prosody and associated auditory cues in early-implanted children: the role of auditory working memory and musical activities.

    Science.gov (United States)

    Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti

    2014-03-01

    To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception, intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination and forward digit span in implanted children exposed to music was equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination: sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.

  17. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
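    The "phase tracking" measure central to this record is usually quantified as inter-trial phase coherence: the magnitude of the mean unit phase vector across trials at a given frequency and time. The sketch below computes it for a band-limited signal using the Hilbert transform; the band edges and simulated data are illustrative assumptions, and the original study's MEG wavelet/filter settings are not reproduced here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def itpc(trials, fs, band):
    """Inter-trial phase coherence for single-channel data (trials x samples).
    Band-pass each trial, extract instantaneous phase, and take the magnitude
    of the trial-averaged unit phase vectors at every time point."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    phases = np.angle(hilbert(sosfiltfilt(sos, trials, axis=-1), axis=-1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))  # 1 = perfect phase locking

# Toy example: 50 simulated trials of a 5-Hz theta rhythm in noise
fs, n_trials, n_samp = 250, 50, 500
t = np.arange(n_samp) / fs
rng = np.random.default_rng(0)
trials = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1.0, (n_trials, n_samp))
print(itpc(trials, fs, band=(4, 8)).mean())  # high ITPC for a phase-locked rhythm
```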

  18. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. Also it provided significant contributions to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.

  19. (A)musicality in Williams syndrome: Examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Directory of Open Access Journals (Sweden)

    Miriam Lense

    2013-08-01

    Full Text Available Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.

  20. Modeling auditory perception of individual hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    ...selectivity. Three groups of listeners were considered: (a) normal hearing listeners; (b) listeners with a mild-to-moderate sensorineural hearing loss; and (c) listeners with a severe sensorineural hearing loss. A fixed set of model parameters were derived for each hearing-impaired listener. The simulations showed that, in most cases, the reduced or absent cochlear compression, associated with outer hair-cell loss, quantitatively accounts for broadened auditory filters, while a combination of reduced compression and reduced inner hair-cell function accounts for decreased sensitivity and slower recovery from...

  1. Percepts, not acoustic properties, are the units of auditory short-term memory.

    Science.gov (United States)

    Mathias, Samuel R; von Kriegstein, Katharina

    2014-04-01

    For decades, researchers have sought to understand the organizing principles of auditory and visual short-term memory (STM). Previous work in audition has suggested that there are independent memory stores for different sound features, but the nature of the representations retained within these stores is currently unclear. Do they retain perceptual features, or do they instead retain representations of the sound's specific acoustic properties? In the present study we addressed this question by measuring listeners' abilities to keep one of three acoustic properties (interaural time difference [ITD], interaural level difference [ILD], or frequency) in memory when the target sound was followed by interfering sounds that varied randomly in one of the same properties. Critically, ITD and ILD evoked the same percept (spatial location), despite being acoustically different and having different physiological correlates, whereas frequency evoked a different percept (pitch). The results showed that listeners found it difficult to remember the percept of spatial location when the interfering tones varied either in ITD or ILD, but not when they varied in frequency. The study demonstrates that percepts are the units of auditory STM, and provides testable predictions for future neuroscientific work on both auditory and visual STM.
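    The ITD and ILD manipulations used in this study can be illustrated with a short stimulus-generation sketch: an ITD is a small interaural delay applied to one channel, an ILD a level difference in dB, and both cue the same percept of lateral position. The sample rate, tone parameters, and cue magnitudes below are illustrative assumptions, not the study's stimuli.

```python
import numpy as np

def lateralized_tone(freq=500.0, dur=0.3, fs=48000, itd_us=0.0, ild_db=0.0):
    """Return a stereo tone (2 x samples) carrying an ITD (microseconds,
    positive = left ear leads) and/or an ILD (dB, positive = left ear louder)."""
    t = np.arange(int(dur * fs)) / fs
    itd_s = itd_us * 1e-6
    left = np.sin(2 * np.pi * freq * t)
    right = np.sin(2 * np.pi * freq * (t - itd_s))   # right ear lags for a +ITD
    gain = 10 ** (ild_db / 20.0)                     # split the ILD across ears
    return np.vstack([left * np.sqrt(gain), right / np.sqrt(gain)])

itd_stim = lateralized_tone(itd_us=300.0)  # location cued by timing alone
ild_stim = lateralized_tone(ild_db=6.0)    # same percept (location) from level alone
print(itd_stim.shape, ild_stim.shape)
```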

  2. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  3. A psychophysical imaging method evidencing auditory cue extraction during speech perception: a group analysis of auditory classification images.

    Science.gov (United States)

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.

  4. A psychophysical imaging method evidencing auditory cue extraction during speech perception: a group analysis of auditory classification images.

    Directory of Open Access Journals (Sweden)

    Léo Varnet

    Full Text Available Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
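    The Auditory Classification Image technique described in these two records estimates, from many noisy trials, a weight for each time-frequency bin indicating how strongly the noise in that bin pushes the listener's response. The published method uses a penalized generalized linear model; the sketch below shows only the simpler reverse-correlation flavor of the same idea (difference of noise averages conditioned on the response), with simulated trials standing in for real data and a hypothetical "cue region" chosen arbitrarily.

```python
import numpy as np

def classification_image(noise_tf, responses):
    """Simplified classification image via reverse correlation.
    noise_tf:  (n_trials, n_freq, n_time) noise power per time-frequency bin.
    responses: (n_trials,) 0/1 categorization responses (e.g., 'da' vs 'ga').
    Returns the mean noise for response 1 minus the mean for response 0."""
    responses = np.asarray(responses, dtype=bool)
    return noise_tf[responses].mean(axis=0) - noise_tf[~responses].mean(axis=0)

# Simulated listener whose responses are driven by one time-frequency region
rng = np.random.default_rng(1)
n_trials, n_freq, n_time = 2000, 32, 40
noise = rng.normal(size=(n_trials, n_freq, n_time))
cue = noise[:, 10:14, 5:10].mean(axis=(1, 2))     # hypothetical "formant onset" region
resp = (cue + rng.normal(0, 0.5, n_trials)) > 0
aci = classification_image(noise, resp)
print(np.unravel_index(np.argmax(aci), aci.shape))  # peaks inside the cue region
```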

  5. Neural correlates of face and object perception in an awake chimpanzee (Pan troglodytes) examined by scalp-surface event-related potentials.

    Directory of Open Access Journals (Sweden)

    Hirokata Fukushima

    Full Text Available BACKGROUND: The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking. METHODOLOGY/PRINCIPAL FINDINGS: In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150-200 ms in either experiment. CONCLUSIONS/SIGNIFICANCE: Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species.

  6. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    Science.gov (United States)

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  7. [Comparative analysis of changes in short EEG segments during music perception based on event-related synchronization/desynchronization and wavelet synchrony].

    Science.gov (United States)

    Oknina, L B; Kuptsova, S V; Romanov, A S; Masherov, E L; Kuznetsova, O A; Sharova, E V

    2012-01-01

    The aim of this pilot study was to analyze changes in short EEG segments recorded from 32 sites while healthy subjects listened to musical melodies, as a function of the logical (recognized/not recognized) and emotional (pleasant/unpleasant) evaluation of each melody. To this end, event-related synchronization/desynchronization and the wavelet synchrony of EEG responses were compared in 31 healthy subjects aged 18 to 60 years. During logical evaluation, recognition of a melody was accompanied by event-related desynchronization in the left fronto-parieto-temporal area. During emotional evaluation, pleasant melodies were characterized by event-related synchronization in the left fronto-temporal area, unpleasant melodies by desynchronization in the temporal area, and melodies eliciting no emotional response by desynchronization in the occipital area. Analysis of wavelet synchrony, which characterizes reactive changes in the interaction of cortical zones, revealed that the clearest topographical differences were related to the type of processing of the heard music: logical (recognized/not recognized) or emotional (pleasant/unpleasant). Emotional evaluation was associated mainly with changes in interhemispheric connections between associative cortical zones (central, frontal, temporal), whereas logical evaluation was associated with changes in inter- and intrahemispheric connections of the projection zones of the auditory analyzer (temporal area). It is suggested that the observed event-related synchronization/desynchronization most likely reflects an activation component of the evaluation of musical fragments, whereas the wavelet analysis indicates the character of processing of the musical stimulus.
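    Event-related desynchronization/synchronization (ERD/ERS), the measure compared here with wavelet synchrony, is conventionally computed as the percentage change of band power in a post-stimulus window relative to a pre-stimulus baseline. The sketch below shows that computation for one channel; the frequency band, window lengths, and simulated data are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def erd_ers_percent(trials, fs, band, baseline, window):
    """ERD/ERS as percent power change relative to baseline (negative = ERD).
    trials: (n_trials, n_samples) single-channel epochs time-locked to the stimulus.
    baseline, window: (start_s, end_s) relative to epoch onset."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    power = np.abs(hilbert(sosfiltfilt(sos, trials, axis=-1), axis=-1)) ** 2
    mean_power = power.mean(axis=0)                      # average over trials
    b = slice(int(baseline[0] * fs), int(baseline[1] * fs))
    w = slice(int(window[0] * fs), int(window[1] * fs))
    p_ref, p_evt = mean_power[b].mean(), mean_power[w].mean()
    return 100.0 * (p_evt - p_ref) / p_ref

# Toy epochs: 1-s baseline followed by 1 s with attenuated alpha (an ERD)
fs = 250
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(2)
alpha = np.sin(2 * np.pi * 10 * t) * np.where(t < 1.0, 1.0, 0.5)
epochs = alpha + rng.normal(0, 0.2, (40, t.size))
print(erd_ers_percent(epochs, fs, band=(8, 12), baseline=(0, 1.0), window=(1.0, 2.0)))
```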

  8. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert Ramkhalawansingh

    2016-04-01

    Full Text Available Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e. optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e. engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  9. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners With Normal Hearing

    OpenAIRE

    Millman, Rebecca E.; Mattys, Sven L.

    2017-01-01

    Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise rati...

  10. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

    ...investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifically, listeners without binaural pitch sensation showed signs of retrocochlear disorders. Despite adverse effects of reduced frequency selectivity on binaural pitch perception, the ability to accurately process the temporal fine structure (TFS) of sounds at the output of the cochlear filters was found to be essential for perceiving binaural pitch. Monaural TFS processing also played a major and independent role... ...that the use of spectral cues remained plausible. Simulations of auditory-nerve representations of the complex tones further suggested that a spectrotemporal mechanism combining precise timing information across auditory channels might best account for the behavioral data. Overall, this work provides insights...

  11. Mapping a lateralisation gradient within the ventral stream for auditory speech perception

    OpenAIRE

    Karsten Specht

    2013-01-01

    Recent models on speech perception propose a dual stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend towards the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus...

  12. Mapping a lateralization gradient within the ventral stream for auditory speech perception

    OpenAIRE

    Specht, Karsten

    2013-01-01

    Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus....

  13. Relating binaural pitch perception to the individual listener's auditory profile

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2012-01-01

    pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization. (C) 2012 Acoustical Society of America.

  14. Auditory contributions to flavour perception and feeding behaviour.

    Science.gov (United States)

    Spence, Charles

    2012-11-05

    This article reviews the research that has looked at the role of audition in both flavour perception and feeding behaviour in humans. The article starts by looking at early research that focused on the effect of background noise on the sensory-discriminative aspects of taste/flavour perception and on people's hedonic responses to food and beverage items. Next, I move on to look at the role of the sound made by the food (or beverage) itself. Additionally, recent studies that have started to assess the impact of food and beverage packaging sounds, not to mention food preparation sounds, on people's sensory-discriminative and hedonic responses to a variety of food and beverage products are discussed. Finally, the literature on the effect of background music and/or soundscapes on food and beverage perception/consumption are reviewed briefly. Taken together, this body of research, spanning both highly-controlled laboratory experiments and more ecologically-valid field studies, clearly demonstrates that what the consumer hears, be it the sound of the food, the sound of the packaging, the sound of the machine used to prepare that food or beverage (e.g., as in the case of the sound of a coffee machine), and even the sound of the environment in which the consumer happens to be eating and drinking can all exert a profound, if often unacknowledged, role in our feeding behaviours not to mention on our flavour perception. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  16. Prestimulus influences on auditory perception from sensory representations and decision processes.

    Science.gov (United States)

    Kayser, Stephanie J; McNair, Steven W; Kayser, Christoph

    2016-04-26

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task.
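    A common way to relate prestimulus oscillatory state to single-trial choice, as in this study, is to extract band-limited power and phase from a window before stimulus onset and enter them into a trial-wise regression on the behavioral outcome (phase is handled through its sine and cosine). The sketch below illustrates that logic with simulated data; it is not the authors' single-trial decoding pipeline, and all frequencies, window lengths, and the simulated dependence of choice on power are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from sklearn.linear_model import LogisticRegression

def prestim_features(epochs, fs, band, prestim):
    """Prestimulus log-power and phase (at window end) in one frequency band.
    epochs: (n_trials, n_samples); prestim: (start_s, end_s) before stimulus onset."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    analytic = hilbert(sosfiltfilt(sos, epochs, axis=-1), axis=-1)
    sl = slice(int(prestim[0] * fs), int(prestim[1] * fs))
    power = np.log(np.mean(np.abs(analytic[:, sl]) ** 2, axis=-1))
    phase = np.angle(analytic[:, sl.stop - 1])
    return power, phase

# Simulated data: choice weakly depends on prestimulus alpha power only
fs, n_trials, n_samp = 250, 400, 500
rng = np.random.default_rng(3)
epochs = rng.normal(0, 1.0, (n_trials, n_samp))
power, phase = prestim_features(epochs, fs, band=(8, 12), prestim=(0.0, 1.0))
choice = (power + rng.normal(0, 1.0, n_trials)) > np.median(power)

X = np.column_stack([power, np.sin(phase), np.cos(phase)])
model = LogisticRegression().fit(X, choice)
print(model.coef_)  # the power coefficient should dominate in this toy example
```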

  17. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a visual signal paired with auditory stimuli of different frequencies.

  18. Late auditory event-related evoked potential (P300) in Down's syndrome patients

    Directory of Open Access Journals (Sweden)

    Carla Patrícia Hernandez Alves Ribeiro César

    2010-04-01

    Full Text Available Down syndrome is caused by a trisomy of chromosome 21 and is associated with central auditory processing deficits, learning disability and, probably, early-onset Alzheimer's disease. AIM: To evaluate the latencies and amplitudes of the late auditory event-related evoked potential (P300) and their changes in young adults with Down's syndrome. MATERIALS AND METHODS: Prospective case study. P300 latencies and amplitudes were evaluated in 17 individuals with Down's syndrome and 34 healthy individuals. RESULTS: The P300 latencies (N1, P2, N2 and P3) were longer and the N2-P3 amplitude was lower in individuals with Down's syndrome when compared to those in the control group. CONCLUSION: In young adults with Down's syndrome, the N1, P2, N2 and P3 latencies of the late auditory event-related evoked potential (P300) were prolonged, and the N2-P3 amplitude was significantly reduced, suggesting impaired integration of the auditory association area with cortical and subcortical areas of the central nervous system.

  19. Auditory perception and the control of spatially coordinated action of deaf and hearing children.

    Science.gov (United States)

    Savelsbergh, G J; Netelenbos, J B; Whiting, H T

    1991-03-01

    From birth onwards, auditory stimulation directs and intensifies visual orientation behaviour. In deaf children, by definition, auditory perception cannot take place and cannot, therefore, make a contribution to visual orientation to objects approaching from outside the initial field of view. In experiment 1, a difference in catching ability is demonstrated between deaf and hearing children (10-13 years of age) when the ball approached from the periphery or from outside the field of view. No differences in catching ability between the two groups occurred when the ball approached from within the field of view. A second experiment was conducted in order to determine if differences in catching ability between deaf and hearing children could be attributed to execution of slow orientating movements and/or slow reaction time as a result of the auditory loss. The deaf children showed slower reaction times. No differences were found in movement times between deaf and hearing children. Overall, the findings suggest that a lack of auditory stimulation during development can lead to deficiencies in the coordination of actions such as catching which are both spatially and temporally constrained.

  20. Babies in traffic: infant vocalizations and listener sex modulate auditory motion perception.

    Science.gov (United States)

    Neuhoff, John G; Hamilton, Grace R; Gittleson, Amanda L; Mejia, Adolfo

    2014-04-01

    Infant vocalizations and "looming sounds" are classes of environmental stimuli that are critically important to survival but can have dramatically different emotional valences. Here, we simultaneously presented listeners with a stationary infant vocalization and a 3D virtual looming tone for which listeners made auditory time-to-arrival judgments. Negatively valenced infant cries produced more cautious (anticipatory) estimates of auditory arrival time of the tone over a no-vocalization control. Positively valenced laughs had the opposite effect, and across all conditions, men showed smaller anticipatory biases than women. In Experiment 2, vocalization-matched vocoded noise stimuli did not influence concurrent auditory time-to-arrival estimates compared with a control condition. In Experiment 3, listeners estimated the egocentric distance of a looming tone that stopped before arriving. For distant stopping points, women estimated the stopping point as closer when the tone was presented with an infant cry than when it was presented with a laugh. For near stopping points, women showed no differential effect of vocalization type. Men did not show differential effects of vocalization type at either distance. Our results support the idea that both the sex of the listener and the emotional valence of infant vocalizations can influence auditory motion perception and can modulate motor responses to other behaviorally relevant environmental sounds. We also find support for previous work that shows sex differences in emotion processing are diminished under conditions of higher stress.

  1. How may the basal ganglia contribute to auditory categorization and speech perception?

    Directory of Open Access Journals (Sweden)

    Sung-Joo eLim

    2014-08-01

    Full Text Available Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood.

  2. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [(delta)a] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [(delta)a], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [(delta)a] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  3. Auditory processing, speech perception and phonological ability in pre-school children at high-risk for dyslexia: a longitudinal study of the auditory temporal processing theory

    OpenAIRE

    Boets, Bart; Wouters, Jan; Van Wieringen, Astrid; Ghesquière, Pol

    2007-01-01

    This study investigates whether the core bottleneck of literacy-impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school children at high-family risk for dyslexia, compared to a group of well-matched low-risk control children. Based on family risk status and first gra...

  4. Mental Imagery Induces Cross-Modal Sensory Plasticity and Changes Future Auditory Perception.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2018-04-01

    Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect-a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli-is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect in which imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.

  5. Short- and long-term habituation of auditory event-related potentials in the rat [v1; ref status: indexed, http://f1000r.es/1l3]

    Directory of Open Access Journals (Sweden)

    Kestutis Gurevicius

    2013-09-01

    Full Text Available An auditory oddball paradigm in humans generates a long-duration cortical negative potential, often referred to as mismatch negativity. Similar negativity has been documented in monkeys and cats, but it is controversial whether mismatch negativity also exists in awake rodents. To this end, we recorded cortical and hippocampal evoked responses in rats during alert immobility under a typical passive oddball paradigm that yields mismatch negativity in humans. In the first condition, the standard stimulus was a 9 kHz tone and the deviant either a 7 or an 11 kHz tone. We found no evidence of a sustained potential shift when comparing evoked responses to standard and deviant stimuli. Instead, we found repetition-induced attenuation of the P60 component of the combined evoked response in the cortex, but not in the hippocampus. The attenuation extended over three days of recording and disappeared after 20 intervening days of rest. Reversal of the standard and deviant tones resulted in a robust enhancement of the N40 component not only in the cortex but also in the hippocampus. Responses to standard and deviant stimuli were affected similarly. Finally, we tested the effect of scopolamine in this paradigm. Scopolamine attenuated cortical N40 and P60 as well as hippocampal P60 components, but had no specific effect on the deviant response. We conclude that in an oddball paradigm the rat demonstrates repetition-induced attenuation of mid-latency responses, which resembles attenuation of the N1 component of the human auditory evoked potential, but no mismatch negativity.
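
    A compact sketch of the kind of passive oddball analysis described above is given below: a pseudo-random standard/deviant sequence is generated and the trial-averaged responses are compared in a mid-latency window. The deviant probability, trial counts, and placeholder EEG epochs are assumptions chosen for illustration, not values taken from the study.

```python
import numpy as np

# Toy oddball sequence and standard/deviant comparison (illustrative only).
rng = np.random.default_rng(2)
n_trials = 500
p_deviant = 0.1                               # assumed deviant probability
is_deviant = rng.random(n_trials) < p_deviant
tone_hz = np.where(is_deviant, 11000, 9000)   # per-trial tone (stimulus bookkeeping)

fs = 1000
t = np.arange(-0.05, 0.3, 1 / fs)             # epoch time axis in seconds
epochs = rng.standard_normal((n_trials, t.size))  # placeholder epoched EEG

erp_std = epochs[~is_deviant].mean(axis=0)
erp_dev = epochs[is_deviant].mean(axis=0)
difference = erp_dev - erp_std                # mismatch-style difference wave

# Mean amplitude around the P60 component examined in the study.
p60 = (t >= 0.05) & (t <= 0.07)
print("deviant trials:", int(is_deviant.sum()), "of", n_trials)
print("P60 standard vs deviant:", erp_std[p60].mean(), erp_dev[p60].mean())
```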

  6. Perception and psychological evaluation for visual and auditory environment based on the correlation mechanisms

    Science.gov (United States)

    Fujii, Kenji

    2002-06-01

    This dissertation introduces the correlation mechanism as a model of processes in visual perception. It has been well described that the correlation mechanism is effective for describing subjective attributes in auditory perception. The main result is that it is possible to apply the correlation mechanism to processes in temporal and spatial vision, as well as in audition. (1) A psychophysical experiment was performed on subjective flicker rates for complex waveforms. A remarkable result is that the phenomenon of the missing fundamental is found in temporal vision, analogous to auditory pitch perception. This implies the existence of a correlation mechanism in the visual system. (2) For spatial vision, autocorrelation analysis provides useful measures for describing three primary perceptual properties of visual texture: contrast, coarseness, and regularity. Another experiment showed that the degree of regularity is a salient cue for texture preference judgment. (3) In addition, the autocorrelation function (ACF) and inter-aural cross-correlation function (IACF) were applied to the analysis of the temporal and spatial properties of environmental noise. It was confirmed that the acoustical properties of aircraft noise and traffic noise are well described. These analyses provided useful parameters extracted from the ACF and IACF for assessing subjective annoyance to noise. Thesis advisor: Yoichi Ando.
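
    For readers unfamiliar with these measures, the sketch below computes a normalized autocorrelation function and the interaural cross-correlation coefficient (IACC, the peak of the IACF within roughly +/-1 ms of lag). The signals, sampling rate, and lag ranges are placeholders chosen for illustration and are not data from the dissertation.

```python
import numpy as np

fs = 44100
rng = np.random.default_rng(3)
left = rng.standard_normal(fs)     # 1 s of left-ear signal (placeholder)
right = rng.standard_normal(fs)    # 1 s of right-ear signal (placeholder)

def normalized_acf(x, max_lag):
    # Normalized autocorrelation for lags 0..max_lag (samples).
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:x.size - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

def iacc(l, r, fs, max_lag_ms=1.0):
    # Peak of the normalized interaural cross-correlation within +/- max_lag_ms.
    l = l - l.mean()
    r = r - r.mean()
    denom = np.sqrt(np.dot(l, l) * np.dot(r, r))
    max_lag = int(fs * max_lag_ms / 1000)
    vals = [np.dot(l[max(0, -k):l.size - max(0, k)],
                   r[max(0, k):r.size - max(0, -k)]) / denom
            for k in range(-max_lag, max_lag + 1)]
    return max(vals)

acf = normalized_acf(left, max_lag=fs // 100)   # lags up to 10 ms
print("ACF at 10 ms lag:", acf[-1], " IACC:", iacc(left, right, fs))
```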

  7. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  8. Impaired pitch perception and memory in congenital amusia: the deficit starts in the auditory cortex.

    Science.gov (United States)

    Albouy, Philippe; Mattout, Jérémie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaëtan; Aguera, Pierre-Emmanuel; Daligault, Sébastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara

    2013-05-01

    Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a melodic contour task and an easier transposition task); they had to indicate whether sequences of six tones (presented in pairs) were the same or different. Behavioural data indicated that in comparison with control participants, amusics' short-term memory was impaired for the melodic contour task, but not for the transposition task. The major finding was that pitch processing and short-term memory deficits can be traced down to amusics' early brain responses during encoding of the melodic information. Temporal and frontal generators of the N100m evoked by each note of the melody were abnormally recruited in the amusic brain. Dynamic causal modelling of the N100m further revealed decreased intrinsic connectivity in both auditory cortices, increased lateral connectivity between auditory cortices as well as a decreased right fronto-temporal backward connectivity in amusics relative to control subjects. Abnormal functioning of this fronto-temporal network was also shown during the retention interval and the retrieval of melodic information. In particular, induced gamma oscillations in right frontal areas were decreased in amusics during the retention interval. Using voxel-based morphometry, we confirmed morphological brain anomalies in terms of white and grey matter concentration in the right inferior frontal gyrus and the right superior temporal gyrus in the amusic brain. The convergence between functional and structural brain differences strengthens the hypothesis of abnormalities in the fronto-temporal pathway of the amusic brain. Our data provide first evidence of altered functioning of the auditory cortices during pitch perception and memory in congenital amusia.

  9. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization Cortico-Striatal Activation.

    Science.gov (United States)

    Sameiro-Barbosa, Catia M; Geiser, Eveline

    2016-01-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system.

  10. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization Cortico-Striatal Activation

    Science.gov (United States)

    Sameiro-Barbosa, Catia M.; Geiser, Eveline

    2016-01-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system. PMID:27559306

  11. How musical expertise shapes speech perception: evidence from auditory classification images.

    Science.gov (United States)

    Varnet, Léo; Wang, Tianyun; Peter, Chloe; Meunier, Fanny; Hoen, Michel

    2015-09-24

    It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues: the onset of the first formant and the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.
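
    The Auditory Classification Image method itself is a penalized regression of trial-by-trial responses onto the noise in each stimulus; a greatly simplified reverse-correlation version of the same idea is sketched below. The array shapes, response labels, and the plain difference-of-averages estimator are assumptions for illustration and do not reproduce the published pipeline.

```python
import numpy as np

# Toy reverse-correlation "classification image": average the noise
# time-frequency patterns separately for the two response categories and
# take the difference. All data and shapes are placeholders.
rng = np.random.default_rng(4)
n_trials, n_freqs, n_times = 1000, 32, 50
noise_tf = rng.standard_normal((n_trials, n_freqs, n_times))  # noise spectrograms
response = rng.integers(0, 2, n_trials)                       # binary phoneme response (toy labels)

aci = noise_tf[response == 1].mean(axis=0) - noise_tf[response == 0].mean(axis=0)
# Positive weights mark time-frequency regions where extra noise energy pushed
# listeners toward response 1; these regions map the listening strategy.
print("classification image shape:", aci.shape)
```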

  12. Beyond Auditory Sensory Processing Deficits: Lexical Tone Perception Deficits in Chinese Children With Developmental Dyslexia.

    Science.gov (United States)

    Tong, Xiuhong; Tong, Xiuli; King Yiu, Fung

    Increasing evidence suggests that children with developmental dyslexia exhibit a deficit not only at the segmental level of phonological processing but also, by extension, at the suprasegmental level. However, it remains unclear whether such a suprasegmental phonological processing deficit is due to a difficulty in processing acoustic cues of speech rhythm, such as rise time and intensity. This study set out to investigate to what extent suprasegmental phonological processing (i.e., Cantonese lexical tone perception) and rise time sensitivity could distinguish Chinese children with dyslexia from typically developing children. Sixteen children with dyslexia and 44 age-matched controls were administered a Cantonese lexical tone perception task, psychoacoustic tasks, a nonverbal reasoning task, and word reading and dictation tasks. Children with dyslexia performed worse than controls on Cantonese lexical tone perception and on rise time and intensity discrimination. Furthermore, Cantonese lexical tone perception appeared to be a stable indicator that distinguishes children with dyslexia from controls, even after controlling for basic auditory processing skills. These findings suggest that suprasegmental phonological processing (i.e., lexical tone perception) is a potential factor that accounts for reading difficulty in Chinese.

  13. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    Science.gov (United States)

    Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan

    2015-01-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed auditory gap transfer illusion with visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ (EM) or ‘group motion’ (GM). In “EM,” the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; while in “GM,” both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intercede at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that visual interval which coincided with the gap interval (50–230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations in the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and the mechanism of selective attention on auditory events also plays a role. PMID:26042055

  14. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    Directory of Open Access Journals (Sweden)

    Qingcui eWang

    2015-05-01

    Full Text Available Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed auditory gap transfer illusion with visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ or ‘group motion’. In element motion, the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; while in group motion, both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intercede at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of group motion as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that visual interval which coincided with the gap interval (50-230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations in the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and the mechanism of selective attention on auditory events also plays a role.

  15. The influence of tactile cognitive maps on auditory space perception in sighted persons.

    Directory of Open Access Journals (Sweden)

    Alessia Tonelli

    2016-11-01

    Full Text Available We have recently shown that vision is important to improve spatial auditory cognition. In this study we investigate whether touch is as effective as vision to create a cognitive map of a soundscape. In particular we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people – one experimental and one control group – in an auditory space bisection task. In the first group the bisection task was performed three times: specifically, the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated for two consecutive times the space bisection task without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in the precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that no additional gain was obtained by vision cues after spatial tactile cues were internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing space auditory task as well as visual information. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound

  16. Vestibular Stimulation and Auditory Perception in Children with Attention Deficit Hyperactivity Disorder

    Directory of Open Access Journals (Sweden)

    Azin Salamati

    2014-09-01

    Full Text Available Objectives: Rehabilitation strategies play a pivotal role in relieving inappropriate behaviors and improving children's performance at school. Concentration and visual and auditory comprehension in children are crucial to effective learning and have drawn interest from researchers and clinicians. Vestibular function deficits usually lead to altered levels of alertness and vigilance, problems in maintaining focus and paying selective attention, and reduced precision of attention to stimuli. The aim of this study was to investigate the relationship between vestibular stimulation and auditory perception in children with attention deficit hyperactivity disorder. Methods: Thirty children aged 7 to 12 years with attention deficit hyperactivity disorder participated in this study. They were assessed based on the criteria of the Diagnostic and Statistical Manual of Mental Disorders. After guardian and parental consent was obtained, they were enrolled and randomly assigned, matched on age, to an intervention group and a control group. The Integrated Visual and Auditory Continuous Performance Test was administered as a pre-test. Those in the intervention group received vestibular stimulation during therapy sessions, twice a week for 10 weeks. At the end of this period, the same test was administered to both groups as a post-test. Results: Pre- and post-test scores were measured and the mean differences were compared between the two groups. Statistical analysis showed a significant difference in mean change, reflecting improved auditory comprehension. Discussion: The findings suggest that vestibular training is a reliable and powerful treatment option for attention deficit hyperactivity disorder, especially alongside other trainings; stimulating the sense of balance highlights the importance of the interaction between inhibition and cognition.

  17. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  18. The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: a healthy-aging perspective.

    Science.gov (United States)

    Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David

    2015-02-01

    Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Generalization of conditioned suppression during salicylate-induced phantom auditory perception in rats.

    Science.gov (United States)

    Brennan, J F; Jastreboff, P J

    1991-01-01

    Tonal frequency generalization was examined in a total of 114 pigmented male rats, 60 of which were tested under the influence of salicylate-induced phantom auditory perception, introduced before or after lick suppression training. Thirty control subjects received saline injections, and the remaining 24 subjects served as noninjected controls of tonal background effects on generalization. Rats were continuously exposed to background noise alone or with a superimposed tone. Offset of background noise alone (Experiment I), or combined with onset or continuation of the tone (Experiments II and III) served as the conditioned stimulus (CS). In Experiment I, tone presentations were introduced only after suppression training. Depending on the time of salicylate introduction, a strong and differential influence on generalization gradients was observed, which is consistent with subjects' detection of salicylate-induced, high-pitched sound. Moreover, when either 12- or 3 kHz tones were introduced before or after Pavlovian training to mimic salicylate effects in 24 rats, the distortions in generalization gradients resembled trends obtained from respective salicylate injected groups. Experiments II and III were aimed at evaluating the masking effect of salicylate-induced phantom auditory perception on external sounds, with a 5- or a 10-kHz tone imposed continuously on the noise or presented only during the CS. Tests of tonal generalization to frequencies ranging from 4- to 11- kHz showed that in this experimental context salicylate-induced perception did not interfere with the dominant influence of external tones, a result that further strengthens the conclusion of Experiment I.

  20. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    Studies of the physiological activation of the human cerebral cortex, performed by measuring regional cerebral blood flow (CBF) after intracarotid Xenon-133 injection, are reviewed with emphasis on tests involving auditory perception and speech, an approach allowing Wernicke's and Broca's areas and their contralateral homologues to be visualized in vivo. The completely atraumatic tomographic CBF…

  1. Longitudinal Study of Speech Perception, Speech, and Language for Children with Hearing Loss in an Auditory-Verbal Therapy Program

    Science.gov (United States)

    Dornan, Dimity; Hickson, Louise; Murdoch, Bruce; Houston, Todd

    2009-01-01

    This study examined the speech perception, speech, and language developmental progress of 25 children with hearing loss (mean Pure-Tone Average [PTA] 79.37 dB HL) in an auditory verbal therapy program. Children were tested initially and then 21 months later on a battery of assessments. The speech and language results over time were compared with…

  2. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners with Normal Hearing

    Science.gov (United States)

    Millman, Rebecca E.; Mattys, Sven L.

    2017-01-01

    Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the…

  3. Speech perception enhancement in elderly hearing aid users using an auditory training program for mobile devices.

    Science.gov (United States)

    Yu, Jyaehyoung; Jeon, Hanjae; Song, Changgeun; Han, Woojae

    2017-01-01

    The goal of the present study was to develop an auditory training program using a mobile device and to test its efficacy by applying it to older adults suffering from moderate-to-severe sensorineural hearing loss. Among the 20 elderly hearing-impaired listeners who participated, 10 were randomly assigned to a training group (TG) and 10 were assigned to a non-training group (NTG) as a control. As a baseline, all participants were measured by vowel, consonant and sentence tests. In the experiment, the TG had been trained for 4 weeks using a mobile program, which had four levels and consisted of 10 Korean nonsense syllables, with each level completed in 1 week. In contrast, traditional auditory training had been provided for the NTG during the same period. To evaluate whether a training effect was achieved, the two groups also carried out the same tests as the baseline after completing the experiment. The results showed that performance on the consonant and sentence tests in the TG was significantly increased compared with that of the NTG. Also, improved scores of speech perception were retained at 2 weeks after the training was completed. However, vowel scores were not changed after the 4-week training in both the TG and the NTG. This result pattern suggests that a moderate amount of auditory training using the mobile device with cost-effective and minimal supervision is useful when it is used to improve the speech understanding of older adults with hearing loss. Geriatr Gerontol Int 2017; 17: 61-68. © 2015 Japan Geriatrics Society.

  4. Mapping a lateralisation gradient within the ventral stream for auditory speech perception

    Directory of Open Access Journals (Sweden)

    Karsten eSpecht

    2013-10-01

    Full Text Available Recent models on speech perception propose a dual stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend towards the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging (fMRI) studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere, with a particular focus on the temporal lobe and the ventral stream. As hypothesised, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a lateralisation gradient. This increasing leftward lateralisation was particularly evident for the left superior temporal sulcus (STS) and more anterior parts of the temporal lobe.

  5. Mapping a lateralization gradient within the ventral stream for auditory speech perception.

    Science.gov (United States)

    Specht, Karsten

    2013-01-01

    Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a "lateralization" gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe.

  6. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music.

    Science.gov (United States)

    Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.

  7. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Science.gov (United States)

    Lense, Miriam D.; Shivers, Carolyn M.; Dykens, Elisabeth M.

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia. PMID:23966965

  8. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment

    Directory of Open Access Journals (Sweden)

    Ignacio Spiousas

    2017-06-01

    Full Text Available Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth, but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). The results obtained in this study show that, depending on
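
    The two candidate distance cues discussed above can be made concrete with a toy calculation: the direct sound falls off by roughly 6 dB per doubling of distance, while a diffuse reverberant field is approximately distance-independent, so both the overall level and the direct-to-reverberant energy ratio (DRR) change systematically with distance. The sketch below illustrates this under those simplifying assumptions; none of the numbers are measurements from the study.

```python
import numpy as np

# Toy model of level and DRR versus source distance in a room. The reverberant
# level is treated as constant with distance (a common simplification); the
# -12 dB value and the distances are assumptions for illustration only.
distances = np.array([1.0, 2.0, 4.0, 6.0])         # metres
direct_db = -20 * np.log10(distances)              # direct path level re 1 m
reverb_db = -12.0                                  # assumed diffuse reverberant level
overall_db = 10 * np.log10(10 ** (direct_db / 10) + 10 ** (reverb_db / 10))
drr_db = direct_db - reverb_db                     # direct-to-reverberant ratio

for d, lvl, drr in zip(distances, overall_db, drr_db):
    print(f"{d:>3.0f} m: overall {lvl:5.1f} dB re 1 m, DRR {drr:5.1f} dB")
```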

  9. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis.

    Science.gov (United States)

    Evans, Samuel; Davis, Matthew H

    2015-12-01

    How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form; at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. © The Author 2015. Published by Oxford University Press.
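
    As a pocket illustration of the representational similarity logic described above, the sketch below builds a neural dissimilarity matrix for one hypothetical searchlight and correlates it with a syllable-identity model. The pattern matrix, the number of conditions, and the model definition are assumptions for illustration, not the study's actual design.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Minimal RSA for one searchlight: correlate a neural RDM with a model RDM
# that codes whether two conditions share the same syllable identity.
rng = np.random.default_rng(5)
n_conditions, n_voxels = 16, 100
patterns = rng.standard_normal((n_conditions, n_voxels))    # one pattern per condition
syllable = np.repeat(np.arange(4), 4)                       # 4 syllables x 4 surface forms (toy)

neural_rdm = pdist(patterns, metric="correlation")          # 1 - Pearson r per condition pair
model_rdm = pdist(syllable[:, None], metric="hamming")      # 0 = same syllable, 1 = different

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"syllable-identity model fit: Spearman rho = {rho:.3f} (p = {p:.3f})")
```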

  10. Auditory, Visual, and Auditory-Visual Perception of Emotions by Individuals with Cochlear Implants, Hearing Aids, and Normal Hearing

    Science.gov (United States)

    Most, Tova; Aviner, Chen

    2009-01-01

    This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust.…

  11. The effect of auditory perception training on reading performance of the 8-9-year old female students with dyslexia: A preliminary study

    Directory of Open Access Journals (Sweden)

    Nafiseh Vatandoost

    2014-01-01

    Full Text Available Background and Aim: Dyslexia is the most common learning disability. One of the main factors involved in this disability is impaired auditory perception, which causes many problems in education. We aimed to study the effect of auditory perception training on the reading performance of female students with dyslexia in the third grade of elementary school. Methods: Thirty-eight female students in the third grade of elementary schools in Khomeinishahr City, Iran, were selected by multistage cluster random sampling. Of them, 20 students who were diagnosed as dyslexic by a reading test and the Wechsler test were randomly divided into two equal groups, experimental and control. The experimental group received auditory perception training during ten 45-minute sessions, whereas no intervention was given to the control group. All participants were re-assessed with the reading test after the intervention (pre- and post-test method). Data were analyzed by analysis of covariance. Results: The effect of auditory perception training on reading performance (81%) was significant (p<0.0001) for all subtests except the separate compound word test. Conclusion: The findings of our study confirm the hypothesis that auditory perception training affects students' functional reading. Thus, auditory perception training appears to be necessary for students with dyslexia.
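
    The analysis of covariance mentioned above can be sketched in a few lines: post-test reading scores are modelled from a group indicator with the pre-test score as a covariate, and the adjusted group coefficient estimates the training effect. The simulated scores and effect size below are placeholders; only the structure of the analysis is illustrated.

```python
import numpy as np

# Toy pre/post analysis of covariance via ordinary least squares. Group sizes,
# score distributions, and the assumed training benefit are placeholders.
rng = np.random.default_rng(6)
n_per_group = 10
pre = rng.normal(50, 8, 2 * n_per_group)                    # pre-test reading scores
group = np.r_[np.ones(n_per_group), np.zeros(n_per_group)]  # 1 = training, 0 = control
post = pre + 5 * group + rng.normal(0, 3, 2 * n_per_group)  # assumed training benefit

X = np.column_stack([np.ones_like(post), group, pre])       # intercept, group, covariate
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
print(f"adjusted training effect on post-test: {beta[1]:.2f} points")
```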

  12. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    Science.gov (United States)

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  13. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners With Normal Hearing.

    Science.gov (United States)

    Millman, Rebecca E; Mattys, Sven L

    2017-05-24

    Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise ratios. Speech perception in noise and AVWM were measured in 30 listeners (age range 31-67 years) with normal hearing. AVWM was estimated using forward digit recall, backward digit recall, and nonword repetition. After controlling for the effects of age and average pure-tone hearing threshold, speech perception in modulated maskers was related to individual differences in the phonological component of working memory (as assessed by nonword repetition) but only in the least favorable signal-to-noise ratio. The executive component of working memory (as assessed by backward digit) was not predictive of speech perception in any conditions. AVWM is predictive of the ability to benefit from temporal dips in modulated maskers: Listeners with greater phonological WMC are better able to correctly identify sentences in modulated noise backgrounds.
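
    The covariate-controlled relationship reported above can be illustrated with a partial-correlation sketch: age and pure-tone average (PTA) are regressed out of both the speech-in-noise scores and the nonword repetition scores, and the residuals are correlated. All data below are simulated placeholders; only the analysis logic is shown.

```python
import numpy as np
from scipy.stats import pearsonr

# Partial correlation between nonword repetition and speech-in-noise score
# after removing age and PTA. Simulated data for illustration only.
rng = np.random.default_rng(7)
n = 30
age = rng.uniform(31, 67, n)                    # years
pta = rng.uniform(0, 20, n)                     # dB HL
nonword = rng.standard_normal(n)                # nonword repetition score (z)
speech = 0.5 * nonword + 0.02 * age + rng.standard_normal(n)

def residualize(y, covariates):
    # Residuals of y after ordinary least-squares regression on the covariates.
    X = np.column_stack([np.ones_like(y)] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r, p = pearsonr(residualize(speech, [age, pta]), residualize(nonword, [age, pta]))
print(f"partial correlation (controlling age, PTA): r = {r:.3f}, p = {p:.3f}")
```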

  14. Effects of auditory vection speed and directional congruence on perceptions of visual vection

    Science.gov (United States)

    Gagliano, Isabella Alexis

    Spatial disorientation is a major contributor to aircraft mishaps. One potential contributing factor is vection, an illusion of self-motion. Although vection is commonly thought of as a visual illusion, it can also be produced through audition. The purpose of the current experiment was to explore interactions between conflicting visual and auditory vection cues, specifically with regard to the speed and direction of rotation. The ultimate goal was to explore the extent to which aural vection could diminish or enhance the perception of visual vection. The study used a 3 x 2 within-groups factorial design. Participants were exposed to three levels of aural rotation velocity (slower, matched, and faster, relative to visual rotation speed) and two levels of aural rotational congruence (congruent or incongruent rotation), plus two control conditions (visual-only and aural-only). Dependent measures included vection onset time, vection direction judgements, subjective vection strength ratings, vection speed ratings, and horizontal nystagmus frequency. Subjective responses to motion were assessed pre- and post-treatment, and oculomotor responses were assessed before, during, and following exposure to circular vection. The results revealed a significant effect of stimulus condition on vection strength. Specifically, directionally-congruent aural-visual vection resulted in significantly stronger vection than visual and aural vection alone. Directionally-congruent aural-visual vection was perceived as slightly stronger than directionally-incongruent aural-visual vection, but not significantly so. No significant effects of aural rotation velocity on vection strength were observed. The results suggest directionally-incongruent aural vection could be used as a countermeasure for visual vection and directionally-congruent aural vection could be used to improve vection in virtual environments, provided further research is done.

  15. CNTN6 mutations are risk factors for abnormal auditory sensory perception in autism spectrum disorders.

    Science.gov (United States)

    Mercati, O; Huguet, G; Danckaert, A; André-Leroux, G; Maruani, A; Bellinzoni, M; Rolland, T; Gouder, L; Mathieu, A; Buratti, J; Amsellem, F; Benabou, M; Van-Gils, J; Beggiato, A; Konyukh, M; Bourgeois, J-P; Gazzellone, M J; Yuen, R K C; Walker, S; Delépine, M; Boland, A; Régnault, B; Francois, M; Van Den Abbeele, T; Mosca-Boidron, A L; Faivre, L; Shimoda, Y; Watanabe, K; Bonneau, D; Rastam, M; Leboyer, M; Scherer, S W; Gillberg, C; Delorme, R; Cloëz-Tayarani, I; Bourgeron, T

    2017-04-01

    Contactin genes CNTN5 and CNTN6 code for neuronal cell adhesion molecules that promote neurite outgrowth in sensory-motor neuronal pathways. Mutations of CNTN5 and CNTN6 have previously been reported in individuals with autism spectrum disorders (ASDs), but very little is known about their prevalence and clinical impact. In this study, we identified CNTN5 and CNTN6 deleterious variants in individuals with ASD. Among the carriers, a girl with ASD and attention-deficit/hyperactivity disorder was carrying five copies of CNTN5. For CNTN6, both deletions (6/1534 ASD vs 1/8936 controls; P=0.00006) and private coding sequence variants (18/501 ASD vs 535/33480 controls; P=0.0005) were enriched in individuals with ASD. Among the rare CNTN6 variants, two deletions were transmitted by fathers diagnosed with ASD, one stop mutation CNTN6 W923X was transmitted by a mother to her two sons with ASD, and one variant CNTN6 P770L was found de novo in a boy with ASD. Clinical investigations of the patients carrying CNTN5 or CNTN6 variants showed that they were hypersensitive to sounds (a condition called hyperacusis) and displayed changes in wave latency within the auditory pathway. These results reinforce the hypothesis of abnormal neuronal connectivity in the pathophysiology of ASD and shed new light on the genes that increase risk for abnormal sensory perception in ASD.

  16. GRM7 variants associated with age-related hearing loss based on auditory perception

    Science.gov (United States)

    Newman, Dina L.; Fisher, Laurel M.; Ohmen, Jeffrey; Parody, Robert; Fong, Chin-To; Frisina, Susan T.; Mapes, Frances; Eddins, David A.; Frisina, D. Robert; Frisina, Robert D.; Friedman, Rick A.

    2012-01-01

    Age-related hearing impairment (ARHI), or presbycusis, is a common condition of the elderly that results in significant communication difficulties in daily life. Clinically, it has been defined as a progressive loss of sensitivity to sound, starting at the high frequencies, an inability to understand speech, a lengthening of the minimum discernable temporal gap in sounds, and a decrease in the ability to filter out background noise. The causes of presbycusis are likely a combination of environmental and genetic factors. Previous research into the genetics of presbycusis has focused solely on hearing as measured by pure-tone thresholds. A few loci have been identified, based on a best-ear pure-tone average phenotype, as having a likely role in susceptibility to this type of hearing loss, and GRM7 is the only gene that has achieved genome-wide significance. We examined the association of GRM7 variants identified from the previous study, which used a European cohort with Z-scores based on pure-tone thresholds, in a European–American population from Rochester, NY (N = 687), and used novel phenotypes of presbycusis. In the present study mixed modeling analyses were used to explore the relationship of GRM7 haplotype and SNP genotypes with various measures of auditory perception. Here we show that GRM7 alleles are associated primarily with peripheral measures of hearing loss, and particularly with speech detection in older adults. PMID:23102807

  17. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Directory of Open Access Journals (Sweden)

    Georg Berding

    Full Text Available Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  18. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Science.gov (United States)

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  19. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers

    Directory of Open Access Journals (Sweden)

    Beverly Hannah

    2017-12-01

    Full Text Available Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers' performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.

  20. The effects of noise exposure and musical training on suprathreshold auditory processing and speech perception in noise.

    Science.gov (United States)

    Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey

    2017-09-01

    Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing, by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception, and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high frequency hearing thresholds and medial olivocochlear suppression strength are important factors that are related to the ability to process speech in noise. Crown Copyright © 2017.
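
    The abstract above notes that accumulated lifetime noise exposure was calculated from self-report. As a minimal, hypothetical sketch of one common way such a figure is accumulated (the Pa²-hour unit, the 52-week year, and the field names are assumptions for illustration, not the authors' method), each reported activity can be converted to linear energy units and summed:

    # Python sketch: accumulate self-reported noise exposure in Pa^2-hours.
    # 94 dB SPL corresponds to 1 Pa^2; activities are (level dB(A), hours/week, years).
    def lifetime_noise_exposure_pa2h(activities):
        total = 0.0
        for level_dba, hours_per_week, years in activities:
            hours = hours_per_week * 52.0 * years
            total += hours * 10 ** ((level_dba - 94.0) / 10.0)
        return total

    # Example: ten years of weekly three-hour gigs at ~100 dB(A) plus quieter daily noise.
    example = [(100.0, 3.0, 10.0), (75.0, 10.0, 20.0)]
    print(round(lifetime_noise_exposure_pa2h(example), 2))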

  1. Musical experience, auditory perception and reading-related skills in children.

    Science.gov (United States)

    Banai, Karen; Ahissar, Merav

    2013-01-01

    The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case.

  2. Musical experience, auditory perception and reading-related skills in children.

    Directory of Open Access Journals (Sweden)

    Karen Banai

    Full Text Available BACKGROUND: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. METHODOLOGY/PRINCIPAL FINDINGS: Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. CONCLUSIONS/SIGNIFICANCE: Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and

  3. Towards an understanding of the mechanisms of weak central coherence effects: experiments in visual configural learning and auditory perception.

    Science.gov (United States)

    Plaisted, Kate; Saksida, Lisa; Alcántara, José; Weisblatt, Emma

    2003-01-01

    The weak central coherence hypothesis of Frith is one of the most prominent theories concerning the abnormal performance of individuals with autism on tasks that involve local and global processing. Individuals with autism often outperform matched nonautistic individuals on tasks in which success depends upon processing of local features, and underperform on tasks that require global processing. We review those studies that have been unable to identify the locus of the mechanisms that may be responsible for weak central coherence effects and those that show that local processing is enhanced in autism but not at the expense of global processing. In the light of these studies, we propose that the mechanisms which can give rise to 'weak central coherence' effects may be perceptual. More specifically, we propose that perception operates to enhance the representation of individual perceptual features but that this does not impact adversely on representations that involve integration of features. This proposal was supported in the two experiments we report on configural and feature discrimination learning in high-functioning children with autism. We also examined processes of perception directly, in an auditory filtering task which measured the width of auditory filters in individuals with autism, and found that the widths of auditory filters in autism were abnormally broad. We consider the implications of these findings for perceptual theories of the mechanisms underpinning weak central coherence effects. PMID:12639334

  4. Auditory, Visual, and Auditory-Visual Speech Perception by Individuals with Cochlear Implants versus Individuals with Hearing Aids

    Science.gov (United States)

    Most, Tova; Rothem, Hilla; Luntz, Michal

    2009-01-01

    The researchers evaluated the contribution of cochlear implants (CIs) to speech perception by a sample of prelingually deaf individuals implanted after age 8 years. This group was compared with a group with profound hearing impairment (HA-P), and with a group with severe hearing impairment (HA-S), both of which used hearing aids. Words and…

  5. Auditory, Visual, and Auditory-Visual Perceptions of Emotions by Young Children with Hearing Loss versus Children with Normal Hearing

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-01-01

    Purpose: This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. Method: A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify…

  6. Auditory-Visual Speech Perception in Three- and Four-Year-Olds and Its Relationship to Perceptual Attunement and Receptive Vocabulary

    Science.gov (United States)

    Erdener, Dogu; Burnham, Denis

    2018-01-01

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…

  7. Effect of 24 Hours of Sleep Deprivation on Auditory and Linguistic Perception: A Comparison among Young Controls, Sleep-Deprived Participants, Dyslexic Readers, and Aging Adults

    Science.gov (United States)

    Fostick, Leah; Babkoff, Harvey; Zukerman, Gil

    2014-01-01

    Purpose: To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Method: Fifty-five sleep-deprived young adults were compared with 29…

  8. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    Science.gov (United States)

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  9. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 in both the spatially congruent and incongruent conditions, and the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  10. From perception to metacognition: Auditory and olfactory functions in early blind, late blind, and sighted individuals

    Directory of Open Access Journals (Sweden)

    Stina Cornell Kärnekull

    2016-09-01

    Full Text Available Although evidence is mixed, studies have shown that blind individuals perform better than sighted at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests in absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition, which was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity.

  11. A loudspeaker-based room auralisation (LoRA) system for auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    Most research on understanding the signal processing of the auditory system has been realized in anechoic or almost anechoic environments. The knowledge derived from these experiments cannot be directly transferred to reverberant environments. In order to investigate the auditory signal processing...... are utilized to realise highly authentic room reverberation. This system aims at providing a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. An overall description...

  12. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    Science.gov (United States)

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regards to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re

  13. Proprioceptive event related potentials: gating and task effects

    DEFF Research Database (Denmark)

    Arnfred, Sidse M

    2005-01-01

    The integration of proprioception with vision, touch or audition is considered basic to the developmental formation of perceptions, conceptual objects and the creation of cognitive schemes. Thus, mapping of proprioceptive information processing is important in cognitive research. A stimulus...... of a brisk change of weight on a hand-held load elicits a proprioceptive evoked potential (PEP). Here this is used to examine early and late information processing related to weight discrimination by event related potentials (ERP)....

  14. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    .... This thesis presents an integrated auditory system for a humanoid robot, currently under development, that will, among other things, learn to localize normal, everyday sounds in a realistic environment...

  15. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

    Full Text Available Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g., cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type

  16. Effects of inter- and intramodal selective attention to non-spatial visual stimuli: An event-related potential analysis.

    NARCIS (Netherlands)

    de Ruiter, M.B.; Kok, A.; van der Schoot, M.

    1998-01-01

    Event-related potentials (ERPs) were recorded to trains of rapidly presented auditory and visual stimuli. ERPs in conditions in which Ss attended to different features of visual stimuli were compared with ERPs to the same type of stimuli when Ss attended to different features of auditory stimuli,

  17. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  18. Neural Networks for Segregation of Multiple Objects: Visual Figure-Ground Separation and Auditory Pitch Perception.

    Science.gov (United States)

    Wyse, Lonce

    An important component of perceptual object recognition is the segmentation into coherent perceptual units of the "blooming buzzing confusion" that bombards the senses. The work presented herein develops neural network models of some key processes of pre-attentive vision and audition that serve this goal. A neural network model, called an FBF (Feature -Boundary-Feature) network, is proposed for automatic parallel separation of multiple figures from each other and their backgrounds in noisy images. Figure-ground separation is accomplished by iterating operations of a Boundary Contour System (BCS) that generates a boundary segmentation of a scene, and a Feature Contour System (FCS) that compensates for variable illumination and fills-in surface properties using boundary signals. A key new feature is the use of the FBF filling-in process for the figure-ground separation of connected regions, which are subsequently more easily recognized. The new CORT-X 2 model is a feed-forward version of the BCS that is designed to detect, regularize, and complete boundaries in up to 50 percent noise. It also exploits the complementary properties of on-cells and off -cells to generate boundary segmentations and to compensate for boundary gaps during filling-in. In the realm of audition, many sounds are dominated by energy at integer multiples, or "harmonics", of a fundamental frequency. For such sounds (e.g., vowels in speech), the individual frequency components fuse, so that they are perceived as one sound source with a pitch at the fundamental frequency. Pitch is integral to separating auditory sources, as well as to speaker identification and speech understanding. A neural network model of pitch perception called SPINET (SPatial PItch NETwork) is developed and used to simulate a broader range of perceptual data than previous spectral models. The model employs a bank of narrowband filters as a simple model of basilar membrane mechanics, spectral on-center off-surround competitive

  19. Operator auditory perception and spectral quantification of umbilical artery Doppler ultrasound signals.

    Directory of Open Access Journals (Sweden)

    Ann Thuring

    Full Text Available OBJECTIVE: An experienced sonographer can, by listening to the Doppler audio signals, perceive various timbres that distinguish different types of umbilical artery flow despite an unchanged pulsatility index (PI). Our aim was to develop an objective measure of the Doppler audio signals recorded from the fetoplacental circulation in a sheep model. METHODS: Various degrees of pathological flow velocity waveforms in the umbilical artery, similar to those in complicated human pregnancies, were induced by microsphere embolization of the placental bed (embolization model, 7 lamb fetuses, 370 Doppler recordings) or by fetal hemodilution (anemia model, 4 lamb fetuses, 184 recordings). A subjective 11-step operator auditory scale (OAS) was related to conventional Doppler parameters, PI and time average mean velocity (TAM), and to sound frequency analysis of the Doppler signals (the sound frequency with the maximum energy content [MAXpeak] and the frequency band at maximum level minus 15 dB [MAXpeak-15 dB]) over several heart cycles. RESULTS: We found a negative correlation between the OAS and PI: median Rho -0.73 (range -0.35 to -0.94) and -0.68 (range -0.57 to -0.78) in the two lamb models, respectively. There was a positive correlation between OAS and TAM in both models: median Rho 0.80 (range 0.58-0.95) and 0.90 (range 0.78-0.95), respectively. A strong correlation was found between TAM and the results of the sound spectrum analysis; in the embolization model the median r was 0.91 (range 0.88-0.97) for MAXpeak and 0.91 (range 0.82-0.98) for MAXpeak-15 dB. In the anemia model, the corresponding values were 0.92 (range 0.78-0.96) and 0.96 (range 0.89-0.98), respectively. CONCLUSION: Audio-spectrum analysis reflects the subjective perception of Doppler sound signals in the umbilical artery and has a strong correlation to TAM velocity. This information might be of importance for the clinical management of complicated pregnancies as an addition to conventional Doppler parameters.
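
    The MAXpeak and MAXpeak-15 dB measures described above are, in essence, simple spectral statistics. Below is a minimal sketch of how they could be computed from a digitized Doppler audio segment; the sampling rate, windowing and function names are illustrative assumptions, not the processing actually used in the study.

    # Python sketch: frequency of maximum energy (MAXpeak) and the band whose
    # level stays within 15 dB of that peak (MAXpeak-15 dB), for one audio segment.
    import numpy as np

    def doppler_spectral_measures(segment, fs):
        windowed = segment * np.hanning(len(segment))
        power = np.abs(np.fft.rfft(windowed)) ** 2
        freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
        peak = np.argmax(power)                       # bin with maximum energy content
        level_db = 10.0 * np.log10(power / power[peak])
        in_band = np.where(level_db >= -15.0)[0]      # bins within 15 dB of the peak
        return freqs[peak], (freqs[in_band[0]], freqs[in_band[-1]])

    # Example on a synthetic segment dominated by a 600 Hz component plus noise.
    fs = 8000
    t = np.arange(fs) / fs
    test = np.sin(2 * np.pi * 600 * t) + 0.1 * np.random.randn(fs)
    print(doppler_spectral_measures(test, fs))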

  20. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  1. Perception of non-verbal auditory stimuli in Italian dyslexic children.

    Science.gov (United States)

    Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo

    2010-01-01

    Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14, and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds were expressly created. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of inter-stimulus intervals (ISIs).

  2. The Role of Temporal Envelope and Fine Structure in Mandarin Lexical Tone Perception in Auditory Neuropathy Spectrum Disorder.

    Directory of Open Access Journals (Sweden)

    Shuo Wang

    Full Text Available Temporal information in a signal can be partitioned into temporal envelope (E) and fine structure (FS). Fine structure is important for lexical tone perception for normal-hearing (NH) listeners, and listeners with sensorineural hearing loss (SNHL) have an impaired ability to use FS in lexical tone perception due to reduced frequency resolution. The present study aimed to assess which of the acoustic aspects (E or FS) played a more important role in lexical tone perception in subjects with auditory neuropathy spectrum disorder (ANSD) and to determine whether it was the deficit in temporal resolution or frequency resolution that might lead to more detrimental effects on FS processing in pitch perception. Fifty-eight native Mandarin Chinese-speaking subjects (27 with ANSD, 16 with SNHL, and 15 with NH) were assessed for (1) their ability to recognize lexical tones using acoustic E or FS cues with the "auditory chimera" technique, (2) temporal resolution as measured with the temporal gap detection (TGD) threshold, and (3) frequency resolution as measured with the Q(10dB) values of the psychophysical tuning curves. Overall, 26.5%, 60.2%, and 92.1% of lexical tone responses were consistent with FS cues for tone perception for listeners with ANSD, SNHL, and NH, respectively. The mean TGD threshold was significantly higher for listeners with ANSD (11.9 ms) than for SNHL (4.0 ms; p < 0.001) and NH (3.9 ms; p < 0.001) listeners, with no significant difference between SNHL and NH listeners. In contrast, the mean Q(10dB) for listeners with SNHL (1.8 ± 0.4) was significantly lower than that for ANSD (3.5 ± 1.0; p < 0.001) and NH (3.4 ± 0.9; p < 0.001) listeners, with no significant difference between ANSD and NH listeners. These results suggest that reduced temporal resolution, as opposed to reduced frequency selectivity, in ANSD subjects leads to greater degradation of FS processing for pitch perception.
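
    The E/FS partition referred to above is commonly obtained per frequency band from the analytic signal (Hilbert transform); the band edges, filter order and names in the sketch below are assumptions for illustration and are not the chimera processing used in the study.

    # Python sketch: split one frequency band of a signal into temporal envelope (E)
    # and temporal fine structure (FS) using the analytic signal.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def envelope_and_fine_structure(x, fs, band=(80.0, 5000.0), order=4):
        sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
        band_signal = sosfiltfilt(sos, x)
        analytic = hilbert(band_signal)
        envelope = np.abs(analytic)                   # E: slow amplitude fluctuations
        fine_structure = np.cos(np.angle(analytic))   # FS: fast carrier oscillations
        return envelope, fine_structure

    # Example: a 220 Hz tone carrying a 4 Hz amplitude modulation.
    fs = 16000
    t = np.arange(fs) / fs
    tone = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 220 * t)
    E, FS = envelope_and_fine_structure(tone, fs)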

  3. Bird brains and songs : Neural mechanisms of auditory memory and perception in zebra finches

    NARCIS (Netherlands)

    Gobes, S.M.H.|info:eu-repo/dai/nl/304832669

    2009-01-01

    Songbirds, such as zebra finches, learn their songs from a ‘tutor’ (usually the father), early in life. There are strong parallels between the behavioural, cognitive and neural processes that underlie vocal learning in humans and songbirds. In both cases there is a sensitive period for auditory

  4. The Central Role of Recognition in Auditory Perception: A Neurobiological Model

    Science.gov (United States)

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…

  5. Discussion: Changes in Vocal Production and Auditory Perception after Hair Cell Regeneration.

    Science.gov (United States)

    Ryals, Brenda M.; Dooling, Robert J.

    2000-01-01

    A bird study found that, with sufficient time and training after hair cell loss, hearing loss, and subsequent hair cell regeneration, the mature avian auditory system can accommodate input from a newly regenerated periphery sufficiently to allow for recognition of previously familiar vocalizations and the learning of new complex acoustic classifications.…

  6. The Effects of Compensatory Auditory Stimulation and High-Definition Transcranial Direct Current Stimulation (HD-tDCS) on Tinnitus Perception - A Randomized Pilot Study.

    Science.gov (United States)

    Henin, Simon; Fein, Dovid; Smouha, Eric; Parra, Lucas C

    2016-01-01

    Tinnitus correlates with elevated hearing thresholds and reduced cochlear compression. We hypothesized that reduced peripheral input leads to elevated neuronal gain resulting in the perception of a phantom sound. The purpose of this pilot study was to test whether compensating for this peripheral deficit could reduce the tinnitus percept acutely using customized auditory stimulation. To further enhance the effects of auditory stimulation, this intervention was paired with high-definition transcranial direct current stimulation (HD-tDCS). A randomized sham-controlled, single-blind study was conducted in a clinical setting on adult participants with chronic tinnitus (n = 14). Compensatory auditory stimulation (CAS) and HD-tDCS were administered either individually or in combination in order to assess the effects of both interventions on tinnitus perception. CAS consisted of sound exposure typical of daily living (a 20-minute sound-track of a TV show), which was adapted with compressive gain to compensate for deficits in each subject's individual audiogram. Minimum masking levels and the visual analog scale were used to assess the strength of the tinnitus percept immediately before and after the treatment intervention. CAS reduced minimum masking levels, and visual analog scale ratings trended towards improvement. Effects of HD-tDCS could not be resolved with the current sample size. The results of this pilot study suggest that providing tailored auditory stimulation with frequency-specific gain and compression may alleviate tinnitus in a clinical population. Further experimentation with longer interventions is warranted in order to optimize effect sizes.
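
    As a rough sketch of how frequency-specific gain compensation of the kind described above could be prototyped, an audiogram can be turned into per-band gains that are applied with band-pass filters. The half-gain rule, the octave-band layout and all names below are assumptions chosen for illustration; the study applied compressive gain in a customized fitting, which this simple linear sketch does not reproduce.

    # Python sketch: derive simple per-band linear gains from an audiogram
    # (hearing thresholds in dB HL) and apply them to a signal.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    audiogram = {500: 15, 1000: 20, 2000: 35, 4000: 50, 8000: 60}  # hypothetical thresholds

    def compensate(x, fs, audiogram, normal_threshold=20.0):
        y = np.zeros_like(x)
        for f in sorted(audiogram):
            lo, hi = f / np.sqrt(2.0), min(f * np.sqrt(2.0), 0.45 * fs)
            sos = butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, x)
            loss = max(audiogram[f] - normal_threshold, 0.0)
            gain_db = 0.5 * loss                      # "half-gain" style rule, assumed here
            y += band * 10 ** (gain_db / 20.0)
        return y

    fs = 22050
    shaped = compensate(np.random.randn(fs), fs, audiogram)   # one second of test noise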

  7. Olfactory short-term memory encoding and maintenance - an event-related potential study.

    Science.gov (United States)

    Lenk, Steffen; Bluschke, Annet; Beste, Christian; Iannilli, Emilia; Rößner, Veit; Hummel, Thomas; Bender, Stephan

    2014-09-01

    This study examined whether the memory encoding and short-term maintenance of olfactory stimuli is associated with neurophysiological activation patterns which parallel those described for sensory modalities such as vision and audition. We examined olfactory event-related potentials in an olfactory change detection task in twenty-four healthy adults and compared the measured activation to that found during passive olfactory stimulation. During the early olfactory post-processing phase, we found a sustained negativity over bilateral frontotemporal areas in the passive perception condition which was enhanced in the active memory task. There was no significant lateralization in either experimental condition. During the maintenance interval at the end of the delay period, we still found sustained activation over bilateral frontotemporal areas which was more negative in trials with correct - as compared to incorrect - behavioural responses. This was complemented by a generally significantly stronger frontocentral activation. Summarizing, we were able to show that olfactory short-term memory involves a parallel sequence of activation as found in other sensory modalities. In addition to olfactory-specific frontotemporal activations in the memory encoding phase, we found slow cortical potentials over frontocentral areas during the memory maintenance phase, indicating the activation of a supramodal memory maintenance system. These findings could represent the neurophysiological underpinning of the 'olfactory flacon', the olfactory counterpart to the visual sketchpad and phonological loop embedded in Baddeley's working memory model. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Auditory Spatial Perception: Auditory Localization

    Science.gov (United States)

    2012-05-01

    body movements that affect localization performance are tipping the chin toward the chest, tilting the body, or pivoting the head toward one or the... Compensatory Strategies. Brain 2002, 125, 1039–1053. Lee, P. L.; Wang, J. H. The Simulation of Binaural Hearing Caused by a Moving Sound Source... during the listening task changes the listener's listening plane. With a tilted head, the listener is pointing in an oblique plane that

  9. Auditory-Acoustic Basis of Consonant Perception. Attachments A thru I

    Science.gov (United States)

    1991-01-22

    conceptual model of the processes whereby the human listener converts the acoustic signal into a string of phonetic elements could be successfully implemented... perceptual aspect is implied. It is within the broad framework described above that the auditory-perceptual theory will be considered. But before beginning... perceptual and not acoustic or sensory. For example, it is planned to conceptualize the target zones for stops as being physically unrealizable by letting

  10. An investigation of the auditory perception of western lowland gorillas in an enrichment study.

    Science.gov (United States)

    Brooker, Jake S

    2016-09-01

    Previous research has highlighted the varied effects of auditory enrichment on different captive animals. This study investigated how manipulating musical components can influence the behavior of a group of captive western lowland gorillas (Gorilla gorilla gorilla) at Bristol Zoo. The gorillas were observed during exposure to classical music, rock-and-roll music, and rainforest sounds. The two music conditions were modified to create five further conditions: unmanipulated, decreased pitch, increased pitch, decreased tempo, and increased tempo. We compared the prevalence of activity, anxiety, and social behaviors between the standard conditions. We also compared the prevalence of each of these behaviors across the manipulated conditions of each type of music independently and collectively. Control observations with no sound exposure were regularly scheduled between the observations of the 12 auditory conditions. The results suggest that naturalistic rainforest sounds had no influence on the anxiety of captive gorillas, contrary to past research. The tempo of music appears to be significantly associated with activity levels among this group, and social behavior may be affected by pitch. Low tempo music also may be effective at reducing anxiety behavior in captive gorillas. Regulated auditory enrichment may provide effective means of calming gorillas, or for facilitating active behavior. Zoo Biol. 35:398-408, 2016. © 2016 Wiley Periodicals, Inc.

  11. Single-Sided Deafness: Impact of Cochlear Implantation on Speech Perception in Complex Noise and on Auditory Localization Accuracy.

    Science.gov (United States)

    Döge, Julia; Baumann, Uwe; Weissgerber, Tobias; Rader, Tobias

    2017-12-01

    To assess auditory localization accuracy and speech reception threshold (SRT) in complex noise conditions in adult patients with acquired single-sided deafness, after intervention with a cochlear implant (CI) in the deaf ear. Nonrandomized, open, prospective patient series. Tertiary referral university hospital. Eleven patients with late-onset single-sided deafness (SSD) and normal hearing in the unaffected ear, who received a CI. All patients were experienced CI users. Unilateral cochlear implantation. Speech perception was tested in a complex multitalker equivalent noise field consisting of multiple sound sources. Speech reception thresholds in noise were determined in aided (with CI) and unaided conditions. Localization accuracy was assessed in complete darkness. Acoustic stimuli were radiated by multiple loudspeakers distributed in the frontal horizontal plane between -60 and +60 degrees. In the aided condition, results show slightly improved speech reception scores compared with the unaided condition in most of the patients. For 8 of the 11 subjects, SRT was improved between 0.37 and 1.70 dB. Three of the 11 subjects showed deteriorations between 1.22 and 3.24 dB SRT. Median localization error decreased significantly by 12.9 degrees compared with the unaided condition. CI in single-sided deafness is an effective treatment to improve the auditory localization accuracy. Speech reception in complex noise conditions is improved to a lesser extent in 73% of the participating CI SSD patients. However, the absence of true binaural interaction effects (summation, squelch) impedes further improvements. The development of speech processing strategies that respect binaural interaction seems to be mandatory to advance speech perception in demanding listening situations in SSD patients.

  12. Simultaneity and Temporal Order Judgments Are Coded Differently and Change With Age: An Event-Related Potential Study

    Directory of Open Access Journals (Sweden)

    Aysha Basharat

    2018-04-01

    Full Text Available Multisensory integration is required for a number of daily living tasks where the inability to accurately identify simultaneity and temporality of multisensory events results in errors in judgment, leading to poor decision-making and dangerous behavior. Previously, our lab discovered that older adults exhibited impaired timing of audiovisual events, particularly when making temporal order judgments (TOJs). Simultaneity judgments (SJs), however, were preserved across the lifespan. Here, we investigate the difference between the TOJ and SJ tasks in younger and older adults to assess neural processing differences between these two tasks and across the lifespan. Event-related potentials (ERPs) were studied to determine between-task and between-age differences. Results revealed task-specific differences in perceiving simultaneity and temporal order, suggesting that each task may be subserved via different neural mechanisms. Here, auditory N1 and visual P1 ERP amplitudes confirmed that unisensory processing of audiovisual stimuli did not differ between the two tasks within both younger and older groups, indicating that performance differences between tasks arise either from multisensory integration or higher-level decision-making. Compared to younger adults, older adults showed a sustained higher auditory N1 ERP amplitude response across SOAs, suggestive of broader response properties from an extended temporal binding window. Our work provides compelling evidence that different neural mechanisms subserve the SJ and TOJ tasks and that simultaneity and temporal order perception are coded differently and change with age.

  13. A new method for detecting interactions between the senses in event-related potentials

    DEFF Research Database (Denmark)

    Gondan, Matthias; Röder, B.

    2006-01-01

    Event-related potentials (ERPs) can be used in multisensory research to determine the point in time when different senses start to interact, for example, the auditory and the visual system. For this purpose, the ERP to bimodal stimuli (AV) is often compared to the sum of the ERPs to auditory (A) and visual (V) stimuli: AV - (A + V). If the result is non-zero, this is interpreted as an indicator for multisensory interactions. Using this method, several studies have demonstrated auditory-visual interactions as early as 50 ms after stimulus onset. The subtraction requires that A, V, and AV do not contain common activity: This activity would be subtracted twice from one ERP and would, therefore, contaminate the result. In the present study, ERPs to unimodal, bimodal, and trimodal auditory, visual, and tactile stimuli (T) were recorded. We demonstrate that (T + TAV) - (TA + TV) is equivalent to AV...
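
    To make the logic of the subtraction explicit, here is a short worked derivation in notation introduced for illustration (it is not taken from the paper): suppose every recorded ERP contains the stimulus-specific response plus a common, stimulus-unspecific component C (for example, anticipatory activity). Then

    \[
    (AV + C) - \bigl[(A + C) + (V + C)\bigr] = AV - A - V - C,
    \]
    \[
    \bigl[(T + C) + (TAV + C)\bigr] - \bigl[(TA + C) + (TV + C)\bigr] = T + TAV - TA - TV.
    \]

    The classical difference is therefore contaminated by the doubly subtracted common activity (-C), whereas in the tactile-based contrast C cancels, which is why the abstract proposes (T + TAV) - (TA + TV) as a substitute for AV - (A + V) when testing for auditory-visual interactions.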

  14. Impaired Pitch Perception and Memory in Congenital Amusia: The Deficit Starts in the Auditory Cortex

    Science.gov (United States)

    Albouy, Philippe; Mattout, Jeremie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaetan; Aguera, Pierre-Emmanuel; Daligault, Sebastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara

    2013-01-01

    Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a…

  15. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users.

    Science.gov (United States)

    Harris, Jill; Kamke, Marc R

    2014-11-01

    Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. Functional associations at global brain level during perception of an auditory illusion by applying maximal information coefficient

    Science.gov (United States)

    Bhattacharya, Joydeep; Pereda, Ernesto; Ioannou, Christos

    2018-02-01

    Maximal information coefficient (MIC) is a recently introduced information-theoretic measure of functional association with a promising potential of application to high dimensional complex data sets. Here, we applied MIC to reveal the nature of the functional associations between different brain regions during the perception of binaural beat (BB); BB is an auditory illusion occurring when two sinusoidal tones of slightly different frequency are presented separately to each ear and an illusory beat at the difference frequency is perceived. We recorded sixty-four-channel EEG from two groups of participants, musicians and non-musicians, during the presentation of BB, and systematically varied the frequency difference from 1 Hz to 48 Hz. Participants were also presented non-binaural beat (NBB) stimuli, in which the same frequency was presented to both ears. Across groups, as compared to NBB, (i) BB conditions produced the most robust changes in the MIC values at the whole brain level when the frequency differences were in the classical alpha range (8-12 Hz), and (ii) the number of electrode pairs showing nonlinear associations decreased gradually with increasing frequency difference. Between groups, significant effects were found for BBs in the broad gamma frequency range (34-48 Hz), but such effects were not observed between groups during NBB. Altogether, these results revealed the nature of functional associations at the whole brain level during binaural beat perception and demonstrated the usefulness of MIC in characterizing interregional neural dependencies.
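
    As a rough illustration of how such a whole-brain MIC analysis could be set up, the sketch below computes pairwise MIC values between EEG channels. It assumes the minepy package's MINE estimator and uses random data in place of the recorded EEG; the parameter values (alpha, c) are illustrative defaults, not the settings used in the study.

```python
import numpy as np
from minepy import MINE  # assumed: the minepy implementation of MIC

def mic_matrix(eeg, alpha=0.6, c=15):
    """Pairwise MIC between EEG channels.

    eeg : array of shape (n_channels, n_samples)
    Returns a symmetric (n_channels, n_channels) matrix of MIC values.
    """
    n_channels = eeg.shape[0]
    mic = np.zeros((n_channels, n_channels))
    estimator = MINE(alpha=alpha, c=c)
    for i in range(n_channels):
        for j in range(i + 1, n_channels):
            estimator.compute_score(eeg[i], eeg[j])
            mic[i, j] = mic[j, i] = estimator.mic()
    return mic

# Hypothetical usage: one 64-channel EEG segment from a binaural-beat trial.
eeg_segment = np.random.randn(64, 2048)
connectivity = mic_matrix(eeg_segment)
```

    Thresholding or statistically testing such a matrix per condition would then give the counts of associated electrode pairs that the record compares across frequency differences.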

  17. Context-dependent Changes in Functional Connectivity of Auditory Cortices during the Perception of Object Words

    NARCIS (Netherlands)

    Dam, W.O. van; Dongen, E.V. van; Bekkering, H.; Rüschemeyer, S.A.

    2012-01-01

    Embodied theories hold that cognitive concepts are grounded in our sensorimotor systems. Specifically, a number of behavioral and neuroimaging studies have buttressed the idea that language concepts are represented in areas involved in perception and action [Pulvermueller, F. Brain mechanisms

  18. Context-dependent changes in functional connectivity of auditory cortices during the perception of object words

    NARCIS (Netherlands)

    van Dam, W.O.; Dongen, E.V. van; Bekkering, H.; Rueschemeyer, S.A.

    2012-01-01

    Embodied theories hold that cognitive concepts are grounded in our sensorimotor systems. Specifically, a number of behavioral and neuroimaging studies have buttressed the idea that language concepts are represented in areas involved in perception and action [Pulvermueller, F. Brain mechanisms

  19. Metal Sounds Stiffer than Drums for Ears, but Not Always for Hands: Low-Level Auditory Features Affect Multisensory Stiffness Perception More than High-Level Categorical Information

    Science.gov (United States)

    Liu, Juan; Ando, Hiroshi

    2016-01-01

    Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior

  20. The use of auditory and visual context in speech perception by listeners with normal hearing and listeners with cochlear implants

    Directory of Open Access Journals (Sweden)

    Matthew eWinn

    2013-11-01

    Full Text Available There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants can do the same. In this study, listeners with normal hearing (NH) and listeners with cochlear implants (CIs) were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /ʃ/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but showed considerable success. Listeners with cochlear implants not only showed sensitivity to auditory cues to gender, they were also able to use visual cues to gender (i.e., faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with normal hearing. Spectrally-degraded stimuli heard by listeners with normal hearing generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with cochlear implants are able to accommodate various sources of acoustic variability either by attending to appropriate acoustic cues or by inferring them via the visual signal.
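
    To make the boundary-shift measure concrete, the sketch below estimates a single listener's /s/-/ʃ/ category boundary with an ordinary logistic regression; the study itself used mixed-effects logistic regression across listeners and contexts, and the simulated continuum, slope, and responses here are purely illustrative.

```python
import numpy as np
import statsmodels.api as sm

# Simulated single listener: stimuli vary along a 9-step /s/-/ʃ/ spectral
# continuum; responses code whether /s/ was reported.
rng = np.random.default_rng(1)
continuum_step = np.repeat(np.arange(1, 10), 20).astype(float)
p_s = 1.0 / (1.0 + np.exp(-1.2 * (continuum_step - 5.0)))
reported_s = (rng.random(continuum_step.size) < p_s).astype(int)

X = sm.add_constant(continuum_step)
fit = sm.GLM(reported_s, X, family=sm.families.Binomial()).fit()
intercept, slope = fit.params

# The perceptual boundary is the continuum value where P(/s/) = 0.5;
# context effects appear as shifts of this boundary between conditions.
boundary = -intercept / slope
print(f"estimated /s/-/ʃ/ boundary at continuum step {boundary:.2f}")
```

    A context effect then corresponds to fitting the same model separately for each auditory or visual context and comparing the resulting boundary estimates.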

  1. The Neuromagnetic Dynamics of Time Perception

    OpenAIRE

    Carver, Frederick W.; Elvevåg, Brita; Altamura, Mario; Weinberger, Daniel R.; Coppola, Richard

    2012-01-01

    Examining real-time cortical dynamics is crucial for understanding time perception. Using magnetoencephalography we studied auditory duration discrimination of short (.5 s) versus a pitch control. Time-frequency analysis of event-related fields showed widespread beta-band (13-30 Hz) desynchronization during all tone presentations. Synthetic aperture magnetometry indicated automatic primarily sensorimotor responses in short and pitch conditions, with activation specific to timing in bilateral ...

  2. Reduced object related negativity response indicates impaired auditory scene analysis in adults with autistic spectrum disorder

    Directory of Open Access Journals (Sweden)

    Veema Lodhia

    2014-02-01

    Full Text Available Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an Object Related Negativity (ORN) in the auditory event-related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention-dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults, and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.

  3. Compensation for Coarticulation: Disentangling Auditory and Gestural Theories of Perception of Coarticulatory Effects in Speech

    Science.gov (United States)

    Viswanathan, Navin; Magnuson, James S.; Fowler, Carol A.

    2010-01-01

    According to one approach to speech perception, listeners perceive speech by applying general pattern matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different…

  4. Hearing Aid-Induced Plasticity in the Auditory System of Older Adults: Evidence from Speech Perception

    Science.gov (United States)

    Lavie, Limor; Banai, Karen; Karni, Avi; Attias, Joseph

    2015-01-01

    Purpose: We tested whether using hearing aids can improve unaided performance in speech perception tasks in older adults with hearing impairment. Method: Unaided performance was evaluated in dichotic listening and speech-in-noise tests in 47 older adults with hearing impairment; 36 participants in 3 study groups were tested before hearing aid…

  5. Evaluating auditory perception and communication demands required to carry out work tasks and complimentary hearing resources and skills for older workers with hearing loss.

    Science.gov (United States)

    Jennings, M B; Shaw, L; Hodgins, H; Kuchar, D A; Bataghva, L Poost-Foroosh

    2010-01-01

    For older workers with acquired hearing loss, this loss as well as the changing nature of work and the workforce, may lead to difficulties and disadvantages in obtaining and maintaining employment. Currently there are very few instruments that can assist workplaces, employers and workers to prepare for older workers with hearing loss or with the evaluation of auditory perception demands of work, especially those relevant to communication, and safety sensitive workplaces that require high levels of communication. This paper introduces key theoretical considerations that informed the development of a new framework, The Audiologic Ergonomic (AE) Framework to guide audiologists, work rehabilitation professionals and workers in developing tools to support the identification and evaluation of auditory perception demands in the workplace, the challenges to communication and the subsequent productivity and safety in the performance of work duties by older workers with hearing loss. The theoretical concepts underpinning this framework are discussed along with next steps in developing tools such as the Canadian Hearing Demands Tool (C-HearD Tool) in advancing approaches to evaluate auditory perception and communication demands in the workplace.

  6. Crossmodal effects of Guqin and piano music on selective attention: an event-related potential study.

    Science.gov (United States)

    Zhu, Weina; Zhang, Junjun; Ding, Xiaojun; Zhou, Changle; Ma, Yuanye; Xu, Dan

    2009-11-27

    To compare the effects of music from different cultural environments (Guqin: Chinese music; piano: Western music) on crossmodal selective attention, behavioral and event-related potential (ERP) data in a standard two-stimulus visual oddball task were recorded from Chinese subjects in three conditions: silence, Guqin music or piano music background. Visual task data were then compared with auditory task data collected previously. In contrast with the results of the auditory task, the early (N1) and late (P300) stages exhibited no differences between Guqin and piano backgrounds during the visual task. Taking our previous study and this study together, we can conclude that: although the cultural-familiar music influenced selective attention both in the early and late stages, these effects appeared only within a sensory modality (auditory) but not in cross-sensory modalities (visual). Thus, the musical cultural factor is more obvious in intramodal than in crossmodal selective attention.

  7. Cranial pneumatization and auditory perceptions of the oviraptorid dinosaur Conchoraptor gracilis (Theropoda, Maniraptora) from the Late Cretaceous of Mongolia

    Science.gov (United States)

    Ramanah, D.; Raghunath, S.; Mee, D. J.; Rösgen, T.; Jacobs, P. A.

    2007-09-01

    The distribution of air-filled structures in the craniofacial and neurocranial bones of the oviraptorid ZPAL MgD-I/95, discovered at the Hermiin Tsav locality, Mongolia, is restored. Based on the complete obliteration of most of the cranial sutures, the specimen is identified as an adult individual of Conchoraptor gracilis Barsbold 1986. Except for the orbitosphenoids and epipterygoids, the preserved bones of the neurocranium are hollow. Three types of tympanic recess are present in Conchoraptor, a characteristic shared with troodontids, dromaeosaurids, and avian theropods. The contralateral middle ear cavities are interconnected by the supraencephalic pathway that passes through the dorsal tympanic recesses, the posterodorsal prootic sinuses and the parietal sinus. The spatial arrangements of the middle ear cavity and a derived neurocranial pneumatic system in Conchoraptor indicate enhancements of acoustic perception in the lower-frequency registers and of auditory directionality. We further speculate that this improvement of binaural hearing could be explained as an adaptation required for accurate detection of prey and/or predators under conditions of low illumination. The other potentially pneumatic structures of the Conchoraptor cranium include (1) recessus-like irregularities on the dorsal surface of the nasal and frontal bones (a putative oviraptorid synapomorphy; pos); (2) a subotic recess; (3) a sub-condylar recess; and (4) a posterior condylar recess (pos).

  8. Perception of parents about the auditory attention skills of his kid with cleft lip and palate: retrospective study

    Directory of Open Access Journals (Sweden)

    Mondelli, Maria Fernanda Capoani Garcia

    2012-01-01

    Full Text Available Introduction: Cognitive and neurophysiological mechanisms are necessary to process and decode acoustic stimulation. Auditory processing is influenced by higher-level cognitive factors such as memory, attention and learning. The sensory deprivation caused by conductive hearing loss, which is frequent in the population with cleft lip and palate, can affect many cognitive functions - among them attention - besides harming school, linguistic and interpersonal performance. Objective: To verify the perception of parents of children with cleft lip and palate regarding their children's auditory attention. Method: Retrospective study of infants with any type of cleft lip and palate, without any associated genetic syndrome, whose parents answered a questionnaire about auditory attention skills. Results: 44 children were male and 26 female; 35.71% of the answers were affirmative for hearing loss and 71.43% for otologic infections. Conclusion: Most of the interviewed parents reported at least one of the attention-related behaviors contained in the questionnaire, indicating that the presence of cleft lip and palate may be related to difficulties in auditory attention.

  9. Attentional Bias in Patients with Decompensated Tinnitus: Prima Facie Evidence from Event-Related Potentials.

    Science.gov (United States)

    Li, Zhicheng; Gu, Ruolei; Zeng, Xiangli; Zhong, Weifang; Qi, Min; Cen, Jintian

    2016-01-01

    Tinnitus refers to the auditory perception of sound in the absence of external sound or electric stimuli. The influence of tinnitus on cognitive processing is at the cutting edge of ongoing tinnitus research. In this study, we adopted an objective indicator of attentional processing, i.e. the mismatch negativity (MMN), to assess the attentional bias in patients with decompensated tinnitus. Three kinds of pure tones, D1 (8,000 Hz), S (8,500 Hz) and D2 (9,000 Hz), were used to induce event-related potentials (ERPs) in the normal ear. Employing the oddball paradigm, the task was divided into two blocks in which D1 and D2 were set as deviation stimuli, respectively. Only D2 induced a significant MMN in the tinnitus group, while neither D1 nor D2 was able to induce MMN in the control group. In addition, the ERPs in the left hemisphere, which were recorded within the time window of 90-150 ms (ERP 90-150 ms), were significantly higher than those in the right hemisphere in the tinnitus group, while no significant difference was observed in the control group. Lastly, the amplitude of ERP 90-150 ms in the tinnitus group was significantly higher than that in the control group. These findings suggest that patients with decompensated tinnitus showed automatic processing of acoustic stimuli, thereby indicating that these patients allocated more cognitive resources to acoustic stimulus processing. We suggest that the difficulty in disengaging or facilitated attention of patients might underlie this phenomenon. The limitations of the current study are discussed. © 2016 S. Karger AG, Basel.

  10. Functional-structural reorganisation of the neuronal network for auditory perception in subjects with unilateral hearing loss: Review of neuroimaging studies.

    Science.gov (United States)

    Heggdal, Peder O Laugen; Brännström, Jonas; Aarstad, Hans Jørgen; Vassbotn, Flemming S; Specht, Karsten

    2016-02-01

    This paper aims to provide a review of studies using neuroimaging to measure functional-structural reorganisation of the neuronal network for auditory perception after unilateral hearing loss. A literature search was performed in PubMed. Search criteria were peer-reviewed original research papers in English completed by the 11th of March 2015. Twelve studies were found to use neuroimaging in subjects with unilateral hearing loss. An additional five papers not identified by the literature search were provided by a reviewer. Thus, a total of 17 studies were included in the review. Four different neuroimaging methods were used in these studies: functional magnetic resonance imaging (fMRI) (n = 11), diffusion tensor imaging (DTI) (n = 4), T1/T2 volumetric images (n = 2), magnetic resonance spectroscopy (MRS) (n = 1). One study utilized two imaging methods (fMRI and T1 volumetric images). Neuroimaging techniques could provide valuable information regarding the effects of unilateral hearing loss on both auditory and non-auditory performance. fMRI studies showing a bilateral BOLD response in patients with unilateral hearing loss have not yet been followed by DTI studies confirming their microstructural correlates. In addition, the review shows that an auditory modality-specific deficit could affect multi-modal brain regions and their connections. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Experiments on Auditory-Visual Perception of Sentences by Users of Unilateral, Bimodal, and Bilateral Cochlear Implants

    Science.gov (United States)

    Dorman, Michael F.; Liss, Julie; Wang, Shuai; Berisha, Visar; Ludwig, Cimarron; Natale, Sarah Cook

    2016-01-01

    Purpose: Five experiments probed auditory-visual (AV) understanding of sentences by users of cochlear implants (CIs). Method: Sentence material was presented in auditory (A), visual (V), and AV test conditions to listeners with normal hearing and CI users. Results: (a) Most CI users report that most of the time, they have access to both A and V…

  12. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  13. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  14. Changes in event-related potential functional networks predict traumatic brain injury in piglets.

    Science.gov (United States)

    Atlan, Lorre S; Lan, Ingrid S; Smith, Colin; Margulies, Susan S

    2018-06-01

    Traumatic brain injury is a leading cause of cognitive and behavioral deficits in children in the US each year. None of the current diagnostic tools, such as quantitative cognitive and balance tests, have been validated to identify mild traumatic brain injury in infants, adults and animals. In this preliminary study, we report a novel, quantitative tool that has the potential to quickly and reliably diagnose traumatic brain injury and which can track the state of the brain during recovery across multiple ages and species. Using 32 scalp electrodes, we recorded involuntary auditory event-related potentials from 22 awake four-week-old piglets one day before and one, four, and seven days after two different injury types (diffuse and focal) or sham. From these recordings, we generated event-related potential functional networks and assessed whether the patterns of the observed changes in these networks could distinguish brain-injured piglets from non-injured. Piglet brains exhibited significant changes after injury, as evaluated by five network metrics. The injury prediction algorithm developed from our analysis of the changes in the event-related potential functional networks ultimately produced a tool with 82% predictive accuracy. This novel approach is the first application of auditory event-related potential functional networks to the prediction of traumatic brain injury. The resulting tool is a robust, objective and predictive method that offers promise for detecting mild traumatic brain injury, in particular because collecting event-related potential data is noninvasive and inexpensive. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
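
    The abstract does not spell out how the functional networks or the five metrics were computed, so the sketch below is only a generic illustration of the idea: build a channel-by-channel graph from correlations between ERP time courses and summarize it with a few common graph metrics (the correlation threshold and the particular metrics are assumptions, not the published pipeline).

```python
import numpy as np
import networkx as nx

def erp_network_metrics(erp, threshold=0.5):
    """Toy functional network from a trial-averaged ERP.

    erp : (n_channels, n_times) array. Channel pairs whose ERP time
    courses correlate above `threshold` are connected by an edge; the
    threshold and the metrics below are illustrative choices only.
    """
    corr = np.corrcoef(erp)
    adjacency = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adjacency, 0)
    graph = nx.from_numpy_array(adjacency)
    return {
        "mean_degree": float(np.mean([deg for _, deg in graph.degree()])),
        "mean_clustering": nx.average_clustering(graph),
        "global_efficiency": nx.global_efficiency(graph),
    }

# Hypothetical usage with a random 32-channel, 500-sample ERP.
metrics = erp_network_metrics(np.random.randn(32, 500))
```

    Comparing such metrics before and after injury, across animals, is the kind of feature set a classifier like the reported prediction algorithm could be trained on.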

  15. Event-related oscillations (EROs) and event-related potentials (ERPs) comparison in facial expression recognition.

    Science.gov (United States)

    Balconi, Michela; Pozzoli, Uberto

    2007-09-01

    The study aims to explore the significance of event-related potentials (ERPs) and event-related brain oscillations (EROs) (delta, theta, alpha, beta, gamma power) in response to emotional faces (fear, happiness, sadness) compared with neutral faces during the 180-250 ms post-stimulus time interval. The ERP results demonstrated that the emotional face elicited a negative peak at approximately 230 ms (N2). Moreover, EEG measures showed that the motivational significance of the face (emotional vs. neutral) could modulate the amplitude of EROs, but only for some frequency bands (i.e. theta and gamma bands). In a second phase, we considered the resemblance of the two EEG measures by a regression analysis. It revealed that theta and gamma oscillations mainly contribute to the oscillatory activity at the N2 latency. Finally, an increased posterior theta power was found for emotional faces.

  16. Sensory augmentation: integration of an auditory compass signal into human perception of space

    Science.gov (United States)

    Schumann, Frank; O’Regan, J. Kevin

    2017-01-01

    Bio-mimetic approaches to restoring sensory function show great promise in that they rapidly produce perceptual experience, but have the disadvantage of being invasive. In contrast, sensory substitution approaches are non-invasive, but may lead to cognitive rather than perceptual experience. Here we introduce a new non-invasive approach that leads to fast and truly perceptual experience like bio-mimetic techniques. Instead of building on existing circuits at the neural level as done in bio-mimetics, we piggy-back on sensorimotor contingencies at the stimulus level. We convey head orientation to geomagnetic North, a reliable spatial relation not normally sensed by humans, by mimicking sensorimotor contingencies of distal sounds via head-related transfer functions. We demonstrate rapid and long-lasting integration into the perception of self-rotation. Short training with amplified or reduced rotation gain in the magnetic signal can expand or compress the perceived extent of vestibular self-rotation, even with the magnetic signal absent in the test. We argue that it is the reliability of the magnetic signal that allows vestibular spatial recalibration, and the coding scheme mimicking sensorimotor contingencies of distal sounds that permits fast integration. Hence we propose that contingency-mimetic feedback has great potential for creating sensory augmentation devices that achieve fast and genuinely perceptual experiences. PMID:28195187

  17. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  18. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  19. Effects of white noise on event-related potentials in somatosensory Go/No-go paradigms.

    Science.gov (United States)

    Ohbayashi, Wakana; Kakigi, Ryusuke; Nakata, Hiroki

    2017-09-06

    Exposure to auditory white noise has been shown to facilitate human cognitive function. This phenomenon is termed stochastic resonance, and a moderate amount of auditory noise has been suggested to benefit individuals in hypodopaminergic states. The present study investigated the effects of white noise on the N140 and P300 components of event-related potentials in somatosensory Go/No-go paradigms. A Go or No-go stimulus was presented to the second or fifth digit of the left hand, respectively, at the same probability. Participants performed somatosensory Go/No-go paradigms while hearing three different white noise levels (45, 55, and 65 dB conditions). The peak amplitudes of Go-P300 and No-go-P300 in ERP waveforms were significantly larger under the 55 dB condition than under the 45 and 65 dB conditions. White noise did not affect the peak latency of N140 or P300, or the peak amplitude of N140. Behavioral data (reaction time, SD of reaction time, and error rates) showed no effect of white noise. This is the first event-related potential study to show that exposure to auditory white noise at 55 dB enhanced the amplitude of P300 during Go/No-go paradigms, reflecting changes in the neural activation of response execution and inhibition processing.

  20. Different event-related patterns of gamma-band power in brain waves of fast- and slow-reacting subjects.

    Science.gov (United States)

    Jokeit, H; Makeig, S

    1994-01-01

    Fast- and slow-reacting subjects exhibit different patterns of gamma-band electroencephalogram (EEG) activity when responding as quickly as possible to auditory stimuli. This result appears to confirm long-standing speculations of Wundt that fast- and slow-reacting subjects produce speeded reactions in different ways and demonstrates that analysis of event-related changes in the amplitude of EEG activity recorded from the human scalp can reveal information about event-related brain processes unavailable using event-related potential measures. Time-varying spectral power in a selected (35- to 43-Hz) gamma frequency band was averaged across trials in two experimental conditions: passive listening and speeded reacting to binaural clicks, forming 40-Hz event-related spectral responses. Factor analysis of between-subject event-related spectral response differences split subjects into two near-equal groups composed of faster- and slower-reacting subjects. In faster-reacting subjects, 40-Hz power peaked near 200 ms and 400 ms poststimulus in the react condition, whereas in slower-reacting subjects, 40-Hz power just before stimulus delivery was larger in the react condition. These group differences were preserved in separate averages of relatively long and short reaction-time epochs for each group. gamma-band (20-60 Hz)-filtered event-related potential response averages did not differ between the two groups or conditions. Because of this and because gamma-band power in the auditory event-related potential is small compared with the EEG, the observed event-related spectral response features must represent gamma-band EEG activity reliably induced by, but not phase-locked to, experimental stimuli or events. PMID:8022783

  1. Auditory post-processing in a passive listening task is deficient in Alzheimer's disease.

    Science.gov (United States)

    Bender, Stephan; Bluschke, Annet; Dippel, Gabriel; Rupp, André; Weisbrod, Matthias; Thomas, Christine

    2014-01-01

    To investigate whether automatic auditory post-processing is deficient in patients with Alzheimer's disease and is related to sensory gating. Event-related potentials were recorded during a passive listening task to examine the automatic transient storage of auditory information (short click pairs). Patients with Alzheimer's disease were compared to a healthy age-matched control group. A young healthy control group was included to assess effects of physiological aging. A bilateral frontal negativity in combination with deep temporal positivity occurring 500 ms after stimulus offset was reduced in patients with Alzheimer's disease, but was unaffected by physiological aging. Its amplitude correlated with short-term memory capacity, but was independent of sensory gating in healthy elderly controls. Source analysis revealed a dipole pair in the anterior temporal lobes. Results suggest that auditory post-processing is deficient in Alzheimer's disease, but is not typically related to sensory gating. The deficit could neither be explained by physiological aging nor by problems in earlier stages of auditory perception. Correlations with short-term memory capacity and executive control tasks suggested an association with memory encoding and/or overall cognitive control deficits. An auditory late negative wave could represent a marker of auditory working memory encoding deficits in Alzheimer's disease. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  2. P300 component of event-related potentials in persons with asperger disorder.

    Science.gov (United States)

    Iwanami, Akira; Okajima, Yuka; Ota, Haruhisa; Tani, Masayuki; Yamada, Takashi; Yamagata, Bun; Hashimoto, Ryuichiro; Kanai, Chieko; Takashio, Osamu; Inamoto, Atsuko; Ono, Taisei; Takayama, Yukiko; Kato, Nobumasa

    2014-10-01

    In the present study, we investigated auditory event-related potentials in adults with Asperger disorder and normal controls using an auditory oddball task and a novelty oddball task. Task performance and the latencies of P300 evoked by both target and novel stimuli in the two tasks did not differ between the two groups. Analysis of variance revealed that there was a significant interaction effect between group and electrode site on the mean amplitude of the P300 evoked by novel stimuli, which indicated that there was an altered distribution of the P300 in persons with Asperger disorder. In contrast, there was no significant interaction effect on the mean P300 amplitude elicited by target stimuli. Considering that P300 comprises two main subcomponents, frontal-central-dominant P3a and parietal-dominant P3b, our results suggested that persons with Asperger disorder have enhanced amplitude of P3a, which indicated activated prefrontal function in this task.

  3. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi eToida

    2016-01-01

    Full Text Available Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with a reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.

  4. Event-related potentials dissociate perceptual from response-related age effects in visual search

    DEFF Research Database (Denmark)

    Wiegand, Iris; Müller, Hermann J.; Finke, Kathrin

    2013-01-01

    measures with lateralized event-related potentials of younger and older adults performing a compound-search task, in which the target-defining dimension of a pop-out target (color/shape) and the response-critical target feature (vertical/horizontal stripes) varied independently across trials. Slower...... responses in older participants were associated with age differences in all analyzed event-related potentials from perception to response, indicating that behavioral slowing originates from multiple stages within the information-processing stream. Furthermore, analyses of carry-over effects from one trial...

  5. Cognitive event-related potentials in comatose and post-comatose states.

    Science.gov (United States)

    Vanhaudenhuyse, Audrey; Laureys, Steven; Perrin, Fabien

    2008-01-01

    We review the usefulness of cognitive event-related potentials (ERPs) in comatose, vegetative, or minimally conscious patients. Auditory cognitive ERPs are useful to investigate residual cognitive functions, such as echoic memory (MMN), acoustical and semantic discrimination (P300), and incongruent language detection (N400). While early ERPs (such as the absence of cortical responses on somatosensory evoked potentials) predict poor outcome, cognitive ERPs (MMN and P300) are indicative of recovery of consciousness. In coma survivors, cognitive potentials are more frequently obtained when using stimuli that are more ecological or have an emotional content (such as the patient's own name) than when using classical sine tones.

  6. Spatial auditory attention is modulated by tactile priming.

    Science.gov (United States)

    Menning, Hans; Ackermann, Hermann; Hertrich, Ingo; Mathiak, Klaus

    2005-07-01

    Previous studies have shown that cross-modal processing affects perception at a variety of neuronal levels. In this study, event-related brain responses were recorded via whole-head magnetoencephalography (MEG). Spatial auditory attention was directed via tactile pre-cues (primes) to one of four locations in the peripersonal space (left and right hand versus face). Auditory stimuli were white noise bursts, convolved with head-related transfer functions, which ensured spatial perception of the four locations. Tactile primes (200-300 ms prior to acoustic onset) were applied randomly to one of these locations. Attentional load was controlled by three different visual distraction tasks. The auditory P50m (about 50 ms after stimulus onset) showed a significant "proximity" effect (larger responses to face stimulation) as well as a "contralaterality" effect between side of stimulation and hemisphere. The tactile primes essentially reduced both the P50m and N100m components. However, facial tactile pre-stimulation yielded an enhanced ipsilateral N100m. These results show that earlier responses are mainly governed by exogenous stimulus properties whereas cross-sensory interaction is spatially selective at a later (endogenous) processing stage.
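
    As an aside on how such spatialized stimuli are typically produced, the sketch below convolves a noise burst with a left/right pair of head-related impulse responses. The impulse responses here are random placeholders; a real setup would load HRIRs measured for the desired locations (e.g. the hand and face positions used in the study).

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
burst = np.random.randn(int(0.1 * fs))  # 100-ms white-noise burst

# Placeholder head-related impulse responses (HRIRs) for one location;
# a real experiment would load HRIRs measured for the positions of interest.
decay = np.exp(-np.arange(256) / 50.0)
hrir_left = np.random.randn(256) * decay
hrir_right = np.random.randn(256) * decay

# Convolving the same burst with the left- and right-ear HRIRs imposes
# the interaural time and level differences that make the burst appear
# to come from the measured position.
left_channel = fftconvolve(burst, hrir_left)
right_channel = fftconvolve(burst, hrir_right)
binaural = np.stack([left_channel, right_channel])
```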

  7. Neurophysiological Effects of Meditation Based on Evoked and Event Related Potential Recordings.

    Science.gov (United States)

    Singh, Nilkamal; Telles, Shirley

    2015-01-01

    Evoked potentials (EPs) are a relatively noninvasive method to assess the integrity of sensory pathways. As the neural generators for most of the components are relatively well worked out, EPs have been used to understand the changes occurring during meditation. Event-related potentials (ERPs) yield useful information about the response to tasks, usually assessing attention. A brief review of the literature yielded eleven studies on EPs and seventeen on ERPs from 1978 to 2014. The EP studies covered short, mid, and long latency EPs, using both auditory and visual modalities. ERP studies reported the effects of meditation on tasks such as the auditory oddball paradigm, the attentional blink task, mismatch negativity, and affective picture viewing among others. Both EPs and ERPs were recorded in several meditations detailed in the review. Maximum changes occurred in mid latency (auditory) EPs suggesting that maximum changes occur in the corresponding neural generators in the thalamus, thalamic radiations, and primary auditory cortical areas. ERP studies showed meditation can increase attention and enhance efficiency of brain resource allocation with greater emotional control.

  8. Neurophysiological Effects of Meditation Based on Evoked and Event Related Potential Recordings

    Science.gov (United States)

    Singh, Nilkamal; Telles, Shirley

    2015-01-01

    Evoked potentials (EPs) are a relatively noninvasive method to assess the integrity of sensory pathways. As the neural generators for most of the components are relatively well worked out, EPs have been used to understand the changes occurring during meditation. Event-related potentials (ERPs) yield useful information about the response to tasks, usually assessing attention. A brief review of the literature yielded eleven studies on EPs and seventeen on ERPs from 1978 to 2014. The EP studies covered short, mid, and long latency EPs, using both auditory and visual modalities. ERP studies reported the effects of meditation on tasks such as the auditory oddball paradigm, the attentional blink task, mismatch negativity, and affective picture viewing among others. Both EPs and ERPs were recorded in several meditations detailed in the review. Maximum changes occurred in mid latency (auditory) EPs suggesting that maximum changes occur in the corresponding neural generators in the thalamus, thalamic radiations, and primary auditory cortical areas. ERP studies showed meditation can increase attention and enhance efficiency of brain resource allocation with greater emotional control. PMID:26137479

  9. Emotion and attention: event-related brain potential studies.

    Science.gov (United States)

    Schupp, Harald T; Flaisch, Tobias; Stockburger, Jessica; Junghöfer, Markus

    2006-01-01

    Emotional pictures guide selective visual attention. A series of event-related brain potential (ERP) studies is reviewed demonstrating the consistent and robust modulation of specific ERP components by emotional images. Specifically, pictures depicting natural pleasant and unpleasant scenes are associated with an increased early posterior negativity, late positive potential, and sustained positive slow wave compared with neutral contents. These modulations are considered to index different stages of stimulus processing including perceptual encoding, stimulus representation in working memory, and elaborate stimulus evaluation. Furthermore, the review includes a discussion of studies exploring the interaction of motivated attention with passive and active forms of attentional control. Recent research is reviewed exploring the selective processing of emotional cues as a function of stimulus novelty, emotional prime pictures, learned stimulus significance, and in the context of explicit attention tasks. It is concluded that ERP measures are useful to assess the emotion-attention interface at the level of distinct processing stages. Results are discussed within the context of two-stage models of stimulus perception brought out by studies of attention, orienting, and learning.

  10. Mathematical model for space perception to explain auditory horopter curves; Chokaku horopter wo setsumeisuru kukan ichi chikaku model

    Energy Technology Data Exchange (ETDEWEB)

    Okura, M. [Dynax Co., Tokyo (Japan); Maeda, T.; Tachi, S. [The University of Tokyo, Tokyo (Japan). Faculty of Engineering

    1998-10-31

    For binocular visual space, the horizontal line seen as a straight line on the subjective frontoparallel plane does not always agree with the physically straight line, and its shape depends on the distance from the observer. This phenomenon is known as the Helmholtz horopter. The same phenomenon may occur also in binaural auditory space, depending on the distance to the acoustic source. This paper formulates a scalar addition model that explains the auditory horopter by using two items of information: sound pressure and interaural time difference. Furthermore, this model was used to perform simulations on different learning domains, and the following results were obtained. It was verified that the distance dependence of the auditory horopter can be explained by the above scalar addition model, and that differences in horopter shape among subjects may be explained by individual differences in the learning domain of spatial position recognition. In addition, such an auditory model was shown not to include distances as short as those in the learning domain of the auditory horopter model. 21 refs., 6 figs.
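
    The record names the two cues but not the paper's actual equations, so the following is only a generic sketch of what a scalar addition of a sound-pressure cue and an interaural-time-difference cue could look like; the mapping functions and weights are invented placeholders, not the published model.

```python
import numpy as np

def perceived_position(pressure_db, itd_ms, w_pressure=0.5, w_itd=0.5):
    """Generic scalar-addition sketch (not the paper's actual model).

    pressure_db : sound-pressure cue at the ears (dB)
    itd_ms      : interaural time difference (ms)
    Each cue is mapped to a scalar position estimate and the two are
    summed with weights; in the paper such weights and mappings would be
    shaped by the learning domain, which is what makes the predicted
    auditory horopter depend on source distance.
    """
    pressure_cue = 0.05 * np.asarray(pressure_db)   # placeholder mapping
    itd_cue = 1.0 * np.asarray(itd_ms)              # placeholder mapping
    return w_pressure * pressure_cue + w_itd * itd_cue
```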

  11. The musical centers of the brain: Vladimir E. Larionov (1857-1929) and the functional neuroanatomy of auditory perception.

    Science.gov (United States)

    Triarhou, Lazaros C; Verina, Tatyana

    2016-11-01

    In 1899 a landmark paper entitled "On the musical centers of the brain" was published in Pflügers Archiv, based on work carried out in the Anatomo-Physiological Laboratory of the Neuropsychiatric Clinic of Vladimir M. Bekhterev (1857-1927) in St. Petersburg, Imperial Russia. The author of that paper was Vladimir E. Larionov (1857-1929), a military doctor and devoted brain scientist, who pursued the problem of the localization of function in the canine and human auditory cortex. His data detailed the existence of tonotopy in the temporal lobe and further demonstrated centrifugal auditory pathways emanating from the auditory cortex and directed to the opposite hemisphere and lower brain centers. Larionov's discoveries have been largely considered as findings of the Bekhterev school. Perhaps this is why there are limited resources on Larionov, especially keeping in mind his military medical career and the fact that after 1917 he just seems to have practiced otorhinolaryngology in Odessa. Larionov died two years after Bekhterev's mysterious death of 1927. The present study highlights the pioneering contributions of Larionov to auditory neuroscience, trusting that the life and work of Vladimir Efimovich will finally, and deservedly, emerge from the shadow of his celebrated master, Vladimir Mikhailovich. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Development of Attentional Control of Verbal Auditory Perception from Middle to Late Childhood: Comparisons to Healthy Aging

    Science.gov (United States)

    Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen

    2013-01-01

    Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…

  13. Only low frequency event-related EEG activity is compromised in multiple sclerosis: insights from an independent component clustering analysis.

    Directory of Open Access Journals (Sweden)

    Hanni Kiiski

    Full Text Available Cognitive impairment (CI), often examined with neuropsychological tests such as the Paced Auditory Serial Addition Test (PASAT), affects approximately 65% of multiple sclerosis (MS) patients. The P3b event-related potential (ERP), evoked when an infrequent target stimulus is presented, indexes cognitive function and is typically compared across subjects' scalp electroencephalography (EEG) data. However, the clustering of independent components (ICs) is superior to scalp-based EEG methods because it can accommodate the spatiotemporal overlap inherent in scalp EEG data. Event-related spectral perturbations (ERSPs; event-related mean power spectral changes) and inter-trial coherence (ITCs; event-related consistency of spectral phase) reveal a more comprehensive overview of EEG activity. Ninety-five subjects (56 MS patients, 39 controls) completed visual and auditory two-stimulus P3b event-related potential tasks and the PASAT. MS patients were also divided into CI and non-CI groups (n = 18 in each) based on PASAT scores. Data were recorded from 128 scalp EEG channels, and 4 IC clusters in the visual, and 5 IC clusters in the auditory, modality were identified. In general, MS patients had significantly reduced ERSP theta power versus controls, and a similar pattern was observed for CI vs. non-CI MS patients. The ITC measures were also significantly different in the theta band for some clusters. The finding that MS patients had reduced P3b task-related theta power in both modalities is a reflection of compromised connectivity, likely due to demyelination, that may have disrupted early processes essential to P3b generation, such as orientating and signal detection. However, for posterior sources, MS patients had a greater decrease in alpha power, normally associated with enhanced cognitive function, which may reflect a compensatory mechanism in response to the compromised early cognitive processing.
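
    For readers unfamiliar with these two measures, the sketch below shows how ERSP and ITC can be computed for one channel or IC from a complex time-frequency decomposition of single-trial EEG; the array shapes, baseline window, and random input are illustrative assumptions rather than the study's pipeline.

```python
import numpy as np

def ersp_and_itc(tfr, baseline_idx):
    """ERSP and ITC from a complex time-frequency decomposition.

    tfr : complex array (n_trials, n_freqs, n_times), e.g. single-trial
          Morlet wavelet coefficients for one channel or IC cluster.
    baseline_idx : slice of pre-stimulus time samples used as baseline.
    """
    power = np.abs(tfr) ** 2
    mean_power = power.mean(axis=0)                      # (n_freqs, n_times)
    baseline = mean_power[:, baseline_idx].mean(axis=1, keepdims=True)
    ersp_db = 10.0 * np.log10(mean_power / baseline)     # event-related spectral perturbation

    phase = tfr / np.abs(tfr)                            # unit phase vectors per trial
    itc = np.abs(phase.mean(axis=0))                     # inter-trial coherence in [0, 1]
    return ersp_db, itc

# Hypothetical usage: 60 trials, 30 frequencies, 500 time samples.
tfr = np.random.randn(60, 30, 500) + 1j * np.random.randn(60, 30, 500)
ersp_db, itc = ersp_and_itc(tfr, baseline_idx=slice(0, 100))
```

    The theta-band group differences described above would then correspond to comparing these ERSP and ITC maps, averaged over the theta band, between patients and controls.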

  14. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  15. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  16. [Differential effects of attention deficit/hyperactivity disorder subtypes in event-related potentials].

    Science.gov (United States)

    Tamayo-Orrego, Lukas; Osorio Forero, Alejandro; Quintero Giraldo, Lina Paola; Parra Sánchez, José Hernán; Varela, Vilma; Restrepo, Francia

    2015-01-01

    To better understand the neurophysiological substrates of attention deficit/hyperactivity disorder (ADHD), a study was performed on event-related potentials (ERPs) in Colombian patients with inattentive and combined ADHD. A case-control, cross-sectional study was designed. The sample was composed of 180 subjects between 5 and 15 years of age (mean, 9.25±2.6), from local schools in Manizales. The sample was divided equally into ADHD and control groups, and the subjects were paired by age and gender. The diagnosis was made using the DSM-IV-TR criteria, the Conners and WISC-III tests, a psychiatric interview (MINIKID), and a medical evaluation. ERPs were recorded in a visual and auditory passive oddball paradigm. Latency and amplitude of the N100, N200 and P300 components for common and rare stimuli were used for statistical comparisons. ADHD subjects showed differences in N200 amplitude and P300 latency in the auditory task. The N200 amplitude was reduced in response to visual stimuli. ADHD subjects with combined symptoms showed a delayed P300 in response to auditory stimuli, whereas inattentive subjects exhibited differences in the amplitude of N100 and N200. Combined ADHD patients showed longer N100 latency and smaller N200-P300 amplitude compared to inattentive ADHD subjects. The results show differences in event-related potentials between combined and inattentive ADHD subjects. Copyright © 2014 Asociación Colombiana de Psiquiatría. Published by Elsevier España. All rights reserved.

  17. Neural dynamics underlying attentional orienting to auditory representations in short-term memory.

    Science.gov (United States)

    Backer, Kristina C; Binns, Malcolm A; Alain, Claude

    2015-01-21

    Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors 0270-6474/15/351307-12$15.00/0.
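
    As an illustration of the kind of measure reported above, the sketch below computes an event-related desynchronization value (percent alpha/low-beta power change relative to a pre-cue baseline) for each subject and correlates it with task performance across subjects. It is not the authors' pipeline; the band, the time windows, and the synthetic data are assumptions.

```python
import numpy as np

def erd(band_power, times, baseline=(-0.5, 0.0), task=(0.2, 0.8)):
    """Event-related desynchronization for one subject: percent power change
    in a task window relative to a pre-cue baseline (band power averaged over
    trials and channels; windows are illustrative)."""
    b = band_power[(times >= baseline[0]) & (times < baseline[1])].mean()
    t = band_power[(times >= task[0]) & (times < task[1])].mean()
    return 100.0 * (t - b) / b          # negative values = power suppression

# Synthetic per-subject values standing in for real data
rng = np.random.default_rng(4)
times = np.linspace(-0.5, 1.0, 751)
erd_vals, accuracy = [], []
for s in range(20):
    power = 1.0 - 0.3 * ((times > 0.2) & (times < 0.8)) + 0.05 * rng.standard_normal(times.size)
    erd_vals.append(erd(power, times))
    accuracy.append(0.75 + 0.05 * rng.standard_normal())
r = np.corrcoef(erd_vals, accuracy)[0, 1]   # brain-behaviour correlation across subjects
print(r)
```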

  18. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  19. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  20. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    Science.gov (United States)

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. Copyright © 2015 the American Physiological Society.

  1. ERP evidence that auditory-visual speech facilitates working memory in younger and older adults.

    Science.gov (United States)

    Frtusova, Jana B; Winneke, Axel H; Phillips, Natalie A

    2013-06-01

    Auditory-visual (AV) speech enhances speech perception and facilitates auditory processing, as measured by event-related brain potentials (ERPs). Considering a perspective of shared resources between perceptual and cognitive processes, facilitated speech perception may render more resources available for higher-order functions. This study examined whether AV speech facilitation leads to better working memory (WM) performance in 23 younger and 20 older adults. Participants completed an n-back task (0- to 3-back) under visual-only (V-only), auditory-only (A-only), and AV conditions. The results showed faster responses across all memory loads and improved accuracy in the most demanding conditions (2- and 3-back) during AV compared with unisensory conditions. Older adults benefited from the AV presentation to the same extent as younger adults. WM performance of older adults during the AV presentation did not differ from that of younger adults in the A-only condition, suggesting that an AV presentation can help to counteract some of the age-related WM decline. The ERPs showed a decrease in the auditory N1 amplitude during the AV compared with A-only presentation in older adults, suggesting that the facilitation of perceptual processing becomes especially beneficial with aging. Additionally, the N1 occurred earlier in the AV than in the A-only condition for both age groups. These AV-induced modulations of auditory processing correlated with improvement in certain behavioral and ERP measures of WM. These results support an integrated model between perception and cognition, and suggest that processing speech under AV conditions enhances WM performance of both younger and older adults. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. Cranial pneumatization and auditory perceptions of the oviraptorid dinosaur Conchoraptor gracilis (Theropoda, Maniraptora) from the Late Cretaceous of Mongolia

    Czech Academy of Sciences Publication Activity Database

    Kundrát, M.; Janáček, Jiří

    2007-01-01

    Vol. 94, No. 9 (2007), pp. 769-778 ISSN 0028-1042 Institutional research plan: CEZ:AV0Z50110509 Keywords: Conchoraptor * neurocranium * acoustic perception Subject RIV: EG - Zoology Impact factor: 1.955, year: 2007

  3. Intentional switching in auditory selective attention: Exploring age-related effects in a spatial setup requiring speech perception.

    Science.gov (United States)

    Oberem, Josefa; Koch, Iring; Fels, Janina

    2017-06-01

    Using a binaural-listening paradigm, age-related differences in the ability to intentionally switch auditory selective attention between two speakers, defined by their spatial location, were examined. To this end, 40 normal-hearing participants (20 young, mean age 24.8 years; 20 older, mean age 67.8 years) were tested. The spatial reproduction of stimuli was provided via headphones using head-related transfer functions of an artificial head. Spoken number words of two speakers were presented simultaneously to participants from two out of eight locations in the horizontal plane. Guided by a visual cue indicating the spatial location of the target speaker, participants were asked to categorize the target's number word as smaller or greater than five while ignoring the distractor's speech. Results showed significantly higher reaction times and error rates for older participants. The relative influence of a spatial switch of the target speaker (switch or repetition of the speaker's direction in space) was identical across age groups. Congruency effects (stimuli spoken by target and distractor may evoke the same answer or different answers) were increased for older participants and depended on the target's position. Results suggest that the ability to intentionally switch auditory attention to a newly cued location was unimpaired, whereas it was generally harder for older participants to suppress processing of the distractor's speech. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Dysfunctions of visual and auditory Gestalt perception (amusia) after stroke : Behavioral correlates and functional magnetic resonance imaging

    OpenAIRE

    Rosemann, Stephanie Heike

    2016-01-01

    Music is a special and unique part of human nature. Not only actively playing music (in a group or alone) but also passively listening to it involves a wealth of processes, making music an ideal tool for investigating how the human brain works. Acquired amusia denotes the impaired perception of melodies and rhythms, and the associated inability to enjoy music, which can occur after a stroke. Many amusia patients also show deficits in visual perception, language, memory, and attention. He...

  5. Blind Source Separation of Event-Related EEG/MEG.

    Science.gov (United States)

    Metsomaa, Johanna; Sarvas, Jukka; Ilmoniemi, Risto Juhani

    2017-09-01

    Blind source separation (BSS) can be used to decompose complex electroencephalography (EEG) or magnetoencephalography data into simpler components based on statistical assumptions without using a physical model. Applications include brain-computer interfaces, artifact removal, and identifying parallel neural processes. We wish to address the issue of applying BSS to event-related responses, which is challenging because of nonstationary data. We introduce a new BSS approach called momentary-uncorrelated component analysis (MUCA), which is tailored for event-related multitrial data. The method is based on approximate joint diagonalization of multiple covariance matrices estimated from the data at separate latencies. We further show how to extend the methodology for autocovariance matrices and how to apply BSS methods suitable for piecewise stationary data to event-related responses. We compared several BSS approaches by using simulated EEG as well as measured somatosensory and transcranial magnetic stimulation (TMS) evoked EEG. Among the compared methods, MUCA was the most tolerant one to noise, TMS artifacts, and other challenges in the data. With measured somatosensory data, over half of the estimated components were found to be similar by MUCA and independent component analysis. MUCA was also stable when tested with several input datasets. MUCA is based on simple assumptions, and the results suggest that MUCA is robust with nonideal data. Event-related responses and BSS are valuable and popular tools in neuroscience. Correctly designed BSS is an efficient way of identifying artifactual and neural processes from nonstationary event-related data.
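
    MUCA itself relies on approximate joint diagonalization of covariance matrices estimated at many latencies; the exact algorithm is not reproduced here. As a simplified, hedged illustration of the underlying idea, the sketch below estimates channel covariances across trials at two latencies and diagonalizes both at once with a generalized eigendecomposition (exact joint diagonalization is only guaranteed for two matrices).

```python
import numpy as np
from scipy.linalg import eigh

def covariance_at_latency(epochs, t_idx):
    """Channel-by-channel covariance across trials at one latency (sample index).
    epochs: array of shape (n_trials, n_channels, n_times)."""
    x = epochs[:, :, t_idx]                    # (n_trials, n_channels)
    return np.cov(x, rowvar=False)

# Toy data standing in for event-related EEG epochs
rng = np.random.default_rng(1)
epochs = rng.standard_normal((200, 16, 300))

# Covariances at two illustrative latencies; a real pipeline would use many more
c_a = covariance_at_latency(epochs, 80)
c_b = covariance_at_latency(epochs, 200)

# Generalized eigendecomposition diagonalizes both matrices simultaneously
eigvals, unmixing = eigh(c_a, c_b)             # columns of `unmixing` = unmixing vectors
components = np.einsum('ck,tcs->tks', unmixing, epochs)  # per-trial component time courses
print(components.shape)                        # (200, 16, 300)
```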

  6. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active "working memory" bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  7. Spatial and Semantic Processing between Audition and Vision: An Event-Related Potential Study

    Directory of Open Access Journals (Sweden)

    Xiaoxi Chen

    2011-10-01

    Full Text Available Using a crossmodal priming paradigm, this study investigated how the brain binds spatial and semantic features in multisensory processing. The visual stimuli (pictures of animals) were presented after the auditory stimuli (sounds of animals), and the stimuli from the different modalities could match spatially (or semantically) or not. Participants were required to detect the head orientation of the visual target (an oddball paradigm). The event-related potentials (ERPs) to the visual stimuli were enhanced by spatial attention (150–170 ms) irrespective of semantic information. The early crossmodal attention effect for the visual stimuli was more negative in the spatial-congruent condition than in the spatial-incongruent condition. By contrast, the later effects on spatial ERPs were significant only for the semantic-congruent condition (250–300 ms). These findings indicated that spatial attention modulated early visual processing, and that semantic and spatial features were simultaneously used to orient attention and modulate later processing stages.

  8. Event-related brain potentials reflect traces of echoic memory in humans.

    Science.gov (United States)

    Winkler, I; Reinikainen, K; Näätänen, R

    1993-04-01

    In sequences of identical auditory stimuli, infrequent deviant stimuli elicit an event-related brain potential component called mismatch negativity (MMN). MMN is presumed to reflect the existence of a memory trace of the frequent stimulus at the moment of presentation of the infrequent stimulus. This hypothesis was tested by applying the recognition-masking paradigm of cognitive psychology. In this paradigm, a masking sound presented shortly before or after a test stimulus diminishes the recognition memory of this stimulus, the more so the shorter the interval between the test and masking stimuli. This interval was varied in the present study. It was found that the MMN amplitude strongly correlated with the subject's ability to discriminate between frequent and infrequent stimuli. This result strongly suggests that MMN provides a measure for a trace of sensory memory, and further, that with MMN, this memory can be studied without performance-related distortions.
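
    For readers unfamiliar with the measure, the MMN is conventionally quantified from the deviant-minus-standard difference wave. The sketch below is a generic illustration of that computation, not the study's analysis; the latency window, channel choice, and toy data are assumptions.

```python
import numpy as np

def mmn_amplitude(std_epochs, dev_epochs, times, window=(0.1, 0.2)):
    """Mean amplitude of the deviant-minus-standard difference wave in a
    typical MMN latency window (seconds). Epochs: (n_trials, n_times) for
    one fronto-central channel, baseline-corrected."""
    diff = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)   # difference wave
    mask = (times >= window[0]) & (times <= window[1])
    return diff[mask].mean()

times = np.linspace(-0.1, 0.5, 601)
rng = np.random.default_rng(2)
std = rng.standard_normal((400, 601))
dev = rng.standard_normal((80, 601)) - 1.5 * np.exp(-((times - 0.15) / 0.03) ** 2)  # toy negativity
print(mmn_amplitude(std, dev, times))
```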

  9. Hostile attribution biases for relationally provocative situations and event-related potentials.

    Science.gov (United States)

    Godleski, Stephanie A; Ostrov, Jamie M; Houston, Rebecca J; Schlienz, Nicolas J

    2010-04-01

    This exploratory study investigates how hostile attribution biases for relationally provocative situations may be related to neurocognitive processing using the P300 event-related potential. Participants were 112 (45 women) emerging adults enrolled in a large, public university in upstate New York. Participants completed self-report measures on relational aggression and hostile attribution biases and performed an auditory perseveration task to elicit the P300. It was found that hostile attribution biases for relational provocation situations were associated with a larger P300 amplitude above and beyond the role of hostile attribution biases for instrumental situations, relational aggression, and gender. Larger P300 amplitude is interpreted to reflect greater allocation of cognitive resources or enhanced "attending" to salient stimuli. Implications for methodological approaches to studying aggression and hostile attribution biases and for theory are discussed, as well as implications for the fields of developmental psychology and psychopathology. Copyright 2010 Elsevier B.V. All rights reserved.

  10. Intracerebral Event-related Potentials to Subthreshold Target Stimuli

    Czech Academy of Sciences Publication Activity Database

    Brázdil, M.; Rektor, I.; Daniel, P.; Dufek, M.; Jurák, Pavel

    2001-01-01

    Vol. 112, No. 4 (2001), pp. 650-661 ISSN 1388-2457 R&D Projects: GA ČR GA309/98/0490 Institutional research plan: CEZ:AV0Z2065902 Keywords: event-related potentials * intracerebral recordings * oddball paradigm Subject RIV: FA - Cardiovascular Diseases incl. Cardiothoracic Surgery Impact factor: 1.922, year: 2001

  11. The Role of Sensory Perception, Emotionality and "Lifeworld" in Auditory Word Processing: Evidence from Congenital Blindness and Synesthesia

    Science.gov (United States)

    Papadopoulos, Judith; Domahs, Frank; Kauschke, Christina

    2017-01-01

    Although it has been established that human beings process concrete and abstract words differently, it is still a matter of debate what factors contribute to this difference. Since concrete concepts are closely tied to sensory perception, perceptual experience seems to play an important role in their processing. The present study investigated the…

  12. Acute low-level alcohol consumption reduces phase locking of event-related oscillations in rodents.

    Science.gov (United States)

    Amodeo, Leslie R; Wills, Derek N; Ehlers, Cindy L

    2017-07-14

    Event-related oscillations (EROs) are rhythmic changes that are evoked by a sensory and/or cognitive stimulus that can influence the dynamics of the EEG. EROs are defined by the decomposition of the EEG signal into magnitude (energy) and phase information and can be elicited in both humans and animals. EROs have been linked to several relevant genes associated with ethanol dependence phenotypes in humans and are altered in selectively bred alcohol-preferring rats. However, pharmacological studies are only beginning to emerge investigating the impact low intoxicating doses of ethanol can have on event-related neural oscillations. The main goal of this study was to investigate the effects of low levels of voluntary consumption of ethanol, in rats, on phase locking of EROs in order to give further insight into the acute intoxicating effects of ethanol on the brain. To this end, we allow rats to self-administer unsweetened 20% ethanol over 15 intermittent sessions. This method results in a stable low-dose consumption of ethanol. Using an auditory event-related potential "oddball" paradigm, we investigated the effects of alcohol on the phase variability of EROs from electrodes implanted into the frontal cortex, dorsal hippocampus, and amygdala. We found that intermittent ethanol self-administration was sufficient to produce a significant reduction in overall intraregional synchrony across all targeted regions. These data suggest that phase locking of EROs within brain regions known to be impacted by alcohol may represent a sensitive biomarker of low levels of alcohol intoxication. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Timely event-related synchronization fading and phase de-locking and their defects in migraine.

    Science.gov (United States)

    Yum, Myung-Kul; Moon, Jin-Hwa; Kang, Joong Koo; Kwon, Oh-Young; Park, Ki-Jong; Shon, Young-Min; Lee, Il Keun; Jung, Ki-Young

    2014-07-01

    To investigate the characteristics of event-related synchronization (ERS) fading and phase de-locking of alpha waves during passive auditory stimulation (PAS) in the migraine patients. The subjects were 16 adult women with migraine and 16 normal controls. Electroencephalographic (EEG) data obtained during PAS with standard (SS) and deviant stimuli (DS) were used. Alpha ERS fading, the phase locking index (PLI) and de-locking index (DLI) were evaluated from the 10 Hz complex Morlet wavelet components at 100 ms (t100) and 300 ms (t300) after PAS. At t100, significant ERS was found with SS and DS in the migraineurs and controls (P=0.000). At t300 in the controls, ERS faded to zero for DS while in the migraineurs there was no fading for DS. In both groups the PLI for SS and DS was significantly reduced, i.e. de-locked, at t300 compared to t100 (P=0.000). In the migraineurs, the DLI for DS was significantly lower than in the controls (P=0.003). The alpha ERS fading and phase de-locking are defective in migraineurs during passive auditory cognitive processing. The defects in timely alpha ERS fading and in de-locking may play a role in the different attention processing in migraine patients. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
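
    The phase-locking index used above is commonly defined as the magnitude of the trial-averaged unit-length phase vector at a given time-frequency point (0 = random phase across trials, 1 = perfect locking). The sketch below illustrates that definition on synthetic Morlet coefficients; the de-locking measure shown is only one plausible reading of the abstract's DLI, not necessarily the authors' exact formula.

```python
import numpy as np

def phase_locking_index(coeffs):
    """Phase-locking index across trials for one time-frequency point.
    coeffs: complex wavelet coefficients, one value per trial."""
    unit = coeffs / np.abs(coeffs)       # keep phase only
    return np.abs(unit.mean())           # 1 = identical phase in every trial

rng = np.random.default_rng(3)
n_trials = 100
c_t100 = np.exp(1j * rng.normal(0.0, 0.4, n_trials))        # tightly clustered phases
c_t300 = np.exp(1j * rng.uniform(-np.pi, np.pi, n_trials))   # near-random phases

pli_100, pli_300 = phase_locking_index(c_t100), phase_locking_index(c_t300)
de_locking = pli_100 - pli_300   # one plausible "de-locking" measure (assumed, not the paper's formula)
print(pli_100, pli_300, de_locking)
```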

  14. Event-related potentials and secondary task performance during simulated driving.

    Science.gov (United States)

    Wester, A E; Böcker, K B E; Volkerts, E R; Verster, J C; Kenemans, J L

    2008-01-01

    Inattention and distraction account for a substantial number of traffic accidents. Therefore, we examined the impact of secondary task performance (an auditory oddball task) on a primary driving task (lane keeping). Twenty healthy participants performed two 20-min tests in the Divided Attention Steering Simulator (DASS). The visual secondary task of the DASS was replaced by an auditory oddball task to allow recording of brain activity. The driving task and the secondary (distracting) oddball task were presented in isolation and simultaneously, to assess their mutual interference. In addition to performance measures (lane keeping in the primary driving task and reaction speed in the secondary oddball task), brain activity, i.e. event-related potentials (ERPs), was recorded. Performance parameters on the driving test and the secondary oddball task did not differ between performance in isolation and simultaneous performance. However, when both tasks were performed simultaneously, reaction time variability increased in the secondary oddball task. Analysis of brain activity indicated that ERP amplitude (P3a amplitude) related to the secondary task, was significantly reduced when the task was performed simultaneously with the driving test. This study shows that when performing a simple secondary task during driving, performance of the driving task and this secondary task are both unaffected. However, analysis of brain activity shows reduced cortical processing of irrelevant, potentially distracting stimuli from the secondary task during driving.

  15. Event-Related Potential Patterns Associated with Hyperarousal in Gulf War Illness Syndrome Groups

    Science.gov (United States)

    Tillman, Gail D.; Calley, Clifford S.; Green, Timothy A.; Buhl, Virginia I.; Biggs, Melanie M.; Spence, Jeffrey S.; Briggs, Richard W.; Haley, Robert W.; Hart, John; Kraut, Michael A.

    2012-01-01

    An exaggerated response to emotional stimuli is one of several symptoms widely reported by veterans of the 1991 Persian Gulf War. Many have attributed these symptoms to post-war stress; others have attributed the symptoms to deployment-related exposures and associated damage to cholinergic, dopaminergic, and white matter systems. We collected event-related potential (ERP) data from 20 veterans meeting Haley criteria for Gulf War Syndromes 1–3 and from 8 matched Gulf War veteran controls, who were deployed but not symptomatic, while they performed an auditory three-condition oddball task with gunshot and lion roar sounds as the distractor stimuli. Reports of hyperarousal from the ill veterans were significantly greater than those from the control veterans; different ERP profiles emerged to account for their hyperarousability. Syndromes 2 and 3, who have previously shown brainstem abnormalities, show significantly stronger auditory P1 amplitudes, purported to indicate compromised cholinergic inhibitory gating in the reticular activating system. Syndromes 1 and 2, who have previously shown basal ganglia dysfunction, show significantly weaker P3a response to distractor stimuli, purported to indicate dysfunction of the dopaminergic contribution to their ability to inhibit distraction by irrelevant stimuli. All three syndrome groups showed an attenuated P3b to target stimuli, which could be secondary to both cholinergic and dopaminergic contributions or disruption of white matter integrity. PMID:22691951

  16. Cognitive deficits following exposure to pneumococcal meningitis: an event-related potential study

    Directory of Open Access Journals (Sweden)

    Kihara Michael

    2012-03-01

    Full Text Available Abstract Background Pneumococcal meningitis (PM) is a severe and life-threatening disease that is associated with cognitive impairment including learning difficulties, cognitive slowness, short-term memory deficits and poor academic performance. There are limited data on cognitive outcomes following exposure to PM from Africa, mainly due to a lack of culturally appropriate tools. We report cognitive processes of exposed children as measured by auditory and visual event-related potentials. Methods Sixty-five children (32 male; mean age 8.4 years, SD 3.0 years) aged between 4 and 15 years with a history of PM and an age-matched control group of 93 children (46 male; mean age 8.4 years, SD 2.7 years) were recruited from a well-demarcated study area in Kilifi. In the present study, both baseline-to-peak and peak-to-peak amplitude differences are reported. Results Children with a history of pneumococcal meningitis had significantly longer auditory P1 and P3a latencies and smaller P1 amplitudes compared to unexposed children. In the visual paradigm, children with PM seemingly lacked a novelty P3a component around 350 ms where control children had a maximum, and showed a lack of stimulus differentiation at Nc. Further, children with exposure to PM had smaller peak-to-peak amplitudes (N2-P1) compared to unexposed children. Conclusion The results suggest that children with a history of PM process novelty differently than do unexposed children, with slower latencies and reduced or absent components. This pattern suggests poorer auditory attention and/or cognitive slowness and poorer visual attention orienting, possibly due to disruption in the functions of the lateral prefrontal and superior temporal cortices. ERPs may be useful for assessment of the development of perceptual-cognitive functions post brain-injury in African children by providing an alternate way of assessing cognitive development in patient groups for whom more typical standardized neuropsychological
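
    The baseline-to-peak and peak-to-peak (N2-P1) amplitude measures mentioned above can be illustrated in a few lines. The sketch below assumes a baseline-corrected average ERP and uses illustrative latency windows, not the values used in the study.

```python
import numpy as np

def peak_amplitudes(erp, times, p1_win=(0.05, 0.12), n2_win=(0.15, 0.30)):
    """Baseline-to-peak and peak-to-peak (N2-P1) amplitudes of an averaged,
    baseline-corrected ERP. Windows are illustrative, not the study's values."""
    p1 = erp[(times >= p1_win[0]) & (times <= p1_win[1])].max()   # positive peak vs. baseline
    n2 = erp[(times >= n2_win[0]) & (times <= n2_win[1])].min()   # negative peak vs. baseline
    return {"P1": p1, "N2": n2, "P1_to_N2_peak_to_peak": p1 - n2}

times = np.linspace(-0.1, 0.6, 701)
erp = 2.0 * np.exp(-((times - 0.09) / 0.02) ** 2) - 3.0 * np.exp(-((times - 0.22) / 0.04) ** 2)
print(peak_amplitudes(erp, times))
```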

  17. P300 Event-Related Potentials in Children with Dyslexia

    Science.gov (United States)

    Papagiannopoulou, Eleni A.; Lagopoulos, Jim

    2017-01-01

    To elucidate the timing and the nature of neural disturbances in dyslexia and to further understand the topographical distribution of these, we examined entire brain regions employing the non-invasive auditory oddball P300 paradigm in children with dyslexia and neurotypical controls. Our findings revealed abnormalities for the dyslexia group in…

  18. Gaze Patterns in Auditory-Visual Perception of Emotion by Children with Hearing Aids and Hearing Children

    Directory of Open Access Journals (Sweden)

    Yifang Wang

    2017-12-01

    Full Text Available This study investigated eye-movement patterns during emotion perception for children with hearing aids and hearing children. Seventy-eight participants aged from 3 to 7 were asked to watch videos with a facial expression followed by an oral statement, and these two cues were either congruent or incongruent in emotional valence. Results showed that while hearing children paid more attention to the upper part of the face, children with hearing aids paid more attention to the lower part of the face after the oral statement was presented, especially for the neutral facial expression/neutral oral statement condition. These results suggest that children with hearing aids have an altered eye contact pattern with others and a difficulty in matching visual and voice cues in emotion perception. The negative cause and effect of these gaze patterns should be avoided in earlier rehabilitation for hearing-impaired children with assistive devices.

  19. [Event-related potentials P₃₀₀ with memory function and psychopathology in first-episode paranoid schizophrenia].

    Science.gov (United States)

    Liu, Wei-bo; Chen, Qiao-zhen; Yin, Hou-min; Zheng, Lei-lei; Yu, Shao-hua; Chen, Yi-ping; Li, Hui-chun

    2011-11-01

    To investigate the variability of the event-related potential P₃₀₀ and its relationship with memory function and psychopathology in patients with first-episode paranoid schizophrenia. Thirty patients with first-episode paranoid schizophrenia (patient group) and twenty healthy subjects (control group) were enrolled in the study. The auditory event-related potential P₃₀₀ at the scalp electrodes Cz and Pz and the Wechsler Memory Scale (WMS) were examined in both groups, and the Positive And Negative Syndrome Scale (PANSS) was evaluated in the patient group. In comparison with the control group, patients had longer P₃₀₀ latencies [(390.6 ± 47.6) ms at Cz and (393.3 ± 50.1) ms at Pz]. First-episode paranoid schizophrenia is associated with memory deficits, which can be evaluated comprehensively by P₃₀₀ and the WMS. The longer latency of P₃₀₀ might be associated with increased severity of first-episode paranoid schizophrenia.

  20. Temporal integration: intentional sound discrimination does not modulate stimulus-driven processes in auditory event synthesis.

    Science.gov (United States)

    Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter

    2002-12-01

    Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes in response to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are fully governed by the context within which the sounds occur. Perception of the deviants as two separate sound events (the top-down effects) did not change the initial neural representation of the same deviants as one event (indexed by the MMN), without a corresponding change in the stimulus-driven sound organization.

  1. Measurement of event-related potentials and placebo

    Directory of Open Access Journals (Sweden)

    Sovilj Platon

    2014-01-01

    Full Text Available ERP is the common abbreviation for event-related brain potentials, which are measured and used in clinical practice as well as in research practice. Contemporary studies of the placebo effect are often based on functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and event-related potentials (ERPs). This paper considers an ERP instrumentation system used in experimental research on the placebo effect. This instrumentation system can be divided into four modules: electrodes and cables, a conditioning module, a digital measurement module, and a PC module for stimulation, presentation, acquisition and data processing. The experimental oddball paradigm is supported by the software of the instrumentation. [Project of the Ministry of Science of the Republic of Serbia, No. TR32019, and the Provincial Secretariat for Science and Technological Development of the Autonomous Province of Vojvodina (Republic of Serbia), research grant No. 114-451-2723]

  2. Emotion and attention : Event-related brain potential studies

    OpenAIRE

    Schupp, Harald Thomas; Flaisch, Tobias; Stockburger, Jessica; Junghöfer, Markus

    2006-01-01

    Emotional pictures guide selective visual attention. A series of event-related brain potential (ERP) studies is reviewed demonstrating the consistent and robust modulation of specific ERP components by emotional images. Specifically, pictures depicting natural pleasant and unpleasant scenes are associated with an increased early posterior negativity, late positive potential, and sustained positive slow wave compared with neutral contents. These modulations are considered to index different st...

  3. Top-Down Modulation of Auditory-Motor Integration during Speech Production: The Role of Working Memory.

    Science.gov (United States)

    Guo, Zhiqiang; Wu, Xiuqin; Li, Weifeng; Jones, Jeffery A; Yan, Nan; Sheft, Stanley; Liu, Peng; Liu, Hanjun

    2017-10-25

    Although working memory (WM) is considered as an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task that required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched, elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. The present study

  4. Auditory Neuropathy

    Science.gov (United States)

    ... children and adults with auditory neuropathy. Cochlear implants (electronic devices that compensate for damaged or nonworking parts ...

  5. Diminished auditory sensory gating during active auditory verbal hallucinations.

    Science.gov (United States)

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream; resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
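
    The gating ratio described above is simply the peak amplitude of the response to the second click divided by the peak amplitude of the response to the first click. The sketch below illustrates that computation on a synthetic averaged ERP; the component window and click timing are assumptions, not the study's parameters.

```python
import numpy as np

def gating_ratio(erp, times, s1_onset, s2_onset, comp_win=(0.04, 0.08)):
    """S2/S1 amplitude ratio for one ERP component (e.g., P50), following the
    definition in the abstract. The component window relative to each click
    onset is illustrative."""
    def peak(onset):
        mask = (times >= onset + comp_win[0]) & (times <= onset + comp_win[1])
        return erp[mask].max()
    return peak(s2_onset) / peak(s1_onset)

times = np.linspace(0, 1.2, 1201)
erp = (1.0 * np.exp(-((times - 0.06) / 0.01) ** 2)     # response to S1 at t = 0 s
       + 0.4 * np.exp(-((times - 0.56) / 0.01) ** 2))  # suppressed response to S2 at t = 0.5 s
print(gating_ratio(erp, times, s1_onset=0.0, s2_onset=0.5))   # ~0.4, i.e. substantial gating
```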

  6. Processing of emotional faces in congenital amusia: An emotional music priming event-related potential study.

    Science.gov (United States)

    Zhishuai, Jin; Hong, Liu; Daxing, Wu; Pin, Zhang; Xuejing, Lu

    2017-01-01

    Congenital amusia is characterized by lifelong impairments in music perception and processing. It is unclear whether pitch detection deficits impact amusic individuals' perception of musical emotion. In the current work, 19 amusics and 21 healthy controls were subjected to electroencephalography (EEG) while being exposed to music excerpts and emotional faces. We assessed each individual's ability to discriminate positive- and negative-valenced emotional faces and analyzed electrophysiological indices, in the form of event-related potentials (ERPs) recorded at 32 sites, following exposure to emotionally positive or negative music excerpts. We observed smaller N2 amplitudes in response to facial expressions in the amusia group than in the control group, suggesting that amusics were less affected by the musical stimuli. The late-positive component (LPC) in amusics was similar to that in controls. Our results suggest that the neurocognitive deficit characteristic of congenital amusia is fundamentally an impairment in musical information processing rather than an impairment in emotional processing.

  7. Processing of emotional faces in congenital amusia: An emotional music priming event-related potential study

    Directory of Open Access Journals (Sweden)

    Jin Zhishuai

    2017-01-01

    Full Text Available Congenital amusia is characterized by lifelong impairments in music perception and processing. It is unclear whether pitch detection deficits impact amusic individuals' perception of musical emotion. In the current work, 19 amusics and 21 healthy controls were subjected to electroencephalography (EEG) while being exposed to music excerpts and emotional faces. We assessed each individual's ability to discriminate positive- and negative-valenced emotional faces and analyzed electrophysiological indices, in the form of event-related potentials (ERPs) recorded at 32 sites, following exposure to emotionally positive or negative music excerpts. We observed smaller N2 amplitudes in response to facial expressions in the amusia group than in the control group, suggesting that amusics were less affected by the musical stimuli. The late-positive component (LPC) in amusics was similar to that in controls. Our results suggest that the neurocognitive deficit characteristic of congenital amusia is fundamentally an impairment in musical information processing rather than an impairment in emotional processing.

  8. Bilingualism and increased attention to speech: Evidence from event-related potentials.

    Science.gov (United States)

    Kuipers, Jan Rouke; Thierry, Guillaume

    2015-10-01

    A number of studies have shown that from an early age, bilinguals outperform their monolingual peers on executive control tasks. We previously found that bilingual children and adults also display greater attention to unexpected language switches within speech. Here, we investigated the effect of a bilingual upbringing on speech perception in one language. We recorded monolingual and bilingual toddlers' event-related potentials (ERPs) to spoken words preceded by pictures. Words matching the picture prime elicited an early frontal positivity in bilingual participants only, whereas later ERP amplitudes associated with semantic processing did not differ between groups. These results add to the growing body of evidence that bilingualism increases overall attention during speech perception whilst semantic integration is unaffected. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  10. Neural Correlates of Automatic and Controlled Auditory Processing in Schizophrenia

    Science.gov (United States)

    Morey, Rajendra A.; Mitchell, Teresa V.; Inan, Seniha; Lieberman, Jeffrey A.; Belger, Aysenil

    2009-01-01

    Individuals with schizophrenia demonstrate impairments in selective attention and sensory processing. The authors assessed differences in brain function between 26 participants with schizophrenia and 17 comparison subjects engaged in automatic (unattended) and controlled (attended) auditory information processing using event-related functional MRI. Lower regional neural activation during automatic auditory processing in the schizophrenia group was not confined to just the temporal lobe, but also extended to prefrontal regions. Controlled auditory processing was associated with a distributed frontotemporal and subcortical dysfunction. Differences in activation between these two modes of auditory information processing were more pronounced in the comparison group than in the patient group. PMID:19196926

  11. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    Full Text Available The basic structures involved in the development of auditory function, and consequently in language acquisition, are directed by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors: if favorable, these orient development in the proper direction, leading it towards normality; if unfavorable, they deviate it from its physiological course. Early sensory experience during the foetal period (i.e. the intrauterine noise floor, sounds coming from the outside and attenuated by the uterine filter, particularly the mother's voice) and the modifications it induces at the cochlear level represent the first example of programming in one of the earliest critical periods in the development of the auditory system. This review will examine the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular it focuses on the following points: the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system, from the basilar membrane to the cortex; the involvement of these phenomena in language acquisition and in the perception of the communicative intention of language after birth; and the consequences of auditory deprivation in critical periods of auditory development (i.e. premature interruption of foetal life).

  12. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  13. Neurodevelopment of Conflict Adaptation: Evidence From Event-Related Potentials

    DEFF Research Database (Denmark)

    Liu, Xiuying; Liu, Tongran; Shangguan, Fangfang

    2018-01-01

    Conflict adaptation is key in how children self-regulate and assert cognitive control in a given situation compared with a previous experience. In the current study, we analyzed event-related potentials (ERPs) to identify age-related differences in conflict adaptation. Participants of different ages (5-year-old children, 10-year-old children, and adults) were subjected to a stimulus-stimulus (S-S) conflict control task (the flanker task) and a stimulus-response (S-R) conflict control task (the Simon task). The behavioral results revealed that all age groups had reliable conflict adaptation... to better assimilate and accommodate potential environmental conflicts. The results may also indicate that the development of conflict adaptation is affected by the specific characteristics of the different types of conflict.

  14. Deficient multisensory integration in schizophrenia: an event-related potential study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Maes, Jan Pieter; Van Gool, Arthur R; Sitskoorn, Margriet; Vroomen, Jean

    2013-07-01

    In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. The electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound that was either congruent or incongruent with the video. For the healthy control group, visual information reduced the auditory-evoked N1 if compared to a sound-only condition, and stimulus-congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia. Copyright © 2013. Published by Elsevier B.V.

  15. Multivariate evaluation of brain function by measuring regional cerebral blood flow and event-related potentials

    Energy Technology Data Exchange (ETDEWEB)

    Koga, Yoshihiko; Mochida, Masahiko; Shutara, Yoshikazu; Nakagawa, Kazumi [Kyorin Univ., Mitaka, Tokyo (Japan). School of Medicine; Nagata, Ken

    1998-07-01

    To measure the effect of events on human cognitive function, the effects of odors were evaluated by measuring regional cerebral blood flow (rCBF) and P300 during an auditory odd-ball task. PET showed an increase in rCBF in the right hemisphere of the brain with coffee aroma. rCBF was measured by PET in 9 right-handed healthy adult men, and P300 was measured by event-related potential (ERP) in 20 right-handed healthy adults of each sex. ERP showed a difference in P300 amplitude between men and women, with a tendency, for odors other than lavender oil, for women to have higher P300 amplitudes than men. These results suggest the presence of effects on cognitive function through emotional actions. Next, the relationship between rCBF and ERP was evaluated. The subjects were 9 right-handed healthy adults (mean age 25.6±3.4 years). rCBF by PET and P300 amplitude by ERP were recorded simultaneously during the auditory odd-ball task using the tone-burst method (rare 2-kHz target stimuli and frequent 1-kHz non-target stimuli). The rCBF value was highest at the transverse gyrus of Heschl and lowest at the piriform cortex among the 24 regions of interest (ROIs) from both sides. P300 peak latency differed little among the ROIs. The brain waves from Cz and Pz were similar, and the average amplitude was highest at Pz. We found high correlations between P300 amplitude and rCBF in the right piriform cortex (Fz) and in the right (Fz, Cz) and left (Cz, Pz) transverse gyrus of Heschl. (K.H.)

  16. Bilateral theta-burst magnetic stimulation influence on event-related brain potentials.

    Science.gov (United States)

    Pinto, Nuno; Duarte, Marta; Gonçalves, Helena; Silva, Ricardo; Gama, Jorge; Pato, Maria Vaz

    2018-01-01

    Theta-burst stimulation (TBS) is a non-invasive technique that can modulate cognitive functions, with promising therapeutic potential but with some contradictory results. Event-related potentials are used as a marker of brain deterioration and can be used to evaluate TBS-related cognitive performance, but their use remains scant. This study aimed to examine bilateral inhibitory and excitatory TBS effects on the neurocognitive performance of young healthy volunteers, using auditory P300 results. In a double-blind, sham-controlled study, 51 healthy volunteers were randomly assigned to five different groups: two submitted to either excitatory (iTBS) or inhibitory (cTBS) stimulation over the left dorsolateral prefrontal cortex (DLPFC), two others receiving active stimulation of the right DLPFC, and finally a sham stimulation group. An oddball-based auditory P300 was performed just before a single session of iTBS, cTBS or sham stimulation and repeated immediately after. Comparison of P300 mean latencies between the pre- and post-TBS stimulation stages revealed significantly faster post-stimulation latencies only when iTBS was performed on the left hemisphere (p = 0.003). Right- and left-hemisphere cTBS significantly delayed P300 latency (right p = 0.026; left p = 0.000). Multiple comparisons for N200 showed slower latencies after iTBS over the right hemisphere. No significant difference was found in amplitude variation. TBS appears to effectively influence the neural networks involved in P300 generation, but the effects seem distinct for iTBS vs cTBS and for the right and left hemispheres. P300 evoked potentials can be an effective and practical tool to evaluate transcranial magnetic stimulation-related outcomes.

  17. Multivariate evaluation of brain function by measuring regional cerebral blood flow and event-related potentials

    International Nuclear Information System (INIS)

    Koga, Yoshihiko; Mochida, Masahiko; Shutara, Yoshikazu; Nakagawa, Kazumi; Nagata, Ken

    1998-01-01

    To assess how events affect human cognitive function, the effects of odors were evaluated by measuring regional cerebral blood flow (rCBF) and the P300 during an auditory oddball task. PET showed that coffee aroma increased rCBF in the right hemisphere. rCBF was measured by PET in 9 right-handed healthy adult men, and the P300 was recorded as an event-related potential (ERP) in 20 right-handed healthy adults of each sex. The ERP recordings showed a difference in P300 amplitude between men and women, with a tendency for women to show higher P300 amplitudes than men for all odors except lavender oil. These results suggest that odors affect cognitive function through emotional processes. Next, the relationship between rCBF and ERP was evaluated. The subjects were 9 right-handed healthy adults (mean age 25.6±3.4 years). rCBF (by PET) and P300 amplitude (by ERP) were recorded simultaneously during an auditory oddball task using tone bursts (2 kHz rare target stimuli and 1 kHz frequent non-target stimuli). Among the 24 regions of interest (ROI) from both hemispheres, rCBF was highest in the transverse gyrus of Heschl and lowest in the piriform cortex. P300 peak latency was nearly the same across ROIs. The waveforms at Cz and Pz were similar, and the mean amplitude was highest at Pz. High correlations between P300 amplitude and rCBF were found for the right piriform cortex (Fz) and for the right (Fz, Cz) and left (Cz, Pz) transverse gyrus of Heschl. (K.H.)
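
    The final step described above is correlational: for each subject, the P300 amplitude at a given electrode is paired with the rCBF value in a region of interest and the association is quantified. A minimal sketch of that step is shown below; the arrays, electrode/ROI labels and numbers are hypothetical placeholders, not data from the study.

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical per-subject values for nine subjects; the numbers are
        # placeholders for illustration, not data from the study.
        p300_amplitude_fz = np.array([8.2, 10.1, 7.5, 12.3, 9.0, 11.4, 6.8, 9.7, 10.5])      # microvolts
        rcbf_right_heschl = np.array([52.0, 58.3, 50.1, 61.2, 54.4, 59.8, 49.0, 55.6, 57.2])  # ml/100 g/min

        # Pearson correlation between P300 amplitude and regional cerebral blood flow.
        r, p_value = pearsonr(p300_amplitude_fz, rcbf_right_heschl)
        print(f"r = {r:.2f}, p = {p_value:.3f}")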

  18. A Novel Functional Magnetic Resonance Imaging Paradigm for the Preoperative Assessment of Auditory Perception in a Musician Undergoing Temporal Lobe Surgery.

    Science.gov (United States)

    Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J

    2018-03-01

    Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Neural Correlates of Selective Attention With Hearing Aid Use Followed by ReadMyQuips Auditory Training Program.

    Science.gov (United States)

    Rao, Aparna; Rishiq, Dania; Yu, Luodi; Zhang, Yang; Abrams, Harvey

    The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments. Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training in 4 weeks, and the control group received listening practice on audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at prefitting, pretraining, and post-training to assess effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded from participants. After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. This reduction in P3a was correlated with improvement in d prime (d') in the selective attention task. Increased P3b amplitudes were also correlated with improvement in d' in the selective attention task. After training, this correlation between P3b and d' remained in the experimental group, but not in the control group. Similarly, HINT testing showed improved speech perception post training only in the experimental group. The criterion calculated in the auditory selective attention task showed a reduction only in the experimental group after training. ERP measures in the auditory selective attention task did not show any changes related to training. Hearing aid use was associated with a decrement in involuntary attention switch to distractors in the auditory selective
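
    The d' (sensitivity) and criterion values reported above come from standard signal detection theory, computed from the hit and false-alarm rates of the selective attention task. The sketch below shows those two textbook formulas; the example rates are illustrative, not the study's data.

        from scipy.stats import norm

        def sdt_indices(hit_rate, fa_rate):
            """Return sensitivity d' and criterion c from hit and false-alarm rates."""
            z_hit = norm.ppf(hit_rate)   # inverse-normal transform of the hit rate
            z_fa = norm.ppf(fa_rate)     # inverse-normal transform of the false-alarm rate
            d_prime = z_hit - z_fa
            criterion = -0.5 * (z_hit + z_fa)
            return d_prime, criterion

        # Illustrative rates only, e.g. 85% hits and 20% false alarms.
        d, c = sdt_indices(0.85, 0.20)
        print(f"d' = {d:.2f}, criterion = {c:.2f}")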

  20. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    Directory of Open Access Journals (Sweden)

    Karin Petrini

    Full Text Available In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound), as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  1. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  2. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen eKaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  3. Event-Related Potential Measures of Attention Capture in Adolescent Inpatients With Acute Suicidal Behavior.

    Science.gov (United States)

    Tavakoli, Paniz; Boafo, Addo; Dale, Allyson; Robillard, Rebecca; Greenham, Stephanie L; Campbell, Kenneth

    2018-01-01

    Impaired executive functions, modulated by the frontal lobes, have been suggested to be associated with suicidal behavior. The present study examines one of these executive functions, attentional control, maintaining attention to the task-at-hand. A group of inpatient adolescents with acute suicidal behavior and healthy controls were studied using a passively presented auditory optimal paradigm. This "optimal" paradigm consisted of a series of frequently presented homogenous pure tone "standards" and different "deviants," constructed by changing one or more features of the standard. The optimal paradigm has been shown to be a more time-efficient replacement to the traditional oddball paradigm, which makes it suitable for use in clinical populations. The extent of processing of these "to-be-ignored" auditory stimuli was measured by recording event-related potentials (ERPs). The P3a ERP component is thought to reflect processes associated with the capturing of attention. Rare and novel stimuli may result in an executive decision to switch attention away from the current cognitive task and toward a probe of the potentially more relevant "interrupting" auditory input. On the other hand, stimuli that are quite similar to the standard should not elicit P3a. The P3a has been shown to be larger in immature brains in early compared to later adolescence. An overall enhanced P3a was observed in the suicidal group. The P3a was larger in this group for both the environmental sound and white noise deviants, although only the environmental sound P3a attained significance. Other deviants representing only a small change from the standard did not elicit a P3a in healthy controls. They did elicit a small P3a in the suicidal group. These findings suggest a lowered threshold for the triggering of the involuntary switch of attention in these patients, which may play a role in their reported distractibility. The enhanced P3a is also suggestive of an immature frontal central executive

  4. Event-Related Potential Measures of Attention Capture in Adolescent Inpatients With Acute Suicidal Behavior

    Directory of Open Access Journals (Sweden)

    Paniz Tavakoli

    2018-03-01

    Full Text Available Impaired executive functions, modulated by the frontal lobes, have been suggested to be associated with suicidal behavior. The present study examines one of these executive functions, attentional control, maintaining attention to the task-at-hand. A group of inpatient adolescents with acute suicidal behavior and healthy controls were studied using a passively presented auditory optimal paradigm. This “optimal” paradigm consisted of a series of frequently presented homogenous pure tone “standards” and different “deviants,” constructed by changing one or more features of the standard. The optimal paradigm has been shown to be a more time-efficient replacement to the traditional oddball paradigm, which makes it suitable for use in clinical populations. The extent of processing of these “to-be-ignored” auditory stimuli was measured by recording event-related potentials (ERPs). The P3a ERP component is thought to reflect processes associated with the capturing of attention. Rare and novel stimuli may result in an executive decision to switch attention away from the current cognitive task and toward a probe of the potentially more relevant “interrupting” auditory input. On the other hand, stimuli that are quite similar to the standard should not elicit P3a. The P3a has been shown to be larger in immature brains in early compared to later adolescence. An overall enhanced P3a was observed in the suicidal group. The P3a was larger in this group for both the environmental sound and white noise deviants, although only the environmental sound P3a attained significance. Other deviants representing only a small change from the standard did not elicit a P3a in healthy controls. They did elicit a small P3a in the suicidal group. These findings suggest a lowered threshold for the triggering of the involuntary switch of attention in these patients, which may play a role in their reported distractibility. The enhanced P3a is also suggestive of

  5. Event-Related Potentials and Emotion Processing in Child Psychopathology

    Directory of Open Access Journals (Sweden)

    Georgia eChronaki

    2016-04-01

    Full Text Available In recent years there has been increasing interest in the neural mechanisms underlying altered emotional processes in children and adolescents with psychopathology. This review provides a brief overview of the most up-to-date findings in the field of Event-Related Potentials (ERPs) to facial and vocal emotional expressions in the most common child psychopathological conditions. With regard to externalising behaviour (i.e. ADHD, CD), ERP studies show enhanced early components to anger, reflecting enhanced sensory processing, followed by reductions in later components to anger, reflecting reduced cognitive-evaluative processing. With regard to internalising behaviour, research supports models of increased processing of threat stimuli especially at later more elaborate and effortful stages. Finally, in autism spectrum disorders abnormalities have been observed at early visual-perceptual stages of processing. An affective neuroscience framework for understanding child psychopathology can be valuable in elucidating underlying mechanisms and informing preventive interventions.

  6. Comparison of event related potentials with and without hypnagogic imagery.

    Science.gov (United States)

    Michida, N; Hayashi, M; Hori, T

    1998-04-01

    It is hypothesized that when hypnagogic imagery occurs, attention is allocated to the imagery, so that the allocation of attention to external tone stimuli is diminished. The N3 amplitude of event-related potentials (ERPs) differed significantly between the conditions with and without imagery. Behavioral and electroencephalographic arousal levels did not differ between the conditions; the decrease in N3 amplitude during imagery is therefore interpreted as reflecting a reduced allocation of attention to the external tone stimuli. Another late ERP component, the P3, did not show clear peaks in this study despite the large time constant (tau=3.2 s) used for the EEG recordings.

  7. Probabilistic delay differential equation modeling of event-related potentials.

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger

    2016-08-01

    "Dynamic causal models" (DCMs) are a promising approach in the analysis of functional neuroimaging data due to their biophysical interpretability and their consolidation of functional-segregative and functional-integrative propositions. In this theoretical note we are concerned with the DCM framework for electroencephalographically recorded event-related potentials (ERP-DCM). Intuitively, ERP-DCM combines deterministic dynamical neural mass models with dipole-based EEG forward models to describe the event-related scalp potential time-series over the entire electrode space. Since its inception, ERP-DCM has been successfully employed to capture the neural underpinnings of a wide range of neurocognitive phenomena. However, in spite of its empirical popularity, the technical literature on ERP-DCM remains somewhat patchy. A number of previous communications have detailed certain aspects of the approach, but no unified and coherent documentation exists. With this technical note, we aim to close this gap and to increase the technical accessibility of ERP-DCM. Specifically, this note makes the following novel contributions: firstly, we provide a unified and coherent review of the mathematical machinery of the latent and forward models constituting ERP-DCM by formulating the approach as a probabilistic latent delay differential equation model. Secondly, we emphasize the probabilistic nature of the model and its variational Bayesian inversion scheme by explicitly deriving the variational free energy function in terms of both the likelihood expectation and variance parameters. Thirdly, we detail and validate the estimation of the model with a special focus on the explicit form of the variational free energy function and introduce a conventional nonlinear optimization scheme for its maximization. Finally, we identify and discuss a number of computational issues which may be addressed in the future development of the approach. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. The development of involuntary and voluntary attention from childhood to adulthood: a combined behavioral and event-related potential study.

    Science.gov (United States)

    Wetzel, Nicole; Widmann, Andreas; Berti, Stefan; Schröger, Erich

    2006-10-01

    This study investigated auditory involuntary and voluntary attention in children aged 6-8, 10-12 and young adults. The strength of distracting stimuli (20% and 5% pitch changes) and the amount of allocation of attention were varied. In an auditory distraction paradigm event-related potentials (ERPs) and behavioral data were measured from subjects either performing a sound duration discrimination task or watching a silent video. Pitch changed sounds caused prolonged reaction times and decreased hit rates in all age groups. Larger distractors (20%) caused stronger distraction in children, but not in adults. The amplitudes of mismatch negativity (MMN), P3a, and reorienting negativity (RON) were modulated by age and by voluntary attention. P3a was additionally affected by distractor strength. Maturational changes were also observed in the amplitudes of P1 (decreasing with age) and N1 (increasing with age). P2-modulation by voluntary attention was opposite in young children and adults. Results suggest quantitative and qualitative changes in auditory voluntary and involuntary attention and distraction during development. The processing steps involved in distraction (pre-attentive change detection, attention switch, reorienting) are functional in children aged 6-8 but reveal characteristic differences to those of young adults. In general, distractibility as indicated by behavioral and ERP measures decreases from childhood to adulthood. Behavioral and ERP markers for different processing stages involved in voluntary and involuntary attention reveal characteristic developmental changes from childhood to young adulthood.

  9. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  10. Auditory processing in autism spectrum disorder

    DEFF Research Database (Denmark)

    Vlaskamp, Chantal; Oranje, Bob; Madsen, Gitte Falcher

    2017-01-01

    Children with autism spectrum disorders (ASD) often show changes in (automatic) auditory processing. Electrophysiology provides a method to study auditory processing, by investigating event-related potentials such as mismatch negativity (MMN) and P3a-amplitude. However, findings on MMN in autism...... a hyper-responsivity at the attentional level. In addition, as similar MMN deficits are found in schizophrenia, these MMN results may explain some of the frequently reported increased risk of children with ASD to develop schizophrenia later in life. Autism Res 2017, 10: 1857–1865....

  11. Feature conjunctions and auditory sensory memory.

    Science.gov (United States)

    Sussman, E; Gomes, H; Nousak, J M; Ritter, W; Vaughan, H G

    1998-05-18

    This study sought to obtain additional evidence that transient auditory memory stores information about conjunctions of features on an automatic basis. The mismatch negativity of event-related potentials was employed because its operations are based on information that is stored in transient auditory memory. The mismatch negativity was found to be elicited by a tone that differed from standard tones in a combination of its perceived location and frequency. The result lends further support to the hypothesis that the system upon which the mismatch negativity relies processes stimuli in an holistic manner. Copyright 1998 Elsevier Science B.V.

  12. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  13. Transitional Probabilities Are Prioritized over Stimulus/Pattern Probabilities in Auditory Deviance Detection: Memory Basis for Predictive Sound Processing.

    Science.gov (United States)

    Mittag, Maria; Takegata, Rika; Winkler, István

    2016-09-14

    Representations encoding the probabilities of auditory events do not directly support predictive processing. In contrast, information about the probability with which a given sound follows another (transitional probability) allows predictions of upcoming sounds. We tested whether behavioral and cortical auditory deviance detection (the latter indexed by the mismatch negativity event-related potential) relies on probabilities of sound patterns or on transitional probabilities. We presented healthy adult volunteers with three types of rare tone-triplets among frequent standard triplets of high-low-high (H-L-H) or L-H-L pitch structure: proximity deviant (H-H-H/L-L-L), reversal deviant (L-H-L/H-L-H), and first-tone deviant (L-L-H/H-H-L). If deviance detection was based on pattern probability, reversal and first-tone deviants should be detected with similar latency because both differ from the standard at the first pattern position. If deviance detection was based on transitional probabilities, then reversal deviants should be the most difficult to detect because, unlike the other two deviants, they contain no low-probability pitch transitions. The data clearly showed that both behavioral and cortical auditory deviance detection uses transitional probabilities. Thus, the memory traces underlying cortical deviance detection may provide a link between stimulus probability-based change/novelty detectors operating at lower levels of the auditory system and higher auditory cognitive functions that involve predictive processing. Our research presents the first definite evidence for the auditory system prioritizing transitional probabilities over probabilities of individual sensory events. Forming representations for transitional probabilities paves the way for predictions of upcoming sounds. Several recent theories suggest that predictive processing provides the general basis of human perception, including important auditory functions, such as auditory scene analysis. Our
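
    The contrast tested here is between the marginal probability of sounds (or sound patterns) and the transitional probability of one sound given the preceding one. The sketch below estimates both quantities from a hypothetical high/low tone stream built mostly from H-L-H standard triplets; the sequence construction is a simplified stand-in for the actual stimulus protocol.

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(0)

        # Hypothetical stimulus stream: mostly standard H-L-H triplets with a few
        # reversal-deviant L-H-L triplets; 'H' = high tone, 'L' = low tone.
        triplets = ['HLH'] * 90 + ['LHL'] * 10
        rng.shuffle(triplets)
        sequence = ''.join(triplets)

        # Marginal (stimulus) probability of each tone.
        tone_counts = Counter(sequence)
        marginal = {tone: n / len(sequence) for tone, n in tone_counts.items()}

        # First-order transitional probability P(next tone | current tone).
        pair_counts = Counter(zip(sequence[:-1], sequence[1:]))
        from_counts = Counter(sequence[:-1])
        transitional = {pair: n / from_counts[pair[0]] for pair, n in pair_counts.items()}

        print("marginal:", marginal)
        print("transitional:", transitional)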

  14. Association between waking electroencephalography and cognitive event-related potentials in patients with obstructive sleep apnea.

    Science.gov (United States)

    Baril, Andrée-Ann; Gagnon, Katia; Gagnon, Jean-François; Montplaisir, Jacques; Gosselin, Nadia

    2013-07-01

    Sleepiness, cognitive deficits, abnormal event-related potentials (ERP), and slowing of the waking electroencephalography (EEG) activity have been reported in patients with obstructive sleep apnea (OSA). Our study aimed at evaluating if an association exists between the severity of ERP abnormalities and EEG slowing to better understand cerebral dysfunctions in OSA. Twelve OSA patients and 12 age-matched controls underwent an overnight polysomnographic recording, an EEG recording of 10 min of wakefulness, and an auditory ERP protocol known to specifically recruit attention. P300 and P3a ERP components were measured as well as the spectral power in each frequency band of the waking EEG. Pearson product moment correlations were used to measure associations between ERP characteristics and EEG spectral power in OSA patients and control subjects. A positive correlation between the late P300 amplitude and θ power in the occipital region was observed in OSA subjects (P<.01). A positive correlation was also found between P3a amplitude and β1 power in central region in OSA subjects (P<.01). No correlation was observed for control subjects. ERP abnormalities observed in an attention task are associated with a slowing of the waking EEG recorded at rest in OSA. Copyright © 2013 Elsevier B.V. All rights reserved.
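
    The association reported here is between ERP component amplitudes and spectral power of the waking EEG in specific frequency bands. A minimal sketch of that pipeline, assuming hypothetical per-subject resting EEG segments and P300 amplitudes (and an assumed sampling rate and theta band definition), is shown below.

        import numpy as np
        from scipy.signal import welch
        from scipy.integrate import trapezoid
        from scipy.stats import pearsonr

        fs = 256  # assumed sampling rate in Hz

        def band_power(eeg, low, high):
            """Absolute power in [low, high) Hz estimated with Welch's method."""
            freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
            mask = (freqs >= low) & (freqs < high)
            return trapezoid(psd[mask], freqs[mask])

        # Hypothetical data: one 60 s resting EEG segment and one P300 amplitude per subject.
        rng = np.random.default_rng(1)
        n_subjects = 12
        eeg_segments = [rng.standard_normal(60 * fs) for _ in range(n_subjects)]
        p300_amplitudes = rng.normal(10.0, 2.0, n_subjects)     # microvolts

        theta_power = np.array([band_power(seg, 4.0, 8.0) for seg in eeg_segments])
        r, p = pearsonr(p300_amplitudes, theta_power)
        print(f"P300 amplitude vs theta power: r = {r:.2f}, p = {p:.3f}")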

  15. Can echoic memory store two traces simultaneously? A study of event-related brain potentials.

    Science.gov (United States)

    Winkler, I; Paavilainen, P; Näätänen, R

    1992-05-01

    The mismatch negativity, a component of the event-related brain potential elicited by infrequent deviants in sequences of auditory stimuli, is presumably generated by an automatic mismatch process in a mechanism that compares the current stimulus to the trace of the previous one. The present study addressed the possible simultaneous existence of two such traces. Two equiprobable (45% each) frequent stimuli ("standards"), one of 600 Hz and the other of 700 Hz, were presented together with an infrequent (10%), "deviant" stimulus which was of different frequency in different blocks. These deviants elicited a mismatch negativity, though a smaller one than that obtained in corresponding blocks with only one standard stimulus. Two aspects of the present results from the blocks with two standard stimuli implicate two parallel stimulus traces in these blocks: 1) deviants elicited a mismatch negativity (MMN) of approximately the same amplitude when preceded by sequences of four identical standards as when preceded by sequences of four stimuli containing both standards; 2) in contrast to the one-standard condition, the magnitude of stimulus deviance did not affect the MMN component elicited by the different deviants.
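
    The design above presents two equiprobable standards (45% each, 600 and 700 Hz) together with a 10% deviant whose frequency varies across blocks. The sketch below generates such a trial sequence; the deviant frequency and the sequence length are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(42)

        n_trials = 400
        tones = np.array([600.0, 700.0, 800.0])   # two standards and one (illustrative) deviant, in Hz
        probs = np.array([0.45, 0.45, 0.10])      # presentation probabilities from the design

        sequence = rng.choice(tones, size=n_trials, p=probs)
        is_deviant = sequence == 800.0            # deviant trials in this sketch
        print(f"deviant proportion: {is_deviant.mean():.2f}")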

  16. Cognitive processing in non-communicative patients: what can event-related potentials tell us?

    Directory of Open Access Journals (Sweden)

    Zulay Rosario Lugo

    2016-11-01

    Full Text Available Event-related potentials (ERP have been proposed to improve the differential diagnosis of non-responsive patients. We investigated the potential of the P300 as a reliable marker of conscious processing in patients with locked-in syndrome (LIS. Eleven chronic LIS patients and ten healthy subjects (HS listened to a complex-tone auditory oddball paradigm, first in a passive condition (listen to the sounds and then in an active condition (counting the deviant tones. Seven out of nine HS displayed a P300 waveform in the passive condition and all in the active condition. HS showed statistically significant changes in peak and area amplitude between conditions. Three out of seven LIS patients showed the P3 waveform in the passive condition and 5 of 7 in the active condition. No changes in peak amplitude and only a significant difference at one electrode in area amplitude were observed in this group between conditions. We conclude that, in spite of keeping full consciousness and intact or nearly intact cortical functions, compared to HS, LIS patients present less reliable results when testing with ERP, specifically in the passive condition. We thus strongly recommend applying ERP paradigms in an active condition when evaluating consciousness in non-responsive patients.

  17. A frontal cortex event-related potential driven by the basal forebrain

    Science.gov (United States)

    Nguyen, David P; Lin, Shih-Chieh

    2014-01-01

    Event-related potentials (ERPs) are widely used in both healthy and neuropsychiatric conditions as physiological indices of cognitive functions. Contrary to the common belief that cognitive ERPs are generated by local activity within the cerebral cortex, here we show that an attention-related ERP in the frontal cortex is correlated with, and likely generated by, subcortical inputs from the basal forebrain (BF). In rats performing an auditory oddball task, both the amplitude and timing of the frontal ERP were coupled with BF neuronal activity in single trials. The local field potentials (LFPs) associated with the frontal ERP, concentrated in deep cortical layers corresponding to the zone of BF input, were similarly coupled with BF activity and consistently triggered by BF electrical stimulation within 5–10 msec. These results highlight the important and previously unrecognized role of long-range subcortical inputs from the BF in the generation of cognitive ERPs. DOI: http://dx.doi.org/10.7554/eLife.02148.001 PMID:24714497

  18. Toward brain-computer interface based wheelchair control utilizing tactually-evoked event-related potentials

    Science.gov (United States)

    2014-01-01

    Background: People with severe disabilities, e.g. due to neurodegenerative disease, depend on technology that allows for accurate wheelchair control. For those who cannot operate a wheelchair with a joystick, brain-computer interfaces (BCI) may offer a valuable option. Technology depending on visual or auditory input may not be feasible as these modalities are dedicated to processing of environmental stimuli (e.g. recognition of obstacles, ambient noise). Herein we thus validated the feasibility of a BCI based on tactually-evoked event-related potentials (ERP) for wheelchair control. Furthermore, we investigated use of a dynamic stopping method to improve speed of the tactile BCI system. Methods: Positions of four tactile stimulators represented navigation directions (left thigh: move left; right thigh: move right; abdomen: move forward; lower neck: move backward) and N = 15 participants delivered navigation commands by focusing their attention on the desired tactile stimulus in an oddball-paradigm. Results: Participants navigated a virtual wheelchair through a building and eleven participants successfully completed the task of reaching 4 checkpoints in the building. The virtual wheelchair was equipped with simulated shared-control sensors (collision avoidance), yet these sensors were rarely needed. Conclusion: We conclude that most participants achieved tactile ERP-BCI control sufficient to reliably operate a wheelchair and dynamic stopping was of high value for tactile ERP classification. Finally, this paper discusses feasibility of tactile ERPs for BCI based wheelchair control. PMID:24428900
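
    Dynamic stopping lets the ERP-BCI issue a navigation command as soon as the accumulated evidence is decisive, instead of always waiting for a fixed number of stimulus repetitions. The sketch below shows one generic way to implement that idea, accumulating per-command classifier scores and stopping once the leading command exceeds the runner-up by a margin; the scoring scheme, margin and repetition limit are assumptions rather than the study's classifier.

        import numpy as np

        def dynamic_stopping(score_stream, n_classes=4, margin=3.0, max_repetitions=10):
            """Accumulate one classifier score per navigation command for each stimulus
            repetition and stop early once the leading command exceeds the runner-up
            by `margin` (or the repetition limit is reached)."""
            totals = np.zeros(n_classes)
            reps_used = 0
            for scores in score_stream:
                totals += scores
                reps_used += 1
                ranked = np.sort(totals)[::-1]
                if ranked[0] - ranked[1] >= margin or reps_used == max_repetitions:
                    break
            return int(np.argmax(totals)), reps_used

        # Illustrative use: noisy scores in which command 2 ("move forward") is attended.
        rng = np.random.default_rng(7)
        stream = (rng.normal(0.0, 1.0, 4) + np.array([0.0, 0.0, 1.5, 0.0]) for _ in range(10))
        command, reps = dynamic_stopping(stream)
        print(f"selected command {command} after {reps} repetitions")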

  19. Event-Related Potentials in Parkinson’s Disease: A Review

    Directory of Open Access Journals (Sweden)

    E. Růžička

    1993-01-01

    Full Text Available This article reviews the findings of event-related potentials (ERP) in Parkinson's disease (PD) published during the last 10 years. Basic principles and methods of ERP are briefly presented with particular regard to the auditory “odd-ball” paradigm almost exclusively employed for ERP assessment in PD to date. The results of the respective studies are overviewed and discussed with respect to three main axes: (1) The slowing of cognitive processing in PD is reflected by delays of the N2 and P3 components of the ERP, which are more pronounced in demented than in non-demented patients. The N1 component is delayed in demented patients with PD as well as in other dementias of presumed subcortical origin. (2) Various neuropsychological deficits observed in PD correlate with the ERP delays, suggesting the involvement of common subcortico-cortical cerebral mechanisms. (3) The variations of ERP under dopaminergic manipulation suggest conflicting effects of levodopa treatment on cognition, at least in certain categories of PD patients. These findings are discussed in the light of current knowledge on neurotransmitter brain systems and some hypothetical explanations are proposed. Finally, an attempt is made to outline further perspectives of clinical and research utilization of ERP in Parkinson's disease.

  20. Cognitive deficits in amyotrophic lateral sclerosis evaluated by event-related potentials.

    Science.gov (United States)

    Ogawa, Tomohiro; Tanaka, Hideaki; Hirata, Koichi

    2009-04-01

    To determine the cognitive profiles in non-demented, relatively less handicapped patients with early-stage sporadic amyotrophic lateral sclerosis (ALS) by using neuropsychological tests, event-related potentials (ERPs) and a clinical scale. We recruited 19 patients with sporadic ALS (eight with limb-onset, 11 with bulbar-onset) and 19 controls. In addition to the mini-mental state examination and the Wechsler adult intelligence scale-revised, we assessed frontal lobe function with the Wisconsin card sorting test, Stroop test and trail making test. We used an auditory 'oddball' counting paradigm for the ERPs under 20-channel electroencephalogram (EEG) recording. Global field power (GFP) was computed, and its peak amplitudes and latencies of N1/N2/P3 were determined. The results of ERP and neuropsychological tests were correlated with respiratory function and the clinical scale. No global cognitive impairment except for subtle frontal dysfunction was detected, although N1/N2/P3 GFP latencies were significantly prolonged in ALS patients compared with controls. Vital capacity correlated with P3 GFP amplitude, and the relative bulbar functional rating scale correlated with P3 GFP latency. Our findings indicated the presence of sub-clinical cognitive deficits in non-demented, sporadic ALS patients. In addition, clinical sub-types and respiratory function dependently influenced cognitive function in patients with sporadic ALS. ERP confirmed cognitive impairment in patients with sporadic ALS.
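
    Global field power (GFP), used above for N1/N2/P3 quantification, is a reference-free measure defined at each time point as the standard deviation of the potential across all electrodes; its peaks give component amplitudes and latencies. A minimal sketch with a hypothetical averaged ERP array is shown below; the channel count, sampling rate and simulated waveform are illustrative.

        import numpy as np

        def global_field_power(erp):
            """GFP at each sample: standard deviation of the potential across channels.
            `erp` has shape (n_channels, n_samples), e.g. an averaged ERP."""
            return erp.std(axis=0)

        # Hypothetical averaged ERP: 20 channels, 1 s epoch sampled at 500 Hz,
        # with a P3-like bump around 350 ms added on top of noise.
        rng = np.random.default_rng(3)
        fs = 500
        times = np.arange(0.0, 1.0, 1.0 / fs)
        bump = np.exp(-((times - 0.35) ** 2) / (2 * 0.05 ** 2))
        erp = rng.normal(0.0, 0.2, (20, times.size)) + np.outer(rng.uniform(0.5, 1.5, 20), bump)

        gfp = global_field_power(erp)
        peak = np.argmax(gfp)
        print(f"GFP peak latency: {times[peak] * 1000:.0f} ms, amplitude: {gfp[peak]:.2f}")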

  1. P300 event-related potentials in children with dyslexia.

    Science.gov (United States)

    Papagiannopoulou, Eleni A; Lagopoulos, Jim

    2017-04-01

    To elucidate the timing and the nature of neural disturbances in dyslexia and to further understand the topographical distribution of these, we examined entire brain regions employing the non-invasive auditory oddball P300 paradigm in children with dyslexia and neurotypical controls. Our findings revealed abnormalities for the dyslexia group in (i) P300 latency, globally, but greatest in frontal brain regions and (ii) decreased P300 amplitude confined to the central brain regions (Fig. 1). These findings reflect abnormalities associated with a diminished capacity to process mental workload as well as delayed processing of this information in children with dyslexia. Furthermore, the topographical distribution of these findings suggests a distinct spatial distribution for the observed P300 abnormalities. This information may be useful in future therapeutic or brain stimulation intervention trials.

  2. Differing Event-Related Patterns of Gamma-Band Power in Brain Waves of Fast- and Slow-Reacting Subjects

    Science.gov (United States)

    1994-05-01

    Wilhelm Wundt proposed that there are two types of subjects in simple RT experiments: fast-reacting subjects, who respond before they fully...quickly as possible to auditory stimuli. This result appears to confirm long-standing speculations of Wundt that fast- and slow-reacting subjects...accord with the hypothesis of Wundt and others that slower ("sensorial") responders wait to fully perceive a stimulus and then react to their perception

  3. Modulations of 'late' event-related brain potentials in humans by dynamic audiovisual speech stimuli.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; Douiri, Abdel; de Bode, Stella; Gillon Dowens, Margaret; Baudonnière, Pierre-Marie

    2004-11-30

    Lipreading reliably improves speech perception during face-to-face conversation. Within the range of good dubbing, however, adults tolerate some audiovisual (AV) discrepancies and lipreading, then, can give rise to confusion. We used event-related brain potentials (ERPs) to study the perceptual strategies governing the intermodal processing of dynamic and bimodal speech stimuli, either congruently dubbed or not. Electrophysiological analyses revealed that non-coherent audiovisual dubbings modulated the amplitude of an endogenous ERP component, the N300, which we compared to an 'N400-like effect' reflecting the difficulty of integrating these conflicting pieces of information. This result adds further support for the existence of a cerebral system underlying 'integrative processes' lato sensu. Further studies should take advantage of this 'N400-like effect' with AV speech stimuli to open new perspectives in the domain of psycholinguistics.

  4. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error), and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources, even though the prediction originates from different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Procedures for central auditory processing screening in schoolchildren.

    Science.gov (United States)

    Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella

    2018-03-22

    Central auditory processing screening in schoolchildren has led to debates in the literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PubMed databases by two researchers. The descriptors used in Portuguese and English were: auditory processing, screening, hearing, auditory perception, children, auditory tests, and their respective terms in Portuguese. Inclusion criteria were original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English; exclusion criteria were studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluation of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and that are normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easy-access and low-cost alternative in the auditory screening of Brazilian schoolchildren. Interactive tools should be proposed, that

  6. Event-related potentials, cognition, and behavior: a biological approach.

    Science.gov (United States)

    Kotchoubey, Boris

    2006-01-01

    The prevailing cognitive-psychological accounts of event-related brain potentials (ERPs) assume that ERP components manifest information processing operations leading from stimulus to response. Since this view encounters numerous difficulties already analyzed in previous studies, an alternative view is presented here that regards cortical control of behavior as a repetitive sensorimotor cycle consisting of two phases: (i) feedforward anticipation and (ii) feedback cortical performance. This view allows us to interpret in an integrative manner numerous data obtained from very different domains of ERP studies: from biophysics of ERP waves to their relationship to the processing of language, in which verbal behavior is viewed as likewise controlled by the same two basic control processes: feedforward (hypothesis building) and feedback (hypothesis checking). The proposed approach is intentionally simplified, explaining numerous effects on the basis of few assumptions and relating several levels of analysis: neurophysiology, macroelectrical processes (i.e. ERPs), cognition and behavior. It can, therefore, be regarded as a first approximation to a general theory of ERPs.

  7. Early event related fields during visually evoked pain anticipation.

    Science.gov (United States)

    Gopalakrishnan, Raghavan; Burgess, Richard C; Plow, Ela B; Floden, Darlene P; Machado, Andre G

    2016-03-01

    Pain experience is not only a function of somatosensory inputs. Rather, it is strongly influenced by cognitive and affective pathways. Pain anticipatory phenomena, an important limitation to rehabilitative efforts in the chronic state, are processed by associative and limbic networks, along with primary sensory cortices. Characterization of neurophysiological correlates of pain anticipation, particularly during very early stages of neural processing, is critical for development of therapeutic interventions. Here, we utilized magnetoencephalography to study early event-related fields (ERFs) in healthy subjects exposed to a 3 s visual countdown task that preceded a painful stimulus, a non-painful stimulus or no stimulus. We found that the first countdown cue, but not the last cue, evoked critical ERFs signaling anticipation, attention and alertness to the noxious stimuli. Further, we found that P2 and N2 components were significantly different in response to first-cues that signaled incoming painful stimuli when compared to non-painful or no stimuli. The findings indicate that early ERFs are relevant neural substrates of pain anticipatory phenomena and could potentially serve as biomarkers. These measures could assist in the development of neurostimulation approaches aimed at curbing the negative effects of pain anticipation during rehabilitation. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Attention in essential tremor: evidence from event-related potentials.

    Science.gov (United States)

    Pauletti, C; Mannarelli, D; Locuratolo, N; Vanacore, N; De Lucia, M C; Mina, C; Fattapposta, F

    2013-07-01

    Clinically subtle executive dysfunctions have recently been described in essential tremor (ET), though the presence of attentional deficits is still unclear. We investigated the psychophysiological aspects of attention in ET, using event-related potentials (ERPs). Twenty-one non-demented patients with ET and 21 age- and sex-matched healthy controls underwent a psychophysiological evaluation. P300 components and the Contingent Negative Variation (CNV) were recorded. The latencies and amplitudes of the P3a and P3b subcomponents and CNV areas were evaluated. Possible correlations between clinical parameters and ERP data were investigated. P3a latency was significantly longer in the ET group (p attentive circuits, while the memory context-updating process appears to be spared. This selective cognitive dysfunction does not appear to interfere with the attentional set linked to the expectancy evaluated during a complex choice-reaction time task, which is preserved in ET. This multitask psychophysiological approach reveals the presence of a peculiar attentional deficit in patients with ET, thus expanding the clinical features of this disease.

  9. Agency attribution: event-related potentials and outcome monitoring.

    Science.gov (United States)

    Bednark, Jeffery G; Franz, Elizabeth A

    2014-04-01

    Knowledge about the effects of our actions is an underlying feature of voluntary behavior. Given the importance of identifying the outcomes of our actions, it has been proposed that the sensory outcomes of self-made actions are inherently different from those of externally caused outcomes. Thus, the outcomes of self-made actions are likely to be more motivationally significant for an agent. We used event-related potentials to investigate the relationship between the perceived motivational significance of an outcome and the attribution of agency in the presence of others. In our experiment, we assessed agency attribution in the presence of another agent by varying the degree of contiguity between participants' self-made actions and the sensory outcome. Specifically, we assessed the feedback correct-related positivity (fCRP) and the novelty P3 measures of an outcome's motivational significance and unexpectedness, respectively. Results revealed that both the fCRP and participants' agency attributions were significantly influenced by action-outcome contiguity. However, when action-outcome contiguity was ambiguous, novelty P3 amplitude was a reliable indicator of agency attribution. Prior agency attributions were also found to influence attribution in trials with ambiguous and low action-outcome contiguity. Participants' use of multiple cues to determine agency is consistent with the cue integration theory of agency. In addition to these novel findings, this study supports growing evidence suggesting that reinforcement processes play a significant role in the sense of agency.

  10. Electrophysiological Evidence of Developmental Changes in the Duration of Auditory Sensory Memory.

    Science.gov (United States)

    Gomes, Hilary; And Others

    1999-01-01

    Investigated developmental change in duration of auditory sensory memory for tonal frequency by measuring mismatch negativity, an electrophysiological component of the auditory event-related potential that is relatively insensitive to attention and does not require a behavioral response. Findings among children and adults suggest that there are…

  11. Iconic Meaning in Music: An Event-Related Potential Study

    Science.gov (United States)

    Luo, Qiuling; Huang, Hong; Mo, Lei

    2015-01-01

    Although there has been extensive research on the processing of the emotional meaning of music, little is known about other aspects of listeners’ experience of music. The present study investigated the neural correlates of the iconic meaning of music. Event-related potentials (ERP) were recorded while a group of 20 music majors and a group of 20 non-music majors performed a lexical decision task in the context of implicit musical iconic meaning priming. ERP analysis revealed a significant N400 effect of congruency in time window 260-510 ms following the onset of the target word only in the group of music majors. Time-course analysis using 50 ms windows indicated significant N400 effects both within the time window 410-460 ms and 460-510 ms for music majors, whereas only a partial N400 effect during time window 410-460 ms was observed for non-music majors. There was also a trend for the N400 effects in the music major group to be stronger than those in the non-major group in the sub-windows of 310-360 ms and 410-460 ms. Especially in the sub-window of 410-460 ms, the topographical map of the difference waveforms between congruent and incongruent conditions revealed different N400 distribution between groups; the effect was concentrated in bilateral frontal areas for music majors, but in central-parietal areas for non-music majors. These results imply probable neural mechanism differences underlying automatic iconic meaning priming of music. Our findings suggest that processing of the iconic meaning of music can be accomplished automatically and that musical training may facilitate the understanding of the iconic meaning of music. PMID:26161561

  12. Iconic Meaning in Music: An Event-Related Potential Study.

    Science.gov (United States)

    Cai, Liman; Huang, Ping; Luo, Qiuling; Huang, Hong; Mo, Lei

    2015-01-01

    Although there has been extensive research on the processing of the emotional meaning of music, little is known about other aspects of listeners' experience of music. The present study investigated the neural correlates of the iconic meaning of music. Event-related potentials (ERP) were recorded while a group of 20 music majors and a group of 20 non-music majors performed a lexical decision task in the context of implicit musical iconic meaning priming. ERP analysis revealed a significant N400 effect of congruency in time window 260-510 ms following the onset of the target word only in the group of music majors. Time-course analysis using 50 ms windows indicated significant N400 effects both within the time window 410-460 ms and 460-510 ms for music majors, whereas only a partial N400 effect during time window 410-460 ms was observed for non-music majors. There was also a trend for the N400 effects in the music major group to be stronger than those in the non-major group in the sub-windows of 310-360 ms and 410-460 ms. Especially in the sub-window of 410-460 ms, the topographical map of the difference waveforms between congruent and incongruent conditions revealed different N400 distribution between groups; the effect was concentrated in bilateral frontal areas for music majors, but in central-parietal areas for non-music majors. These results imply probable neural mechanism differences underlying automatic iconic meaning priming of music. Our findings suggest that processing of the iconic meaning of music can be accomplished automatically and that musical training may facilitate the understanding of the iconic meaning of music.
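
    The time-course analysis described above averages the ERP in consecutive 50 ms windows and compares congruent and incongruent conditions within each window. The sketch below shows that windowing step on hypothetical condition-average waveforms; the electrode, sampling rate and simulated waveform shapes are illustrative.

        import numpy as np

        fs = 500
        times = np.arange(-0.1, 0.8, 1.0 / fs)   # epoch relative to target-word onset, in seconds

        def mean_amplitude(erp, start, end):
            """Mean amplitude between `start` and `end` seconds."""
            mask = (times >= start) & (times < end)
            return erp[mask].mean()

        # Hypothetical condition averages at one electrode (congruent vs incongruent),
        # with an N400-like negativity added to the incongruent waveform.
        rng = np.random.default_rng(2)
        congruent = rng.normal(0.0, 0.3, times.size)
        incongruent = congruent - 2.0 * np.exp(-((times - 0.43) ** 2) / (2 * 0.06 ** 2))

        # Consecutive 50 ms windows spanning 260-510 ms, as in the analysis above.
        for start_ms in np.arange(260, 510, 50):
            start, end = start_ms / 1000.0, (start_ms + 50) / 1000.0
            diff = mean_amplitude(incongruent, start, end) - mean_amplitude(congruent, start, end)
            print(f"{start_ms}-{start_ms + 50} ms: incongruent - congruent = {diff:.2f} uV")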

  13. Robust estimation of event-related potentials via particle filter.

    Science.gov (United States)

    Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito

    2016-03-01

    In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
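
    The state-space approach described in this abstract can be illustrated in code. The following is a minimal bootstrap particle filter for a noisy ERP-like waveform, assuming a random-walk (trend) state model and lumping the autoregressive background EEG into Gaussian observation noise; it is a sketch of the general technique, not the authors' implementation, and all parameter values and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a noisy single-trial "ERP": a smooth P300-like bump plus noise ---
t = np.arange(0, 0.8, 0.004)                              # 250 Hz sampling, 0-800 ms
true_erp = 5.0 * np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)   # Gaussian bump at 300 ms
obs = true_erp + rng.normal(0.0, 3.0, size=t.size)        # background EEG + noise lumped together

# --- Bootstrap particle filter with a random-walk (trend) state model ---
def particle_filter(y, n_particles=400, q=0.5, r=3.0):
    """Estimate a slowly varying trend hidden in noisy samples y.

    q : std of the random-walk process noise (how fast the trend may change)
    r : std of the observation noise
    """
    particles = np.zeros(n_particles)              # initial state guess
    estimate = np.zeros(y.size)
    for k, yk in enumerate(y):
        # 1) propagate particles through the trend (random-walk) model
        particles = particles + rng.normal(0.0, q, size=n_particles)
        # 2) weight particles by the Gaussian likelihood of the observation
        w = np.exp(-0.5 * ((yk - particles) / r) ** 2)
        w /= w.sum()
        # 3) posterior mean is the filtered estimate at this sample
        estimate[k] = np.dot(w, particles)
        # 4) multinomial resampling to avoid weight degeneracy
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return estimate

est = particle_filter(obs)
mse = np.mean((est - true_erp) ** 2)
print(f"MSE of particle-filter estimate: {mse:.3f}")
```

    In the study itself the background activity is modeled explicitly with an autoregressive process, so treating it as white observation noise, as here, is a deliberate simplification.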

  14. Event-related potential correlates of mindfulness meditation competence.

    Science.gov (United States)

    Atchley, R; Klee, D; Memmott, T; Goodrich, E; Wahbeh, H; Oken, B

    2016-04-21

    This cross-sectional study evaluated event-related potentials (ERPs) across three groups: naïve, novice, and experienced meditators as potential physiological markers of mindfulness meditation competence. Electroencephalographic (EEG) data were collected during a target tone detection task and a Breath Counting task. The Breath Counting task served as the mindfulness meditation condition for the novice and experienced meditator groups. Participants were instructed to respond to target tones with a button press in the first task (Tones), and then ignore the primed tones while Breath Counting. The primary outcomes were ERP responses to target tones, namely the N2 and P3, as markers of stimulus discrimination and attention, respectively. As expected, P3 amplitudes elicited by target tones were attenuated within groups during the Breath Counting task in comparison to the Tones task. Meditator groups displayed greater change in peak-to-trough P3 amplitudes, with higher amplitudes during the Tones condition and more pronounced reductions in P3 amplitudes during the Breath Counting meditation task in comparison to the naïve group. Meditators had stronger P3 amplitude responses to target tones when instructed to attend to the tones, and a greater attenuation of P3 amplitudes when instructed to ignore the same tones during the Breath Counting task. This study introduces the idea of identifying ERP markers as a means of measuring mindfulness meditation competence, and results suggest this may be a valid approach. This information has the potential to improve mindfulness meditation interventions by allowing objective assessment of mindfulness meditation quality. Published by Elsevier Ltd.

  15. Event-related potentials as a measure of sleep disturbance: A tutorial review

    Directory of Open Access Journals (Sweden)

    Kenneth Campbell

    2010-01-01

    Full Text Available This article reviews event-related potentials (ERPs), the minute responses of the human brain that are elicited by external auditory stimuli, and how ERPs can be used to measure sleep disturbance. ERPs consist of a series of negative- and positive-going components. A negative component peaking at about 100 ms, N1, is thought to reflect the outcome of a transient detector system, activated by change in the transient energy in an acoustic stimulus. Its output and thus the amplitude of N1 increases as the intensity level of the stimulus is increased and when the rate of presentation is slowed. When the output reaches a certain critical level, operations of the central executive are interrupted and attention is switched to the auditory channel. This switching of attention is thought to be indexed by a later positivity, P3a, peaking between 250 and 300 ms. In order to sleep, consciousness for all but the most relevant of stimuli must be prevented. Thus, during sleep onset and definitive non-rapid eye movement (NREM) sleep, the amplitude of N1 diminishes to near-baseline level. The amplitude of P2, peaking from 180 to 200 ms, is however larger in NREM sleep than in wakefulness. P2 is thought to reflect an inhibitory process protecting sleep from irrelevant disturbance. As stimulus input becomes increasingly obtrusive, the amplitude of P2 also increases. With increasing obtrusiveness, particularly when stimuli are presented slowly, a later large negativity, peaking at about 350 ms, N350, becomes apparent. N350 is unique to sleep, its amplitude also increasing as the stimulus becomes more obtrusive. Many authors postulate that when the N350 reaches a critical amplitude, a very large amplitude N550, a component of the K-Complex, is elicited. The K-Complex can only be elicited during NREM sleep. The P2, N350 and N550 processes are thus conceived as sleep protective mechanisms, activated sequentially as the risk for disturbance increases. During REM sleep …

  16. Separating acoustic deviance from novelty during the first year of life: a review of event-related potential evidence

    Science.gov (United States)

    Kushnerenko, Elena V.; Van den Bergh, Bea R. H.; Winkler, István

    2013-01-01

    Orienting to salient events in the environment is a first step in the development of attention in young infants. Electrophysiological studies have indicated that in newborns and young infants, sounds with widely distributed spectral energy, such as noise and various environmental sounds, as well as sounds widely deviating from their context elicit an event-related potential (ERP) similar to the adult P3a response. We discuss how the maturation of event-related potentials parallels the process of the development of passive auditory attention during the first year of life. Behavioral studies have indicated that the neonatal orientation to high-energy stimuli gradually changes to attending to genuine novelty and other significant events by approximately 9 months of age. In accordance with these changes, in newborns, the ERP response to large acoustic deviance is dramatically larger than that to small and moderate deviations. This ERP difference, however, rapidly decreases within the first months of life, and the differentiation of the ERP response to genuine novelty from that to spectrally rich but repeatedly presented sounds commences during the same period. The relative decrease of the response amplitudes elicited by high-energy stimuli may reflect development of an inhibitory brain network suppressing the processing of uninformative stimuli. Based on data obtained from healthy full-term and pre-term infants as well as from infants at risk for various developmental problems, we suggest that the electrophysiological indices of the processing of acoustic and contextual deviance may be indicative of the functioning of auditory attention, a crucial prerequisite of learning and language development. PMID:24046757

  17. Event-related potential correlates of paranormal ideation and unusual experiences.

    Science.gov (United States)

    Sumich, Alex; Kumari, Veena; Gordon, Evian; Tunstall, Nigel; Brammer, Michael

    2008-01-01

    Separate dimensions of schizotypy have been differentially associated with electrophysiological measures of brain function, and further shown to be modified by sex/gender. We investigated event-related potential (ERP) correlates of two subdimensions of positive schizotypy, paranormal ideation (PI) and unusual experiences (UEs). Seventy-two individuals with no psychiatric diagnosis (men=36) completed self-report measures of UE and PI and performed an auditory oddball task. Average scores for N100, N200 and P300 amplitudes were calculated for left and right anterior, central and posterior electrode sites. Multiple linear regression was used to examine the relationships between the measures of schizotypy and ERPs across the entire sample, as well as separately according to sex. PI was inversely associated with P300 amplitude at left-central sites across the entire sample, and at right-anterior electrodes in women only. Right-anterior P300 and right-posterior N100 amplitudes were negatively associated with UE in women only. Across the entire sample, UE was negatively associated with left-central N100 amplitude, and positively associated with left-anterior N200 amplitude. These results provide support from electrophysiological measures for the fractionation of the positive dimension of schizotypy into subdimensions of PI and UE, and lend indirect support to dimensional or quasidimensional conceptions of psychosis. More specifically, they suggest that PI may be associated with alteration in contextual updating processes, and that UE may reflect altered sensory/early-attention (N100) mechanisms. The sex differences observed are consistent with those previously observed in individuals with schizophrenia.

  18. An Event Related Field Study of Rapid Grammatical Plasticity in Adult Second-Language Learners

    Science.gov (United States)

    Bastarrika, Ainhoa; Davidson, Douglas J.

    2017-01-01

    The present study used magnetoencephalography (MEG) to investigate how Spanish adult learners of Basque respond to morphosyntactic violations after a short period of training on a small fragment of Basque grammar. Participants (n = 17) were exposed to violation and control phrases in three phases (pretest, training, generalization-test). In each phase participants listened to short Basque phrases and judged whether they were correct or incorrect. During the pre-test and generalization-test, participants did not receive any feedback. During the training blocks feedback was provided after each response. We also ran two Spanish control blocks before and after training. We analyzed the event-related magnetic field (ERF) recorded in response to a critical word during all three phases. In the pretest, classification was below chance and we found no electrophysiological differences between violation and control stimuli. Then participants were explicitly taught a Basque grammar rule. From the first training block participants were able to correctly classify control and violation stimuli and an evoked violation response was present. Although the timing of the electrophysiological responses matched participants' L1 effect, the effect size was smaller for L2 and the topographical distribution differed from the L1. While the L1 effect was bilaterally distributed on the auditory sensors, the L2 effect was present at right frontal sensors. During training blocks two and three, the violation-control effect size increased and the topography evolved to a more L1-like pattern. Moreover, this pattern was maintained in the generalization test. We conclude that rapid changes in neuronal responses can be observed in adult learners of a simple morphosyntactic rule, and that native-like responses can be achieved at least in small fragments of second language. PMID:28174530

  19. Touching lips and hearing fingers: effector-specific congruency between tactile and auditory stimulation modulates N1 amplitude and alpha desynchronization.

    Science.gov (United States)

    Shen, Guannan; Meltzoff, Andrew N; Marshall, Peter J

    2018-01-01

    Understanding the interactions between audition and sensorimotor processes is of theoretical importance, particularly in relation to speech processing. Although one current focus in this area is on interactions between auditory perception and the motor system, there has been less research on connections between the auditory and somatosensory modalities. The current study takes a novel approach to this omission by examining specific auditory-tactile interactions in the context of speech and non-speech sound production. Electroencephalography was used to examine brain responses when participants were presented with speech syllables (a bilabial sound /pa/ and a non-labial sound /ka/) or finger-snapping sounds that were simultaneously paired with tactile stimulation of either the lower lip or the right middle finger. Analyses focused on the sensory-evoked N1 in the event-related potential and the extent of alpha band desynchronization elicited by the stimuli. N1 amplitude over fronto-central sites was significantly enhanced when the bilabial /pa/ sound was paired with tactile lip stimulation and when the finger-snapping sound was paired with tactile stimulation of the finger. Post-stimulus alpha desynchronization at central sites was also enhanced when the /pa/ sound was accompanied by tactile stimulation of the lip. These novel findings indicate that neural aspects of somatosensory-auditory interactions are influenced by the congruency between the location of the bodily touch and the bodily origin of a perceived sound.

  20. Effects of alcohol on attention orienting and dual-task performance during simulated driving: an event-related potential study.

    Science.gov (United States)

    Wester, Anne E; Verster, Joris C; Volkerts, Edmund R; Böcker, Koen B E; Kenemans, J Leon

    2010-09-01

    Driving is a complex task and is susceptible to inattention and distraction. Moreover, alcohol has a detrimental effect on driving performance, possibly due to alcohol-induced attention deficits. The aim of the present study was to assess the effects of alcohol on simulated driving performance and attention orienting and allocation, as assessed by event-related potentials (ERPs). Thirty-two participants completed two test runs in the Divided Attention Steering Simulator (DASS) with blood alcohol concentrations (BACs) of 0.00%, 0.02%, 0.05%, 0.08% and 0.10%. Sixteen participants performed the second DASS test run with a passive auditory oddball to assess alcohol effects on involuntary attention shifting. Sixteen other participants performed the second DASS test run with an active auditory oddball to assess alcohol effects on dual-task performance and active attention allocation. Dose-dependent impairments were found for reaction times, the number of misses and steering error, even more so in dual-task conditions, especially in the active oddball group. ERP amplitudes to novel irrelevant events were also attenuated in a dose-dependent manner. The P3b amplitude to deviant target stimuli decreased with blood alcohol concentration only in the dual-task condition. It is concluded that alcohol increases distractibility and interference from secondary task stimuli, as well as reduces attentional capacity and dual-task integrality.

  1. Euclidean distance and Kolmogorov-Smirnov analyses of multi-day auditory event-related potentials: a longitudinal stability study

    Science.gov (United States)

    Durato, M. V.; Albano, A. M.; Rapp, P. E.; Nawang, S. A.

    2015-06-01

    The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs—that is, by treating the ERP waveform as a time series and then applying Euclidean distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. Nonlinear dynamical analyses show that, in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable—as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test.
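
    Both similarity measures named here are straightforward to reproduce once the session-average ERPs are available as equal-length vectors. The following is a minimal sketch, assuming (as one plausible reading of the method) that the two-sample Kolmogorov-Smirnov test is applied to the amplitude distributions of the two waveforms; the data are simulated placeholders, and `scipy.stats.ks_2samp` is the standard two-sample test.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Illustrative session-average ERPs (same subject, two recording days),
# sampled at 250 Hz over a 600 ms epoch.
t = np.arange(0, 0.6, 0.004)
erp_day1 = 4.0 * np.exp(-0.5 * ((t - 0.30) / 0.06) ** 2) + rng.normal(0, 0.3, t.size)
erp_day2 = 4.2 * np.exp(-0.5 * ((t - 0.31) / 0.06) ** 2) + rng.normal(0, 0.3, t.size)

# Euclidean distance between the two waveforms treated as vectors
euclid = np.linalg.norm(erp_day1 - erp_day2)

# Two-sample Kolmogorov-Smirnov test on the amplitude distributions
ks_stat, p_value = ks_2samp(erp_day1, erp_day2)

print(f"Euclidean distance: {euclid:.2f}")
print(f"KS statistic: {ks_stat:.3f}, p = {p_value:.3f}")
```

    Small Euclidean distances and non-significant KS statistics across session pairs would be read, as in the abstract, as evidence of longitudinal stability.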

  2. Euclidean distance and Kolmogorov-Smirnov analyses of multi-day auditory event-related potentials: a longitudinal stability study

    International Nuclear Information System (INIS)

    Durato, M V; Nawang, S A; Albano, A M; Rapp, P E

    2015-01-01

    The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs—that is, by treating the ERP waveform as a time series and then applying Euclidean distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. Nonlinear dynamical analyses show that, in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable—as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test. (paper)

  3. Visual and Auditory Input in Second-Language Speech Processing

    Science.gov (United States)

    Hardison, Debra M.

    2010-01-01

    The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…

  4. Rapid Auditory System Adaptation Using a Virtual Auditory Environment

    Directory of Open Access Journals (Sweden)

    Gaëtan Parseihian

    2011-10-01

    Full Text Available Various studies have highlighted plasticity of the auditory system induced by visual stimuli, limiting the trained field of perception. The aim of the present study is to investigate auditory system adaptation using an audio-kinesthetic platform. Participants were placed in a Virtual Auditory Environment allowing the association of the physical position of a virtual sound source with an alternate set of acoustic spectral cues, or Head-Related Transfer Function (HRTF), through the use of a tracked ball manipulated by the subject. This set-up has the advantage of not being limited to the visual field while also offering a natural perception-action coupling through the constant awareness of one's hand position. The adaptation process to non-individualized HRTFs was realized through a spatial search game application. A total of 25 subjects participated, consisting of subjects presented with modified cues using non-individualized HRTFs and a control group using individually measured HRTFs to account for any learning effect due to the game itself. The training game lasted 12 minutes and was repeated over 3 consecutive days. Adaptation effects were measured with repeated localization tests. Results showed a significant performance improvement for vertical localization and a significant reduction in the front/back confusion rate after 3 sessions.

  5. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  6. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  7. Visual event-related potentials to biological motion stimuli in autism spectrum disorders

    Science.gov (United States)

    Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan

    2014-01-01

    Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits of cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological motion specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated with biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher order processing deficits in ASD. PMID:23887808

  8. Event-related potential correlates of suspicious thoughts in individuals with schizotypal personality features.

    Science.gov (United States)

    Li, Xue-bing; Huang, Jia; Cheung, Eric F C; Gong, Qi-yong; Chan, Raymond C K

    2011-01-01

    Suspiciousness is a common feature of schizophrenia. However, suspicious thoughts are also commonly experienced by the general population. This study aimed to examine the underlying neural mechanism of suspicious thoughts in individuals with and without schizotypal personality disorder (SPD)-proneness, using an event-related potential (ERP) paradigm. Electroencephalography (EEG) was recorded when the "feeling of being seen through" was evoked in the participants. The findings showed a prominent positive deflection of the difference wave within the time window 250-400 ms after stimulus presentation in both SPD-prone and non-SPD-prone groups. Furthermore, the P3 amplitude was significantly reduced in the SPD-prone group compared to the non-SPD-prone group. The current density analysis also indicated hypoactivity in both frontal and temporal regions in the SPD-prone group, suggesting that the frontotemporal cortical network may play a role in the onset of suspicious thoughts. The P3 of the difference wave was inversely correlated with the cognitive-perception factor and the suspiciousness/paranoid ideation trait, which provided preliminary electrophysiological evidence for the association of suspiciousness with SPD features.

  9. Brain Signals of Face Processing as Revealed by Event-Related Potentials

    Directory of Open Access Journals (Sweden)

    Ela I. Olivares

    2015-01-01

    Full Text Available We analyze the functional significance of different event-related potentials (ERPs) as electrophysiological indices of face perception and face recognition, according to cognitive and neurofunctional models of face processing. Initially, the processing of faces seems to be supported by early extrastriate occipital cortices and revealed by modulations of the occipital P1. This early response is thought to reflect the detection of certain primary structural aspects indicating the presence grosso modo of a face within the visual field. The posterior-temporal N170 is more sensitive to the detection of faces as complex-structured stimuli and, therefore, to the presence of its distinctive organizational characteristics prior to within-category identification. In turn, the relatively late and probably more rostrally generated N250r and N400-like responses might respectively indicate processes of access and retrieval of face-related information, which is stored in long-term memory (LTM). New methods of analysis of electrophysiological and neuroanatomical data, namely, dynamic causal modeling, single-trial and time-frequency analyses, are highly recommended to advance in the knowledge of those brain mechanisms concerning face processing.

  10. Sensitivity to structure in action sequences: An infant event-related potential study.

    Science.gov (United States)

    Monroy, Claire D; Gerson, Sarah A; Domínguez-Martínez, Estefanía; Kaduk, Katharina; Hunnius, Sabine; Reid, Vincent

    2017-05-06

    Infants are sensitive to structure and patterns within continuous streams of sensory input. This sensitivity relies on statistical learning, the ability to detect predictable regularities in spatial and temporal sequences. Recent evidence has shown that infants can detect statistical regularities in action sequences they observe, but little is known about the neural processes that give rise to this ability. In the current experiment, we combined electroencephalography (EEG) with eye-tracking to identify electrophysiological markers that indicate whether 8-11-month-old infants detect violations of learned regularities in action sequences, and to relate these markers to behavioral measures of anticipation during learning. In a learning phase, infants observed an actor performing a sequence featuring two deterministic pairs embedded within an otherwise random sequence. Thus, the first action of each pair was predictive of what would occur next. One of the pairs caused an action-effect, whereas the second did not. In a subsequent test phase, infants observed another sequence that included deviant pairs, violating the previously observed action pairs. Event-related potential (ERP) responses were analyzed and compared between the deviant and the original action pairs. Findings reveal that infants demonstrated a greater Negative central (Nc) ERP response to the deviant actions for the pair that caused the action-effect, which was consistent with their visual anticipations during the learning phase. Findings are discussed in terms of the neural and behavioral processes underlying perception and learning of structured action sequences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Early access to lexical-level phonological representations of Mandarin word-forms : evidence from auditory N1 habituation

    NARCIS (Netherlands)

    Yue, Jinxing; Alter, Kai; Howard, David; Bastiaanse, Roelien

    2017-01-01

    An auditory habituation design was used to investigate whether lexical-level phonological representations in the brain can be rapidly accessed after the onset of a spoken word. We studied the N1 component of the auditory event-related electrical potential, and measured the amplitude decrements of N1

  12. Neuromechanistic Model of Auditory Bistability.

    Directory of Open Access Journals (Sweden)

    James Rankin

    2015-11-01

    Full Text Available Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
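
    The published network model is feature-tuned, driven by A1-like pulsatile inputs, and includes NMDA-mediated recurrent excitation; the sketch below keeps only the generic ingredients named in the abstract (mutual inhibition, slow adaptation, noise) in a two-unit firing-rate model whose units stand for the integrated and segregated interpretations. All parameter values are illustrative, and the resulting dominance statistics depend on their tuning.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x, theta=0.2, k=0.1):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + np.exp(-(x - theta) / k))

def simulate_competition(t_max=300.0, dt=0.01, drive=1.0,
                         beta=0.9,    # mutual inhibition strength
                         g=0.6,       # adaptation strength
                         tau_r=0.1,   # rate time constant (s)
                         tau_a=2.0,   # adaptation time constant (s)
                         sigma=0.05): # input noise level
    """Two-unit competition model; returns dominance durations for each unit."""
    r = np.array([0.1, 0.0])   # firing rates (slight asymmetry to start)
    a = np.zeros(2)            # slow adaptation variables
    durations = ([], [])
    winner, t_switch = None, 0.0
    for step in range(int(t_max / dt)):
        t = step * dt
        # each unit is excited by the stimulus drive, inhibited by the other
        # unit, weakened by its own adaptation, and jittered by noise
        inp = drive - beta * r[::-1] - g * a + sigma * rng.standard_normal(2)
        r += dt * (-r + f(inp)) / tau_r
        a += dt * (-a + r) / tau_a
        diff = r[0] - r[1]
        if abs(diff) > 0.1:                      # count only clear dominance
            current = 0 if diff > 0 else 1
            if winner is None:
                winner, t_switch = current, t
            elif current != winner:              # a perceptual switch occurred
                durations[winner].append(t - t_switch)
                winner, t_switch = current, t
    return durations

dur0, dur1 = simulate_competition()
for name, d in (("unit 0", dur0), ("unit 1", dur1)):
    if d:
        print(f"{name}: {len(d)} dominance phases, mean {np.mean(d):.2f} s")
    else:
        print(f"{name}: no completed dominance phases")
```

    In models of this family, the adaptation variable slowly undermines the dominant unit until the suppressed unit escapes, producing the irregular alternations described in the abstract.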

  13. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard

    2010-03-01

    A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate with cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing. 2009 Elsevier B.V. All rights reserved.

  14. Neurofeedback-Based Enhancement of Single-Trial Auditory Evoked Potentials: Treatment of Auditory Verbal Hallucinations in Schizophrenia.

    Science.gov (United States)

    Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas

    2018-03-01

    Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component assumingly reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre-/postcomparison (t(4) = 2.71, P = .054); however, no significant differences were found in specific hallucination related symptoms (t(7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions (r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group (r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback.

  15. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen eAlink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment sounds were presented consecutively at four speaker locations inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.

  16. Blind Separation of Event-Related Brain Responses into Independent Components

    National Research Council Canada - National Science Library

    Makeig, Scott

    1996-01-01

    .... We report here a method for the blind separation of event-related brain responses into spatially stationary and temporally independent subcomponents using an Independent Component Analysis algorithm...
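
    The record is truncated, but the core idea, decomposing multichannel event-related responses into spatially stationary, temporally independent components, can be illustrated with a generic ICA implementation. The sketch below uses scikit-learn's FastICA on simulated data; the sources, mixing matrix, and channel count are stand-ins, not the authors' method or recordings.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)

# Simulate two temporally independent "brain" sources over a 1 s epoch (250 samples)
t = np.linspace(0.0, 1.0, 250)
s1 = np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)      # ERP-like component peaking at 300 ms
s2 = np.sin(2 * np.pi * 10 * t)                  # ongoing 10 Hz alpha-like activity
S = np.c_[s1, s2]                                # shape (n_samples, n_sources)

# Mix the sources into 8 "electrodes" with a fixed (spatially stationary) mixing matrix
A = rng.normal(size=(2, 8))
X = S @ A + 0.05 * rng.normal(size=(250, 8))     # observed multichannel data

# Blind separation: recover independent time courses and their sensor projections
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                     # estimated source time courses
maps = ica.mixing_                               # columns = estimated sensor maps

# Components are recovered up to permutation, scaling, and sign
corr = np.corrcoef(S.T, S_est.T)[:2, 2:]
print(np.round(np.abs(corr), 2))
```

    Each recovered component comes with a fixed sensor projection and an independent time course, which is the sense in which the decomposition is spatially stationary and temporally independent.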

  17. Is the auditory sensory memory sensitive to visual information?

    Science.gov (United States)

    Besle, Julien; Fort, Alexandra; Giard, Marie-Hélène

    2005-10-01

    The mismatch negativity (MMN) component of auditory event-related brain potentials can be used as a probe to study the representation of sounds in auditory sensory memory (ASM). Yet it has been shown that an auditory MMN can also be elicited by an illusory auditory deviance induced by visual changes. This suggests that some visual information may be encoded in ASM and is accessible to the auditory MMN process. It is not known, however, whether visual information affects ASM representation for any audiovisual event or whether this phenomenon is limited to specific domains in which strong audiovisual illusions occur. To highlight this issue, we have compared the topographies of MMNs elicited by non-speech audiovisual stimuli deviating from audiovisual standards on the visual, the auditory, or both dimensions. Contrary to what occurs with audiovisual illusions, each unimodal deviant elicited sensory-specific MMNs, and the MMN to audiovisual deviants included both sensory components. The visual MMN was, however, different from a genuine visual MMN obtained in a visual-only control oddball paradigm, suggesting that auditory and visual information interacts before the MMN process occurs. Furthermore, the MMN to audiovisual deviants was significantly different from the sum of the two sensory-specific MMNs, showing that the processes of visual and auditory change detection are not completely independent.

  18. Changes in the Adult Vertebrate Auditory Sensory Epithelium After Trauma

    Science.gov (United States)

    Oesterle, Elizabeth C.

    2012-01-01

    Auditory hair cells transduce sound vibrations into membrane potential changes, ultimately leading to changes in neuronal firing and sound perception. This review provides an overview of the characteristics and repair capabilities of traumatized auditory sensory epithelium in the adult vertebrate ear. Injured mammalian auditory epithelium repairs itself by forming permanent scars but is unable to regenerate replacement hair cells. In contrast, injured non-mammalian vertebrate ear generates replacement hair cells to restore hearing functions. Non-sensory support cells within the auditory epithelium play key roles in the repair processes. PMID:23178236

  19. Verbal Learning and Memory in Cannabis and Alcohol Users: An Event-Related Potential Investigation

    Directory of Open Access Journals (Sweden)

    Janette L. Smith

    2017-12-01

    Full Text Available Aims: Long-term heavy use of cannabis and alcohol is known to be associated with memory impairments. In this study, we used event-related potentials to examine verbal learning and memory processing in a commonly used behavioral task. Method: We conducted two studies: first, a small pilot study of adolescent males, comprising 13 Drug-Naive Controls (DNC), 12 heavy drinkers (HD), and 8 cannabis users (CU). Second, a larger study of young adults, comprising 45 DNC (20 female), 39 HD (16 female), and 20 CU (9 female). In both studies, participants completed a modified verbal learning task (the Rey Auditory Verbal Learning Test, RAVLT) while brain electrical activity was recorded. ERPs were calculated for words which were subsequently remembered vs. those which were not remembered, and for presentations of learnt words, previously seen words, and new words in a subsequent recognition test. Pre-planned principal components analyses (PCA) were used to quantify the ERP components in these recall and recognition phases separately for each study. Results: Memory performance overall was slightly lower than published norms using the standardized RAVLT delivery, but was generally similar and showed the expected changes over trials. Few differences in performance were observed between groups; a notable exception was markedly poorer delayed recall in HD relative to DNC (Study 2). PCA identified components expected from prior research using other memory tasks. At encoding, there were no between-group differences in the usual P2 recall effect (larger for recalled than not-recalled words). However, alcohol-related differences were observed in a larger P540 (indexing recollection) in HD than DNC, and cannabis-related differences were observed in a smaller N340 (indexing familiarity) and a lack of previously seen > new words effect for P540 in Study 2. Conclusions: This study is the first examination of ERPs in the RAVLT in healthy control participants, as well as substance…
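
    The component quantification mentioned here rests on principal components analysis of the ERP waveforms. A minimal sketch of a temporal PCA (rows = waveforms, columns = time points) on simulated data follows; it omits the varimax/promax rotation often applied in ERP PCA, and the component shapes, labels, and dimensions are illustrative, not taken from the study.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Simulated ERP waveforms: 60 observations (e.g., subjects x conditions), 300 time points
n_obs, n_time = 60, 300
t = np.linspace(0.0, 1.2, n_time)
early = np.exp(-0.5 * ((t - 0.20) / 0.03) ** 2)    # early positivity near 200 ms
late = np.exp(-0.5 * ((t - 0.54) / 0.08) ** 2)     # late positivity near 540 ms
scores_true = rng.normal(size=(n_obs, 2))
waveforms = scores_true @ np.vstack([early, late]) + 0.2 * rng.normal(size=(n_obs, n_time))

# Temporal PCA: loadings give component time courses, scores quantify each waveform
pca = PCA(n_components=2)
component_scores = pca.fit_transform(waveforms)    # one score per waveform per component
loadings = pca.components_                         # shape (2, n_time)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
print("peak latency of component 1 (s):", round(t[np.argmax(np.abs(loadings[0]))], 2))
```

    The component scores, rather than peak amplitudes at single electrodes, are then what gets compared across groups and conditions.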

  20. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting.

    Science.gov (United States)

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds, and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.

  1. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting

    Directory of Open Access Journals (Sweden)

    Kongmeng Liew

    2018-02-01

    Full Text Available Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds, and collectively voted to create the shape of a visual graphic, presented as part of the audio–visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.

  2. Auditory comprehension: from the voice up to the single word level

    OpenAIRE

    Jones, Anna Barbara

    2016-01-01

    Auditory comprehension, the ability to understand spoken language, consists of a number of different auditory processing skills. In the five studies presented in this thesis I investigated both intact and impaired auditory comprehension at different levels: voice versus phoneme perception, as well as single word auditory comprehension in terms of phonemic and semantic content. In the first study, using sounds from different continua of ‘male’-/pæ/ to ‘female’-/tæ/ and ‘male’...

  3. Perceptual processing of a complex auditory context

    DEFF Research Database (Denmark)

    Quiroga Martinez, David Ricardo; Hansen, Niels Christian; Højlund, Andreas

    The mismatch negativity (MMN) is a brain response elicited by deviants in a series of repetitive sounds. It reflects the perception of change in low-level sound features and reliably measures perceptual auditory memory. However, most MMN studies use simple tone patterns as stimuli, failing...

  4. Predictive uncertainty in auditory sequence processing

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Pearce, Marcus T

    2014-01-01

    in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models...

  5. The attenuation of auditory neglect by implicit cues.

    Science.gov (United States)

    Coleman, A Rand; Williams, J Michael

    2006-09-01

    This study examined implicit semantic and rhyming cues on perception of auditory stimuli among nonaphasic participants who suffered a lesion of the right cerebral hemisphere and auditory neglect of sound perceived by the left ear. Because language represents an elaborate processing of auditory stimuli and the language centers were intact among these patients, it was hypothesized that interactive verbal stimuli presented in a dichotic manner would attenuate neglect. The selected participants were administered an experimental dichotic listening test composed of six types of word pairs: unrelated words, synonyms, antonyms, categorically related words, compound words, and rhyming words. Presentation of word pairs that were semantically related resulted in a dramatic reduction of auditory neglect. Dichotic presentations of rhyming words exacerbated auditory neglect. These findings suggest that the perception of auditory information is strongly affected by the specific content conveyed by the auditory system. Language centers will process a degraded stimulus that contains salient language content. A degraded auditory stimulus is neglected if it is devoid of content that activates the language centers or other cognitive systems. In general, these findings suggest that auditory neglect involves a complex interaction of intact and impaired cerebral processing centers with content that is selectively processed by these centers.

  6. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

    .... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and insure that what is said is intelligible...

  7. Age, intelligence, and event-related brain potentials during late childhood: A longitudinal study.

    NARCIS (Netherlands)

    Molenaar, P.C.M.; Molen, M.W.; Stauder, J.E.A.

    2003-01-01

    The relation between event-related brain activity, age, and intelligence was studied using a visual oddball task presented longitudinally to girls at 9, 10, and 11 years of age. The event-related brain potential (ERP) components showed typical gradual decrements in latency and amplitude with age.

  8. Perceptual Plasticity for Auditory Object Recognition

    Science.gov (United States)

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  9. A longitudinal, event-related potential pilot study of adult obsessive-compulsive disorder with 1-year follow-up

    Directory of Open Access Journals (Sweden)

    Yamamuro K

    2016-09-01

    Full Text Available Aim: Earlier brain imaging research studies have suggested that brain abnormalities in obsessive-compulsive disorder (OCD) normalize as clinical symptoms improve. However, although many studies have investigated event-related potentials (ERPs) in patients with OCD compared with healthy control subjects, it is currently unknown whether ERP changes reflect pharmacological and psychotherapeutic effects. As such, the current study examined the neurocognitive components of OCD to elucidate the pathophysiological abnormalities involved in the disorder, including the frontal-subcortical circuits. Methods: The Yale-Brown Obsessive-Compulsive Scale was used to evaluate 14 adult patients with OCD. The present study also included ten age-, sex-, and IQ-matched controls. The P300 and mismatch negativity (MMN) components during an auditory oddball task were measured at baseline for both groups and after 1 year of treatment for patients with OCD. Results: Compared with controls, P300 amplitude was attenuated in the OCD group at Cz and C4 at baseline. Pharmacotherapy and psychotherapy treatment for 1 year reduced OCD symptomology. P300 amplitude after 1 year of treatment was significantly increased, indicating normalization compared with baseline at Fz, Cz, C3, and C4. We found no differences in P300 latency, MMN amplitude, or MMN latency between baseline and after 1 year of treatment. Conclusion: ERPs may be a useful tool for evaluating pharmacological and cognitive behavioral therapy in adult patients with OCD. Keywords: obsessive-compulsive disorder, event-related potentials, P300, mismatch negativity, improvement

  10. Cognitive mechanisms associated with auditory sensory gating

    Science.gov (United States)

    Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.

    2016-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
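
    In the paired-stimulus (S1-S2) paradigm, sensory gating is commonly summarized as the ratio of the P50 amplitude evoked by the second click over that evoked by the first, with smaller ratios indicating stronger gating. A minimal sketch on illustrative averaged epochs follows; the scoring rule (largest positive deflection 40-80 ms post-stimulus) and the data are assumptions for demonstration, not the authors' exact procedure.

```python
import numpy as np

def p50_amplitude(epoch, times, window=(0.04, 0.08)):
    """Largest positive deflection in the P50 window of an averaged epoch (µV)."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(np.max(epoch[mask]))

# Illustrative averaged epochs time-locked to S1 and S2 (250 Hz, 0-200 ms)
times = np.arange(0, 0.2, 0.004)
s1_avg = 2.0 * np.exp(-0.5 * ((times - 0.055) / 0.008) ** 2)   # robust P50 to S1
s2_avg = 0.6 * np.exp(-0.5 * ((times - 0.055) / 0.008) ** 2)   # suppressed P50 to S2

gating_ratio = p50_amplitude(s2_avg, times) / p50_amplitude(s1_avg, times)
print(f"P50 gating ratio (S2/S1): {gating_ratio:.2f}")   # lower = stronger gating
```

    It is this per-participant ratio that studies like the one above correlate with cognitive measures such as latent inhibition and continuous performance accuracy.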

  11. Minimal effects of visual memory training on the auditory performance of adult cochlear implant users

    Science.gov (United States)

    Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie

    2014-01-01

    Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087

  12. Retrieving self-vocalized information: An event-related potential (ERP) study on the effect of retrieval orientation.

    Science.gov (United States)

    Rosburg, Timm; Johansson, Mikael; Sprondel, Volker; Mecklinger, Axel

    2014-11-18

    Retrieval orientation refers to a pre-retrieval process and conceptualizes the specific form of processing that is applied to a retrieval cue. In the current event-related potential (ERP) study, we sought to find evidence for an involvement of the auditory cortex when subjects attempt to retrieve vocalized information, and hypothesized that adopting retrieval orientation would be beneficial for retrieval accuracy. During study, participants saw object words that they subsequently vocalized or visually imagined. At test, participants had to identify object names of one study condition as targets and to reject object names of the second condition together with new items. Target category switched after half of the test trials. Behaviorally, participants responded less accurately and more slowly to targets of the vocalize condition than to targets of the imagine condition. ERPs to new items varied at a single left electrode (T7) between 500 and 800 ms, indicating a moderate retrieval orientation effect in the subject group as a whole. However, whereas the effect was strongly pronounced in participants with high retrieval accuracy, it was absent in participants with low retrieval accuracy. A current source density (CSD) mapping of the retrieval orientation effect indicated a source over left temporal regions. Independently of retrieval accuracy, the ERP retrieval orientation effect was surprisingly also modulated by test order. Findings are suggestive of an involvement of the auditory cortex in retrieval attempts of vocalized information and confirm that adopting retrieval orientation is potentially beneficial for retrieval accuracy. The effects of test order on retrieval-related processes might reflect a stronger focus on the newness of items in the more difficult test condition when participants started with this condition. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  14. Anticipation of electric shocks modulates low beta power and event-related fields during memory encoding.

    Science.gov (United States)

    Bauch, Eva M; Bunzeck, Nico

    2015-09-01

    In humans, the temporal and oscillatory dynamics of pain anticipation and its effects on long-term memory are largely unknown. Here, we investigated this open question by using a previously established behavioral paradigm in combination with magnetoencephalography (MEG). Healthy human subjects encoded a series of scene images, which was combined with cues predicting an aversive electric shock with different probabilities (0.2, 0.5 or 0.8). After encoding, memory for the studied images was tested using a remember/know recognition task. Behaviorally, pain anticipation did not modulate recollection-based recognition memory per se, but interacted with the perceived unpleasantness of the electric shock [visual analogue scale rating from 1 (not unpleasant) to 10 (highly unpleasant)]. More precisely, the relationship between pain anticipation and recollection followed an inverted U-shaped function the more unpleasant the shocks were rated by a subject. At the physiological level, this quadratic effect was mimicked in the event-related magnetic fields associated with successful memory formation ('DM-effect') ∼450 ms after image onset at left frontal sensors. Importantly, across all subjects, shock anticipation modulated oscillatory power in the low beta frequency range (13-20 Hz) in a linear fashion at left temporal sensors. Taken together, our findings indicate that beta oscillations provide a generic mechanism underlying pain anticipation; the effect on subsequent long-term memory, on the other hand, is much more variable and depends on the level of individual pain perception. As such, our findings give new and important insights into how aversive motivational states can drive memory formation. Copyright © 2015 Elsevier Inc. All rights reserved.
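
    The low-beta measure described above can be illustrated with a minimal sketch (not the authors' pipeline): band-limited power for one sensor is estimated from a Welch spectrum and integrated over 13-20 Hz. The sampling rate, epoch length, and condition labels below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, fmin=13.0, fmax=20.0):
    """Integrated spectral power of one MEG/EEG channel in a frequency band.

    signal : 1-D array, single-trial or epoch-averaged time series
    fs     : sampling rate in Hz
    """
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), int(fs)))
    mask = (freqs >= fmin) & (freqs <= fmax)
    # Integrate the power spectral density over the selected band
    return np.trapz(psd[mask], freqs[mask])

# Toy example: low-beta (13-20 Hz) power for three shock-probability conditions
fs = 600
rng = np.random.default_rng(0)
for p_shock in (0.2, 0.5, 0.8):
    trial = rng.normal(size=2 * fs)   # stand-in for a 2-s sensor time series
    print(p_shock, band_power(trial, fs))
```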

  15. How Social Ties Influence Consumer: Evidence from Event-Related Potentials.

    Directory of Open Access Journals (Sweden)

    Jing Luan

    Full Text Available A considerable amount of marketing research has reported that consumers are more saliently influenced by friends (strong social ties) than by acquaintances and strangers (weak social ties). To shed light on the neural and psychological processes underlying this phenomenon, in this study we designed an amended S1-S2 paradigm (product-[reviewer-review]) that is based on realistic consumer purchase experiences. After receiving all given information (product, reviewer, review), participants were required to state their purchase intentions. The neurocognitive and emotional processes related to friend and stranger stimuli were delineated to suggest how social ties influence consumers during their shopping processes. Larger P2 (fronto-central scalp areas) and P3 (central and posterior-parietal scalp areas) components were successfully elicited under the stranger condition. These findings demonstrate that the cognitive and emotional processing of friend and stranger stimuli occurs at distinct stages of neural activity and can be indicated by the P2 and P3 components. Electrophysiological data also support the hypothesis that different magnitudes and strengths of neural and emotional processing underlie the friend and stranger effect in the context of consumer purchases. During this process, the perception of stimuli evoked the P2; subsequently, emotional processing and attention modulation were activated and indicated by the P2 and P3. The friend-dominated phenomenon can be interpreted as the result of distinctive magnitudes of neurocognitive and emotional processing, which suggests that psychological and emotional factors can guide consumer decision making. This study indicates that event-related potential (ERP) methodology is likely to be a more sensitive method for investigating consumer behavior. From the perspectives of management and marketing, our findings show that the P2 and P3 components can be employed as indicators to probe the influential factors of consumer purchase intentions.

  16. Using event related potentials to identify a user's behavioural intention aroused by product form design.

    Science.gov (United States)

    Ding, Yi; Guo, Fu; Zhang, Xuefeng; Qu, Qingxing; Liu, Weilin

    2016-07-01

    The capacity of product form to arouse a user's behavioural intention plays a decisive role in subsequent user experience and even in purchase decisions, yet traditional methods rarely give a full understanding of the user experience evoked by product form, especially the feeling of anticipated use of a product. Behavioural intention aroused by product form designs has not yet been investigated electrophysiologically. Hence, event-related potentials (ERPs) were used to explore the process of behavioural intention while users browsed different smartphone form designs; brand and price were not taken into account so that the brain activity evoked by the variety of product forms could be studied in isolation. Smartphone pictures associated with different anticipated user experiences were displayed randomly with equal probability. Participants were asked to click the left mouse button when a picture gave them a feeling of behavioural intention to interact with it. The brain signal of each participant was recorded with Curry 7.0. The results show that pictures able to arouse participants' behavioural intention for further experience evoked enhanced N300 and LPPs (late positive potentials) in central-parietal, parietal and occipital regions, and the scalp topography shows that these regions were more strongly activated. The results indicate that differences in ERPs can reflect the neural activity associated with whether or not a behavioural intention is formed. Moreover, the amplitude of ERPs in the corresponding brain areas can be used to measure user experience. Exploring the neural correlates of behavioural intention provides an accurate measurement of users' perception, helps marketers know which products can arouse users' behavioural intention, and may serve as an evaluation indicator for product design. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  17. How Social Ties Influence Consumer: Evidence from Event-Related Potentials.

    Science.gov (United States)

    Luan, Jing; Yao, Zhong; Bai, Yan

    2017-01-01

    A considerable amount of marketing research has reported that consumers are more saliently influenced by friends (strong social ties) than by acquaintances and strangers (weak social ties). To shed light on the neural and psychological processes underlying this phenomenon, in this study we designed an amended S1-S2 paradigm (product-[reviewer-review]) that is based on realistic consumer purchase experiences. After receiving all given information (product, reviewer, review), participants were required to state their purchase intentions. The neurocognitive and emotional processes related to friend and stranger stimuli were delineated to suggest how social ties influence consumers during their shopping processes. Larger P2 (fronto-central scalp areas) and P3 (central and posterior-parietal scalp areas) components were successfully elicited under the stranger condition. These findings demonstrate that the cognitive and emotional processing of friend and stranger stimuli occurs at distinct stages of neural activity and can be indicated by the P2 and P3 components. Electrophysiological data also support the hypothesis that different magnitudes and strengths of neural and emotional processing underlie the friend and stranger effect in the context of consumer purchases. During this process, the perception of stimuli evoked the P2; subsequently, emotional processing and attention modulation were activated and indicated by the P2 and P3. The friend-dominated phenomenon can be interpreted as the result of distinctive magnitudes of neurocognitive and emotional processing, which suggests that psychological and emotional factors can guide consumer decision making. This study indicates that event-related potential (ERP) methodology is likely to be a more sensitive method for investigating consumer behavior. From the perspectives of management and marketing, our findings show that the P2 and P3 components can be employed as indicators to probe the influential factors of consumer purchase intentions.

  18. Cerebral Responses to Vocal Attractiveness and Auditory Hallucinations in Schizophrenia: A Functional MRI Study

    Directory of Open Access Journals (Sweden)

    Michihiko Koeda

    2013-05-01

    Full Text Available Impaired self-monitoring and abnormalities of cognitive bias have been implicated as cognitive mechanisms of hallucination; regions fundamental to these processes, including the inferior frontal gyrus (IFG) and superior temporal gyrus (STG), are abnormally activated in individuals who hallucinate. A recent study showed activation in IFG-STG to be modulated by auditory attractiveness, but no study has investigated whether these IFG-STG activations are impaired in schizophrenia. We aimed to clarify the cerebral function underlying the perception of auditory attractiveness in schizophrenia patients. Cerebral activation was examined in 18 schizophrenia patients and 18 controls performing a Favourability Judgment Task (FJT) and a Gender Differentiation Task (GDT) for pairs of greetings using event-related functional MRI. A full-factorial analysis revealed that the main effect of task was associated with activation of the left IFG and STG. The main effect of group revealed less activation of the left STG in schizophrenia compared with controls, whereas significantly greater activation in schizophrenia than in controls was revealed at the left middle frontal gyrus (MFG), right temporo-parietal junction (TPJ), right occipital lobe, and right amygdala (p<0.05, FDR-corrected). A significant positive correlation was observed at the right TPJ and right MFG between cerebral activation under the FJT minus GDT contrast and the score of hallucinatory behaviour on the Positive and Negative Symptom Scale. Findings of hypo-activation in the left STG could designate brain dysfunction in accessing vocal attractiveness in schizophrenia, whereas hyper-activation in the right TPJ and MFG may reflect the process of mentalizing another person's behaviour in auditory hallucination through an abnormality of cognitive bias.

  19. Memory for sound, with an ear toward hearing in complex auditory scenes.

    Science.gov (United States)

    Snyder, Joel S; Gregg, Melissa K

    2011-10-01

    An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.

  20. Introducing the event related fixed interval area (ERFIA) multilevel technique: a method to analyze the complete epoch of event-related potentials at single trial level

    NARCIS (Netherlands)

    Vossen, C.J.; Vossen, H.G.M.; Marcus, M.A.E.; van Os, J.; Lousberg, R.

    2013-01-01

    In analyzing time-locked event-related potentials (ERPs), many studies have focused on specific peaks and their differences between experimental conditions. In theory, each latency point after a stimulus contains potentially meaningful information, regardless of whether it is peak-related. Based on
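
    As a rough illustration of the fixed-interval idea named in the title (not the authors' implementation), the sketch below slices each baseline-corrected single-trial epoch into consecutive fixed latency intervals and sums the signed area within each; the resulting trial-by-interval matrix could then serve as predictors in a multilevel model. Array shapes, sampling rate, and interval width are assumptions.

```python
import numpy as np

def erfia_areas(epochs, fs, interval_ms=20):
    """Compute event-related fixed interval areas (ERFIAs) per single trial.

    epochs      : ndarray (n_trials, n_samples), baseline-corrected single-trial EEG
    fs          : sampling rate in Hz
    interval_ms : width of each fixed latency interval in milliseconds

    Returns an (n_trials, n_intervals) array with the signed area (µV·ms)
    of the waveform within each interval.
    """
    samples_per_interval = int(round(interval_ms * fs / 1000))
    n_trials, n_samples = epochs.shape
    n_intervals = n_samples // samples_per_interval
    trimmed = epochs[:, : n_intervals * samples_per_interval]
    # Sum amplitude within each interval, then convert sample sums to µV·ms
    areas = trimmed.reshape(n_trials, n_intervals, samples_per_interval).sum(axis=2)
    return areas * (1000.0 / fs)

# Example: 40 trials, 1-s post-stimulus epoch sampled at 500 Hz
rng = np.random.default_rng(0)
fake_epochs = rng.normal(size=(40, 500))
areas = erfia_areas(fake_epochs, fs=500, interval_ms=20)   # -> (40, 50) predictors
```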

  1. The human brain maintains contradictory and redundant auditory sensory predictions.

    Directory of Open Access Journals (Sweden)

    Marika Pieszek

    Full Text Available Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). Particular error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates redundant and contradictory predictions that are represented functionally independently. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

  2. Effective Connectivity Hierarchically Links Temporoparietal and Frontal Areas of the Auditory Dorsal Stream with the Motor Cortex Lip Area during Speech Perception

    Science.gov (United States)

    Murakami, Takenobu; Restle, Julia; Ziemann, Ulf

    2012-01-01

    A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…

  3. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  4. Abnormalities of Early “Memory-Scanning” Event-Related Potentials in Patients with Temporal Lobe Epilepsy

    Directory of Open Access Journals (Sweden)

    A. Grippo

    1994-01-01

    Full Text Available We have recorded auditory event-related potentials (ERPs) evoked by the “memory-scanning” (digit-probe identification/matching) paradigm that was originally described by Sternberg (1966), in 17 patients with complex partial seizures (temporal lobe epilepsy) and in 17 matched healthy control subjects. The patients, who had all complained spontaneously of memory difficulties, had significantly reduced scores on psychological tests of memory with relatively intact digit span and cognition. Their performance of the memory-scanning task was characterized by a higher error rate, longer reaction times and an increased slope of the reaction time/set size relationship. The associated ERPs in both patients and controls showed there were significant effects of memory load on several major components, but only a reduced amplitude of the N170 and a prolonged latency of the N290 waves distinguished the patients. In addition, the N170 wave in the patients decreased further as memory load increased. The prolonged N290 latency in the patients appeared to reflect the slowed processing time. This study has shown that ERPs generated by a short-term memory task are abnormal in patients with temporal lobe epilepsy who have neuropsychologically documented cognitive and memory deficits. Some of the significant waveform alterations occur earlier than those reported in previous ERP studies and provide electrophysiological support for the hypothesis that abnormalities of the early stages of short-term memory processing may contribute to the memory difficulties experienced by patients with temporal lobe epilepsy.

  5. The time course of lexical competition during spoken word recognition in Mandarin Chinese: an event-related potential study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen

    2016-01-20

    The present study investigated the effect of lexical competition on the time course of spoken word recognition in Mandarin Chinese using a unimodal auditory priming paradigm. Two kinds of competitive environments were designed. In one session (session 1), only the unrelated and the identical primes were presented before the target words. In the other session (session 2), besides the two conditions in session 1, the target words were also preceded by the cohort primes that have the same initial syllables as the targets. Behavioral results showed an inhibitory effect of the cohort competitors (primes) on target word recognition. The event-related potential results showed that the spoken word recognition processing in the middle and late latency windows is modulated by whether the phonologically related competitors are presented or not. Specifically, preceding activation of the competitors can induce direct competitions between multiple candidate words and lead to increased processing difficulties, primarily at the word disambiguation and selection stage during Mandarin Chinese spoken word recognition. The current study provided both behavioral and electrophysiological evidence for the lexical competition effect among the candidate words during spoken word recognition.

  6. Processing of visual semantic information to concrete words: temporal dynamics and neural mechanisms indicated by event-related brain potentials.

    Science.gov (United States)

    van Schie, Hein T; Wijers, Albertus A; Mars, Rogier B; Benjamins, Jeroen S; Stowe, Laurie A

    2005-05-01

    Event-related brain potentials were used to study the retrieval of visual semantic information to concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that involved 5 s retention of simple 4-angled polygons (load 1), complex 10-angled polygons (load 2), and a no-load baseline condition. During the polygon retention interval, subjects performed a lexical decision task on auditorily presented concrete (imageable) and abstract (nonimageable) words, and pseudowords. ERP results are consistent with the use of object working memory for the visualisation of concrete words. Our data indicate a two-step processing model of visual semantics in which visual descriptive information of concrete words is first encoded in semantic memory (indicated by an anterior N400 and posterior occipital positivity), and is subsequently visualised via the network for object working memory (reflected by a left frontal positive slow wave and a bilateral occipital slow wave negativity). Results are discussed in the light of contemporary models of semantic memory.

  7. Cognition and event-related potentials in adult-onset non-demented myotonic dystrophy type 1.

    Science.gov (United States)

    Tanaka, H; Arai, M; Harada, M; Hozumi, A; Hirata, K

    2012-02-01

    To clarify the cognitive and event-related potentials (ERPs) profiles of adult-onset genetically-proven non-demented myotonic dystrophy type 1 (DM1). Fourteen DM1 patients and 14 matched normal controls were enrolled. DM1 patients were compared with normal controls using a variety of neuropsychological tests, an auditory "oddball" counting paradigm for the ERPs, and low-resolution brain electromagnetic tomography (LORETA). For patients, ERPs and neuropsychological parameters were correlated with CTG repeat size, duration of illness, grip strength, and arterial blood gas analysis. Frontal lobe dysfunction, prolonged N1 latency, and attenuated N2/P3 amplitudes were observed in DM1. Longer CTG repeat size was associated with fewer categories achieved on the Wisconsin Card Sorting Test. Greater grip strength was associated with better scores on the color-word "interference" condition of the Stroop test. P3 latency was negatively correlated with PaO(2). LORETA revealed significant hypoactivities at the orbitofrontal and medial temporal lobe, cingulate, and insula. There was no correlation between ERPs and CTG expansion. Adult-onset non-demented DM1 presented frontal lobe dysfunction. The absence of correlations between CTG repeat size and objective ERP parameters suggests that CTG expansion in lymphocytes does not directly contribute to the cognitive dysfunction of adult-onset non-demented DM1. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Event-related potentials reflect the efficacy of pharmaceutical treatments in children and adolescents with attention deficit/hyperactivity disorder.

    Science.gov (United States)

    Yamamuro, Kazuhiko; Ota, Toyosaku; Iida, Junzo; Nakanishi, Yoko; Matsuura, Hiroki; Uratani, Mitsuhiro; Okazaki, Kosuke; Kishimoto, Naoko; Tanaka, Shohei; Kishimoto, Toshifumi

    2016-08-30

    Few objective biological measures of pharmacological treatment efficacy exist for attention deficit/hyperactivity disorder (ADHD). Although we have previously demonstrated that event-related potentials (ERPs) reflect the effects of osmotic-release methylphenidate in treatment-naïve pediatric patients with ADHD, whether this is true for the therapeutic effects of atomoxetine (ATX) is unknown. Here, we used the Japanese version of the ADHD rating-scale IV to evaluate 14 patients with ADHD, and compared their ERP data with 14 age- and sex-matched controls. We measured P300 and mismatch negativity (MMN) components during an auditory oddball task before treatment (treatment naïve) and after 2 months of ATX treatment. Compared with controls, P300 components at baseline were attenuated and prolonged in the ADHD group at Fz (fronto-central), Cz (centro-parietal), Pz (parietal regions), C3 and C4 electrodes. ATX treatment reduced ADHD symptomology, and after 2 months of treatment, P300 latencies at Fz, Cz, Pz, C3, and C4 electrodes were significantly shorter than those at baseline. Moreover, MMN amplitudes at Cz and C3 electrodes were significantly greater than those at baseline. Thus, ERPs may be useful for evaluating the pharmacological effects of ATX in pediatric and adolescent patients with ADHD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Functional magnetic resonance imaging measure of automatic and controlled auditory processing

    OpenAIRE

    Mitchell, Teresa V.; Morey, Rajendra A.; Inan, Seniha; Belger, Aysenil

    2005-01-01

    Activity within fronto-striato-temporal regions during processing of unattended auditory deviant tones and an auditory target detection task was investigated using event-related functional magnetic resonance imaging. Activation within the middle frontal gyrus, inferior frontal gyrus, anterior cingulate gyrus, superior temporal gyrus, thalamus, and basal ganglia were analyzed for differences in activity patterns between the two stimulus conditions. Unattended deviant tones elicited robust acti...

  10. Tinnitus intensity dependent gamma oscillations of the contralateral auditory cortex.

    Directory of Open Access Journals (Sweden)

    Elsa van der Loo

    Full Text Available BACKGROUND: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. METHODS AND FINDINGS: In unilateral tinnitus patients (N = 15; 10 right, 5 left), source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). CONCLUSION: Auditory phantom percepts thus show similar sound-level-dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception.
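
    A toy sketch of the kind of analysis reported here, under loose assumptions: a gamma-band amplitude envelope (taken here as 30-45 Hz) is extracted from each patient's resting-state source trace and correlated with the VAS loudness score across patients. The band limits, sampling rate, and simulated data are illustrative only, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from scipy.stats import pearsonr

def mean_gamma_envelope(x, fs, fmin=30.0, fmax=45.0):
    """Mean gamma-band amplitude envelope of a resting-state source trace."""
    sos = butter(4, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, x)              # zero-phase band-pass filter
    return np.abs(hilbert(filtered)).mean()     # Hilbert amplitude envelope

# Hypothetical data: one 60-s trace per patient plus a VAS loudness score (1-10)
fs = 500
rng = np.random.default_rng(2)
vas = rng.uniform(1, 10, size=15)
power = [mean_gamma_envelope(rng.normal(size=60 * fs) * (1 + 0.05 * v), fs) for v in vas]
r, p = pearsonr(power, vas)
print(f"r = {r:.2f}, p = {p:.3g}")
```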

  11. Speech Processing to Improve the Perception of Speech in Background Noise for Children With Auditory Processing Disorder and Typically Developing Peers.

    Science.gov (United States)

    Flanagan, Sheila; Zorilă, Tudor-Cătălin; Stylianou, Yannis; Moore, Brian C J

    2018-01-01

    Auditory processing disorder (APD) may be diagnosed when a child has listening difficulties but has normal audiometric thresholds. For adults with normal hearing and with mild-to-moderate hearing impairment, an algorithm called spectral shaping with dynamic range compression (SSDRC) has been shown to increase the intelligibility of speech when background noise is added after the processing. Here, we assessed the effect of such processing using 8 children with APD and 10 age-matched control children. The loudness of the processed and unprocessed sentences was matched using a loudness model. The task was to repeat back sentences produced by a female speaker when presented with either speech-shaped noise (SSN) or a male competing speaker (CS) at two signal-to-background ratios (SBRs). Speech identification was significantly better with SSDRC processing than without, for both groups. The benefit of SSDRC processing was greater for the SSN than for the CS background. For the SSN, scores were similar for the two groups at both SBRs. For the CS, the APD group performed significantly more poorly than the control group. The overall improvement produced by SSDRC processing could be useful for enhancing communication in a classroom where the teacher's voice is broadcast using a wireless system.

  12. Social context and perceived agency affects empathy for pain: an event-related fMRI investigation.

    Science.gov (United States)

    Akitsuki, Yuko; Decety, Jean

    2009-08-15

    Studying the impact of social context on the perception of pain in others is important for understanding the role of intentionality in interpersonal sensitivity, empathy, and implicit moral reasoning. Here we used event-related fMRI, with pain and social context (i.e., the number of individuals in the stimuli) as the two factors, to investigate how different social contexts and the resulting perceived agency modulate the neural response to the perception of pain in others. Twenty-six healthy participants were scanned while presented with short dynamic visual stimuli depicting painful situations caused accidentally or intentionally by another individual. The main effect of perception of pain was associated with signal increase in the aMCC, insula, somatosensory cortex, SMA and PAG. Importantly, perceiving the presence of another individual led to specific hemodynamic increase in regions involved in representing social interaction and emotion regulation, including the temporoparietal junction, medial prefrontal cortex, inferior frontal gyrus, and orbitofrontal cortex. Furthermore, the functional connectivity pattern between the left amygdala and other brain areas was modulated by the perceived agency. Our study demonstrates that the social context in which pain occurs modulates the brain response to others' pain. This modulation may reflect successful adaptation to potential danger present in a social interaction. Our results contribute to a better understanding of the neural mechanisms underpinning implicit moral reasoning concerning actions that can harm other people.

  13. Functionally Independent Components of the Late Positive Event-Related Potential During Visual Spatial Attention

    National Research Council Canada - National Science Library

    Makeig, Scott; Westerfield, Marissa; Jung, Tzyy-Ping; Covington, James; Townsend, Jeanne; Sejnowski, Terrence J; Courchesne, Eric

    1999-01-01

    Human event-related potentials (ERPs) were recorded from 10 subjects presented with visual target and nontarget stimuli at five screen locations and responding to targets presented at one of the locations...

  14. Central auditory processing. Are the emotional perceptions of those listening to classical music inherent in the composition or acquired by the listeners?

    Science.gov (United States)

    Goycoolea, Marcos; Levy, Raquel; Ramírez, Carlos

    2013-04-01

    There is seemingly some inherent component in selected musical compositions that elicits specific emotional perceptions, feelings, and physical conduct. The purpose of the study was to determine if the emotional perceptions of those listening to classical music are inherent in the composition or acquired by the listeners. Fifteen kindergarten students, aged 5 years, from three different sociocultural groups, were evaluated. They were exposed to portions of five purposefully selected classical compositions and asked to describe their emotions when listening to these musical pieces. All were instrumental compositions without human voices or spoken language. In addition, the compositions were played to an audience old enough to describe their perceptions and presumably without significant previous exposure to classical music. Regardless of their sociocultural background, the children in the three groups consistently identified similar emotions (e.g. fear, happiness, sadness), feelings (e.g. love), and mental images (e.g. giants or dangerous animals walking) when listening to specific compositions. In addition, the musical compositions generated physical conduct that was reflected in the children's bodily expressions. Although the sensations were similar, the way of expressing them differed according to their background.

  15. Selective attention to phonology dynamically modulates initial encoding of auditory words within the left hemisphere.

    Science.gov (United States)

    Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason D; McCandliss, Bruce D

    2014-08-15

    Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings suggest a key role for selective attention in on-line phonological computations. Furthermore, these findings motivate future research on the role that neural mechanisms of attention may

  16. Selective attention to phonology dynamically modulates initial encoding of auditory words within the left hemisphere

    Science.gov (United States)

    Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason; McCandliss, Bruce

    2015-01-01

    Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings support the key role of selective attention to phonology in the development of literacy and motivate future research on the neural bases of the interaction between phonological

  17. An event-related potential study on memory search for color

    OpenAIRE

    Miyatani, Makoto; Nakao, Takasi; Ohkawa, Kaori; Sanderson, Nicholas S. R.; Takumi, Ken

    2002-01-01

    The present study focused on memory search processes in nonverbal working memory. Event-related potentials (ERPs) were recorded while subjects engaged in two memory search tasks. Effects of memory set size on event-related potentials were compared between when memory sets consisted of one or four letters and when one to three unvocable color patches composed memory sets. In a letter search task, increase of memory set size caused the enlargement of negativities of ERPs between 250 and 450 m...

  18. Spatio-temporal patterns of event-related potentials related to audiovisual synchrony judgments in older adults.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael Julian; Bode, Stefan; McKendrick, Allison Maree

    2017-07-01

    Older adults have altered perception of the relative timing between auditory and visual stimuli, even when stimuli are scaled to equate detectability. To help understand why, this study investigated the neural correlates of audiovisual synchrony judgments in older adults using electroencephalography (EEG). Fourteen younger (18-32 year old) and 16 older (61-74 year old) adults performed an audiovisual synchrony judgment task on flash-pip stimuli while EEG was recorded. All participants were assessed to have healthy vision and hearing for their age. Observers responded to whether audiovisual pairs were perceived as synchronous or asynchronous via a button press. The results showed that the onset of predictive sensory information for synchrony judgments was not different between groups. Channels over auditory areas contributed more to this predictive sensory information than visual areas. The spatial-temporal profile of the EEG activity also indicates that older adults used different resources to maintain a similar level of performance in audiovisual synchrony judgments compared with younger adults. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    OpenAIRE

    Matusz, P.J.; Thelen, A.; Amrein, S.; Geiser, E.; Anken, J.; Murray, M.M.

    2015-01-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a ...

  20. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  2. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  3. Toward a model of phoneme perception.

    Science.gov (United States)

    Ardila, A

    1993-05-01

    Hemisphere asymmetry in phoneme perception was analyzed. Three basic mechanisms underlying phoneme perception are proposed. Left temporal lobe would be specialized in: (1) ultrashort auditory (echoic) memory; (2) higher resolution power for some language frequencies; and (3) recognition of rapidly changing and time-dependent auditory signals. An attempt was made to apply some neurophysiological mechanisms described for the visual system to phoneme recognition in the auditory system.

  4. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment

    OpenAIRE

    Frtusova, Jana B.; Phillips, Natalie A.

    2016-01-01

    This study examined the effect of auditory-visual (AV) speech stimuli on working memory in older adults with poorer-hearing (PH) in comparison to age- and education-matched older adults with better hearing (BH). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between pe...

  5. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
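
    The compensatory-response magnitude reported above can be approximated with a simple sketch, under stated assumptions: per-trial F1 tracks are already extracted, the perturbation onset is known, and the response is summarized as the mean baseline-referenced F1 change in a fixed post-onset window. Window limits, frame rate, and the toy data are illustrative, not the study's procedure.

```python
import numpy as np

def compensation_magnitude(f1_tracks, fs, pert_onset_s, window_s=(0.15, 0.40)):
    """Average compensatory response to an upward F1 perturbation.

    f1_tracks    : ndarray (n_trials, n_samples) of F1 frequency (Hz) per trial
    fs           : frame rate of the formant tracks in Hz
    pert_onset_s : perturbation onset time within the trial, in seconds
    window_s     : analysis window relative to perturbation onset, in seconds

    Returns the mean F1 change (Hz) in the window relative to the pre-perturbation
    baseline; a negative value indicates an opposing (compensatory) response to
    an upward perturbation.
    """
    onset = int(pert_onset_s * fs)
    baseline = f1_tracks[:, :onset].mean(axis=1, keepdims=True)
    lo = onset + int(window_s[0] * fs)
    hi = onset + int(window_s[1] * fs)
    return (f1_tracks[:, lo:hi] - baseline).mean()

# Toy example: 20 trials of F1 tracks sampled at 200 frames/s, onset at 1.0 s
rng = np.random.default_rng(3)
tracks = 550 + rng.normal(0, 5, size=(20, 400))
tracks[:, 220:] -= 15            # simulated opposing (downward) response after onset
print(compensation_magnitude(tracks, fs=200, pert_onset_s=1.0))
```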

  6. Absence of auditory 'global interference' in autism.

    Science.gov (United States)

    Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D

    2003-12-01

    There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.

  7. Differential Recruitment of Auditory Cortices in the Consolidation of Recent Auditory Fearful Memories.

    Science.gov (United States)

    Cambiaghi, Marco; Grosso, Anna; Renna, Annamaria; Sacchetti, Benedetto

    2016-08-17

    Memories of frightening events require a protracted consolidation process. Sensory cortex, such as the auditory cortex, is involved in the formation of fearful memories with a more complex sensory stimulus pattern. It remains controversial, however, whether the auditory cortex is also required for fearful memories related to simple sensory stimuli. In the present study, we found that, 1 d after training, the temporary inactivation of either the most anterior region of the auditory cortex, including the primary (Te1) cortex, or the most posterior region, which included the secondary (Te2) component, did not affect the retention of recent memories, which is consistent with the current literature. However, at this time point, the inactivation of the entire auditory cortices completely prevented the formation of new memories. Amnesia was site-specific, was not due to impaired perception or processing of the auditory stimuli, and was strictly related to interference with memory consolidation processes. Strikingly, at a late time interval 4 d after training, blocking the posterior part (encompassing the Te2) alone impaired memory retention, whereas the inactivation of the anterior part (encompassing the Te1) left memory unaffected. Together, these data show that the auditory cortex is necessary for the consolidation of auditory fearful memories related to simple tones in rats. Moreover, these results suggest that, at early time intervals, memory information is processed in a distributed network composed of both the anterior and the posterior auditory cortical regions, whereas, at late time intervals, memory processing is concentrated in the most posterior part containing the Te2 region. Memories of threatening experiences undergo a prolonged process of "consolidation" to be maintained for a long time. The dynamic of fearful memory consolidation is poorly understood. Here, we show that 1 d after learning, memory is processed in a distributed network composed of both primary Te1 and

  8. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    DEFF Research Database (Denmark)

    Gil Carvajal, Juan Camilo; Cubick, Jens; Santurette, Sébastien

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical...... the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli....

  9. A P300 event related potential technique for assessment of sexually oriented interest.

    Science.gov (United States)

    Vardi, Yoram; Volos, Michal; Sprecher, Elliot; Granovsky, Yelena; Gruenwald, Ilan; Yarnitsky, David

    2006-12-01

    Despite all of the modern, sophisticated tests that exist for diagnosing and assessing male and female sexual disorders, to our knowledge there is no objective psychophysiological test to evaluate sexual arousal and interest. We provide preliminary data showing a decrease in auditory P300 wave amplitude during exposure to sexually explicit video clips and a significant correlation between the auditory P300 amplitude decrease and self-reported scores of sexual arousal and interest in the clips. A total of 30 healthy subjects were exposed to several blocks of auditory stimuli administered using an oddball paradigm. Baseline auditory P300 amplitudes were obtained and auditory stimuli were then delivered while viewing visual clips with 3 types of content, including sport, scenery and sex. Auditory P300 amplitude significantly decreased during viewing clips of all contents. Viewing sexual content clips caused a maximal decrease in P300 amplitude (p <0.0001). In addition, a high correlation was found between the amplitude decrease and scores on the sexual arousal questionnaire regarding the viewed clips (r = 0.61, p <0.001). In addition, the P300 amplitude decrease was significantly related to the sexual interest score (r = 0.37, p = 0.042) but not to interest in clips of nonsexual content. The change in auditory P300 amplitude during exposure to visual stimuli with sexual context seems to be an objective measure of subject sexual interest. This method might be applied to assess therapeutic intervention and as a diagnostic tool for assessing disorders of impaired libido or psychogenic sexual dysfunction.
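
    A hedged sketch of the core measurement, not the authors' exact procedure: the P300 is summarized as the mean amplitude of the target-locked average in a fixed latency window at a parietal channel, and the per-subject amplitude decrease is correlated with self-reported arousal. Window limits, sampling rate, and the simulated data are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def p300_amplitude(target_epochs, fs, window_ms=(250, 500)):
    """Mean amplitude of the averaged target ERP within a P300 latency window.

    target_epochs : ndarray (n_targets, n_samples), baseline-corrected single
                    trials from a midline parietal channel (e.g. Pz)
    """
    erp = target_epochs.mean(axis=0)                    # average over target trials
    lo, hi = (int(ms * fs / 1000) for ms in window_ms)
    return erp[lo:hi].mean()

# Hypothetical per-subject analysis: P300 decrease (baseline block minus sex-clip
# block) correlated against self-reported arousal scores
fs = 250
rng = np.random.default_rng(4)
arousal = rng.uniform(1, 10, size=30)
decrease = []
for a in arousal:
    base = rng.normal(0, 2, size=(40, fs))
    base[:, 70:120] += 8                   # simulated P300 around 280-480 ms
    sex = rng.normal(0, 2, size=(40, fs))
    sex[:, 70:120] += 8 - 0.5 * a          # attenuated P300 while viewing sexual clips
    decrease.append(p300_amplitude(base, fs) - p300_amplitude(sex, fs))
r, p = pearsonr(decrease, arousal)
print(f"r = {r:.2f}, p = {p:.3g}")
```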

  10. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom
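
    A minimal sketch of channel-pair resting-state connectivity as described here, assuming preprocessed oxygenated-haemoglobin traces: Pearson correlations are computed between all channel pairs over the 60-second baseline window before and after stimulation, and the two matrices are compared. Channel count and sampling rate are illustrative.

```python
import numpy as np

def channel_pair_connectivity(hbo, fs, baseline_s=60):
    """Resting-state functional connectivity between all fNIRS channel pairs.

    hbo        : ndarray (n_channels, n_samples) of oxygenated-haemoglobin traces
    fs         : sampling rate in Hz
    baseline_s : duration of the silent baseline period used for connectivity

    Returns an (n_channels, n_channels) matrix of Pearson correlations computed
    over the baseline window, one common definition of RSFC for fNIRS.
    """
    window = hbo[:, : int(baseline_s * fs)]
    return np.corrcoef(window)

# Toy example: 20 channels recorded at 10 Hz, pre- vs post-stimulation baselines
rng = np.random.default_rng(5)
pre = channel_pair_connectivity(rng.normal(size=(20, 3000)), fs=10)
post = channel_pair_connectivity(rng.normal(size=(20, 3000)), fs=10)
change = post - pre        # e.g. inspect auditory-to-frontotemporal channel pairs
print(change.shape)        # (20, 20)
```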

  11. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Full Text Available Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  12. Missing a trick: Auditory load modulates conscious awareness in audition.

    Science.gov (United States)

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. The p300 event-related potential technique for libido assessment in women with hypoactive sexual desire disorder.

    Science.gov (United States)

    Vardi, Yoram; Sprecher, Elliot; Gruenwald, Ilan; Yarnitsky, David; Gartman, Irena; Granovsky, Yelena

    2009-06-01

    There is a need for an objective technique to assess the degree of hypoactive sexual desire disorder (HSDD). Recently, we described such a methodology (the event-related potential [ERP] technique), based on recording of p300 electroencephalography (EEG) waves elicited by auditory stimuli during synchronous exposure to erotic films. The aims were to compare sexual interest of sexually healthy women to women with female sexual dysfunction (FSD) using ERP, and to explore whether FSD women with and without HSDD would respond differently to two different types of erotic stimuli: films containing (I) or not containing (NI) sexual intercourse scenes. Twenty-two women with FSD, of whom nine had HSDD only, and 30 sexually healthy women were assessed with the Female Sexual Functioning Index. The ERP methodology was performed using erotic NI or I films. The main outcome measure was the percent reduction (PR) of auditory p300 amplitude in response to erotic stimuli, compared within and between all three groups for each film type. PRs to each film type were similar in sexually healthy women (60.6% +/- 40.3 [NI] and 51.7% +/- 32.3 [I]), while in women with FSD the reduction was greater when viewing the NI vs. I erotic films (71.4% +/- 41.0 vs. 37.7% +/- 45.7; P = 0.0099). This difference was mainly due to the greater PR of the HSDD subgroup in response to NI vs. I films (77.7% +/- 46.7 vs. 17.0% +/- 50.3) compared with the FSD women without HSDD and the sexually healthy women (67.5% +/- 38.7 vs. 50.4% +/- 39.4, respectively), P = 0.0084. For these comparisons, a mixed-model one-way analysis of variance was used. Differences in neurophysiological response patterns between sexually healthy and sexually dysfunctional females may point to a specific inverse discrimination ability for sexually relevant information in the subgroup of women with HSDD. These findings suggest that the p300 ERP technique could be used as an objective quantitative tool for libido assessment in sexually dysfunctional women.
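
    The percent reduction (PR) measure and the group comparison can be illustrated with a short sketch. The code below simulates PR values for the three groups and compares them with a plain one-way ANOVA (scipy.stats.f_oneway) as a simplified stand-in for the mixed-model analysis of variance used in the study; all values are invented.

```python
# Illustrative computation of the percent reduction (PR) of auditory P300 amplitude
# while viewing erotic films, and a simplified between-group comparison.
# (f_oneway is a plain one-way ANOVA, not the study's mixed-model ANOVA;
# all numbers are simulated.)
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)

def percent_reduction(baseline_uv, film_uv):
    """PR = percent drop of P300 amplitude from baseline while viewing a film."""
    return 100.0 * (baseline_uv - film_uv) / baseline_uv

# Simulated PR values (NI film) for three groups: healthy, FSD without HSDD, HSDD
healthy = percent_reduction(rng.normal(10, 2, 30), rng.normal(4.0, 2, 30))
fsd_no_hsdd = percent_reduction(rng.normal(10, 2, 13), rng.normal(3.5, 2, 13))
hsdd = percent_reduction(rng.normal(10, 2, 9), rng.normal(2.5, 2, 9))

f_stat, p_val = f_oneway(healthy, fsd_no_hsdd, hsdd)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```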

  14. Music lessons improve auditory perceptual and cognitive performance in deaf children.

    Science.gov (United States)

    Rochette, Françoise; Moussard, Aline; Bigand, Emmanuel

    2014-01-01

    Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5-4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  15. Music lessons improve auditory perceptual and cognitive performance in deaf children

    Directory of Open Access Journals (Sweden)

    Françoise Rochette

    2014-07-01

    Full Text Available Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5 to 4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically-trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  16. Voice disorders in teachers: self-report, auditory-perceptive assessment of voice and vocal fold assessment

    Directory of Open Access Journals (Sweden)

    Maria Fabiana Bonfim de Lima-Silva

    2012-12-01

    Full Text Available PURPOSE: To analyze the presence of voice disorders in teachers based on the agreement between self-report, auditory-perceptive voice assessment and vocal fold assessment. METHODS: Sixty teachers from two public elementary, middle and high schools participated in this cross-sectional study. After answering a self-perception questionnaire (Voice Production Conditions of the Teacher - CPV-P) used to characterize the sample and to gather data on self-reported voice disorders, they underwent speech sample collection and nasofibrolaryngoscopic examination. Three speech-language pathologist judges rated the voices using the GRBASI scale, and an otorhinolaryngologist described the vocal fold alterations found. Data were analyzed descriptively and then submitted to association tests. RESULTS: In the questionnaire, 63.3% of participants reported having or having had a voice disorder. Of the total, 43.3% were diagnosed with a voice alteration and 46.7% with a vocal fold alteration. There was no association between self-report and voice assessment, nor between self-report and vocal fold assessment, with low agreement recorded among the three assessments. However, there was an association between the voice and vocal fold assessments, with intermediate agreement between them. CONCLUSION: Voice disorders are self-reported more often than detected by auditory-perceptive voice assessment and vocal fold assessment. The intermediate agreement between the two assessments indicates that at least one of them should be performed when screening teachers.

  17. Minimal effects of visual memory training on auditory performance of adult cochlear implant users.

    Science.gov (United States)

    Oba, Sandra I; Galvin, John J; Fu, Qian-Jie

    2013-01-01

    Auditory training has been shown to significantly improve cochlear implant (CI) users' speech and music perception. However, it is unclear whether posttraining gains in performance were due to improved auditory perception or to generally improved attention, memory, and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory, were assessed in 10 CI users before, during, and after training with a nonauditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Posttraining gains were much smaller with the nonauditory VDS training than observed in previous auditory training studies with CI users. The results suggest that posttraining gains observed in previous studies were not solely attributable to improved attention or memory and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception.

  18. Background Noise Degrades Central Auditory Processing in Toddlers.

    Science.gov (United States)

    Niemitalo-Haapola, Elina; Haapala, Sini; Jansson-Verkasalo, Eira; Kujala, Teija

    2015-01-01

    Noise, as an unwanted sound, has become one of modern society's environmental conundrums, and many children are exposed to higher noise levels than previously assumed. However, the effects of background noise on central auditory processing of toddlers, who are still acquiring language skills, have so far not been determined. The authors evaluated the effects of background noise on toddlers' speech-sound processing by recording event-related brain potentials. The hypothesis was that background noise modulates neural speech-sound encoding and degrades speech-sound discrimination. Obligatory P1 and N2 responses for standard syllables and the mismatch negativity (MMN) response for five different syllable deviants presented in a linguistic multifeature paradigm were recorded in silent and background noise conditions. The participants were 18 typically developing 22- to 26-month-old monolingual children with healthy ears. The results showed that the P1 amplitude was smaller and the N2 amplitude larger in the noisy conditions compared with the silent conditions. In the noisy condition, the MMN was absent for the intensity and vowel changes and diminished for the consonant, frequency, and vowel duration changes embedded in speech syllables. Furthermore, the frontal MMN component was attenuated in the noisy condition. However, noise had no effect on P1, N2, or MMN latencies. The results from this study suggest multiple effects of background noise on the central auditory processing of toddlers. It modulates the early stages of sound encoding and dampens neural discrimination vital for accurate speech perception. These results imply that speech processing of toddlers, who may spend long periods of daytime in noisy conditions, is vulnerable to background noise. In noisy conditions, toddlers' neural representations of some speech sounds might be weakened. Thus, special attention should be paid to acoustic conditions and background noise levels in children's daily environments.
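
    A common way to quantify the MMN described above is to average epochs separately for standard and deviant syllables, subtract the two averages, and take the mean amplitude of the difference wave in an MMN latency window. The sketch below does this on simulated epochs; the sampling rate, trial counts and window are illustrative assumptions, not the study's parameters.

```python
# Sketch of how an MMN response can be quantified: average epochs for standard and
# deviant syllables, subtract them, and take the mean amplitude in an MMN window.
# Epoch arrays and timing are invented for illustration.
import numpy as np

fs = 500.0                                   # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.6, 1.0 / fs)       # epoch from -100 to 600 ms

rng = np.random.default_rng(3)
std_epochs = rng.standard_normal((200, times.size))   # trials x samples, standard syllable
dev_epochs = rng.standard_normal((60, times.size))    # trials x samples, deviant syllable

# Difference wave: deviant ERP minus standard ERP
mmn_wave = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)

# Mean amplitude in a typical MMN latency window (e.g., 150-250 ms after onset)
window = (times >= 0.150) & (times <= 0.250)
mmn_amplitude = mmn_wave[window].mean()
print(f"MMN mean amplitude: {mmn_amplitude:.2f} (arbitrary units)")
```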

  19. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  20. The "Mozart effect": an electroencephalographic analysis employing the methods of induced event-related desynchronization/synchronization and event-related coherence.

    Science.gov (United States)

    Jausovec, Norbert; Habe, Katarina

    2003-01-01

    The event-related responses of 18 individuals were recorded while they were listening to 3 music clips of 6 s duration, which were repeated 30 times each. The music clips differed in the level of their complex structure, induced mood, musical tempo and prominent frequency. They were taken from Mozart's sonata (K. 448) and Brahms' Hungarian dance (no. 5). The third clip was a simplified version of the theme taken from Haydn's symphony (no. 94) played by a computer synthesizer. Significant differences in induced event-related desynchronization between the 3 music clips were only observed in the lower-1 alpha band, which is related to attentional processes. A similar pattern was observed for the coherence measures. While respondents listened to the Mozart clip, coherence in the lower alpha bands increased more, whereas in the gamma band a less pronounced increase was observed as compared with the Brahms and Haydn clips. The clustering of the three clips based on EEG measures distinguished between the Mozart clip on the one hand, and the Haydn and Brahms clips on the other, even though the Haydn and Brahms clips were at the opposite extremes with regard to the mood they induced in listeners, musical tempo, and complexity of structure. This would suggest that Mozart's music--regardless of the level of induced mood, musical tempo and complexity--influences the level of arousal. It seems that modulations in the frequency domain of Mozart's sonata have the greatest influence on the reported neurophysiological activity.
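
    Induced event-related desynchronization is typically expressed as the relative drop in band power during an event compared with a reference interval. The sketch below illustrates this computation for a single simulated EEG channel, taking the lower-1 alpha band as roughly 6-8 Hz; the band limits, timing and data are assumptions for illustration only.

```python
# Sketch of induced event-related desynchronization (ERD) in the lower-1 alpha band:
# band-pass filter, square to get power, then express event power relative to a
# reference (pre-stimulus) interval. Signal, band limits and timing are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0
t = np.arange(-1.0, 6.0, 1.0 / fs)                 # 1-s reference + 6-s music clip
rng = np.random.default_rng(4)
eeg = rng.standard_normal(t.size)                  # single-channel EEG (toy data)

# The lower-1 alpha band is taken here as roughly 6-8 Hz (an assumption)
b, a = butter(4, [6.0, 8.0], btype="bandpass", fs=fs)
power = filtfilt(b, a, eeg) ** 2

ref = power[t < 0].mean()                          # reference-interval band power
event = power[t >= 0].mean()                       # band power during the clip

erd_percent = 100.0 * (ref - event) / ref          # positive = desynchronization
print(f"ERD in the lower-1 alpha band: {erd_percent:.1f}%")
```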

  1. A comparison of recording modalities of P300 event-related potentials (ERP) for brain-computer interface (BCI) paradigm.

    Science.gov (United States)

    Mayaud, L; Congedo, M; Van Laghenhove, A; Orlikowski, D; Figère, M; Azabou, E; Cheliout-Heraut, F

    2013-10-01

    A brain-computer interface aims at restoring communication and control in severely disabled people by identification and classification of EEG features such as event-related potentials (ERPs). The aim of this study is to compare different modalities of EEG recording for extraction of ERPs. The first comparison evaluates the performance of six disc electrodes with that of the EMOTIV headset, while the second evaluates three different electrode types (disc, needle, and large squared electrode). Ten healthy volunteers gave informed consent and were randomized to try the traditional EEG system (six disc electrodes with gel and skin preparation) or the EMOTIV headset first. Together with the six disc electrodes, a needle electrode and a larger square electrode simultaneously recorded near lead Cz. Each modality was evaluated over three sessions of auditory P300 separated by one hour. No statistically significant effect was found for electrode type, nor for the interaction between electrode type and session number. There was no statistically significant difference in performance between the EMOTIV and the six traditional EEG disc electrodes, although there was a trend toward worse performance of the EMOTIV headset. However, the modality-session interaction was highly significant (P<0.001), showing that, while the performance of the six disc electrodes stays constant over sessions, the performance of the EMOTIV headset drops dramatically between 2 and 3 h of use. Finally, the evaluation of comfort by participants revealed increasing discomfort with the EMOTIV headset starting with the second hour of use. Our study does not recommend the use of one modality over another based on performance but suggests the choice should be made on more practical considerations such as the expected length of use, the availability of skilled labor for system setup and, above all, patient comfort. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  2. Visual event-related potential studies supporting the validity of VARK learning styles' visual and read/write learners.

    Science.gov (United States)

    Thepsatitporn, Sarawin; Pichitpornchai, Chailerd

    2016-06-01

    The validity of learning styles needs supports of additional objective evidence. The identification of learning styles using subjective evidence from VARK questionnaires (where V is visual, A is auditory, R is read/write, and K is kinesthetic) combined with objective evidence from visual event-related potential (vERP) studies has never been investigated. It is questionable whether picture superiority effects exist in V learners and R learners. Thus, the present study aimed to investigate whether vERP could show the relationship between vERP components and VARK learning styles and to identify the existence of picture superiority effects in V learners and R learners. Thirty medical students (15 V learners and 15 R learners) performed recognition tasks with vERP and an intermediate-term memory (ITM) test. The results of within-group comparisons showed that pictures elicited larger P200 amplitudes than words at the occipital 2 site (P < 0.05) in V learners and at the occipital 1 and 2 sites (P < 0.05) in R learners. The between-groups comparison showed that P200 amplitudes elicited by pictures in V learners were larger than those of R learners at the parietal 4 site (P < 0.05). The ITM test result showed that a picture set showed distinctively more correct responses than that of a word set for both V learners (P < 0.001) and R learners (P < 0.01). In conclusion, the result indicated that the P200 amplitude at the parietal 4 site could be used to objectively distinguish V learners from R learners. A lateralization existed to the right brain (occipital 2 site) in V learners. The ITM test demonstrated the existence of picture superiority effects in both learners. The results revealed the first objective electrophysiological evidence partially supporting the validity of the subjective psychological VARK questionnaire study. Copyright © 2016 The American Physiological Society.

  3. Using event-related potentials to study perinatal nutrition and brain development in infants of diabetic mothers.

    Science.gov (United States)

    deRegnier, Raye-Ann; Long, Jeffrey D; Georgieff, Michael K; Nelson, Charles A

    2007-01-01

    Proper prenatal and postnatal nutrition is essential for optimal brain development and function. The early use of event-related potentials (ERPs) enables neuroscientists to study the development of cognitive function from birth and to evaluate the role of specific nutrients in development. Perinatal iron deficiency occurs in severely affected infants of diabetic mothers (IDMs). In animal models, severe perinatal iron deficiency targets the explicit memory system of the brain. Cross-sectional ERP studies have shown that IDMs have impairments in recognition memory from birth through 8 months of age. The purpose of this study was to evaluate the longitudinal development of recognition memory using ERPs in IDMs compared with control infants. IDMs were divided into high- and low-risk status based upon their birth weights and iron status and compared with healthy control infants. Infants were tested in the newborn period for auditory recognition memory, at 6 months for visual recognition memory and at 8 months for cross-modal memory. ERPs were evaluated for developmental changes in the slow waves that are thought to reflect memory and the Nc component that is thought to reflect attention. The results showed differences between the IDMs and control infants in the development of the slow waves over the left anterior temporal leads and in age-related patterns of development of the Nc component. These results are consistent with animal models showing that perinatal iron deficiency affects the development of the memory networks of the brain. This study highlights the value of using ERPs to translate basic science information obtained from animal models to the development of the human infant.

  4. Speaking Two Languages Enhances an Auditory but Not a Visual Neural Marker of Cognitive Inhibition

    Directory of Open Access Journals (Sweden)

    Mercedes Fernandez

    2014-09-01

    Full Text Available The purpose of the present study was to replicate and extend our original findings of enhanced neural inhibitory control in bilinguals. We compared English monolinguals to Spanish/English bilinguals on a non-linguistic, auditory Go/NoGo task while recording event-related brain potentials. New to this study was the visual Go/NoGo task, which we included to investigate whether enhanced neural inhibition in bilinguals extends from the auditory to the visual modality. Results confirmed our original findings and revealed greater inhibition in bilinguals compared to monolinguals. As predicted, compared to monolinguals, bilinguals showed increased N2 amplitude during the auditory NoGo trials, which required inhibitory control, but no differences during the Go trials, which required a behavioral response and no inhibition. Interestingly, during the visual Go/NoGo task, event-related brain potentials did not distinguish the two groups, and behavioral responses were similar between the groups regardless of task modality. Thus, only auditory trials that required inhibitory control revealed between-group differences indicative of greater neural inhibition in bilinguals. These results show that experience-dependent neural changes associated with bilingualism are specific to the auditory modality and that the N2 event-related brain potential is a sensitive marker of this plasticity.

  5. Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.

    Science.gov (United States)

    Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C

    2015-11-04

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition

  6. Effects of Grammatical Categories on Children's Visual Language Processing: Evidence from Event-Related Brain Potentials

    Science.gov (United States)

    Weber-Fox, Christine; Hart, Laura J.; Spruill, John E., III

    2006-01-01

    This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and…

  7. Spoken sentence comprehension in aphasia: Event-related potential evidence for a lexical integration deficit

    NARCIS (Netherlands)

    Swaab, T.Y.; Brown, C.; Hagoort, P.

    1997-01-01

    In this study the N400 component of the event-related potential was used to investigate spoken sentence understanding in Broca's and Wernicke's aphasics. The aim of the study was to determine whether spoken sentence comprehension problems in these patients might result from a deficit in the on-line

  8. Tangential derivative mapping of axial MEG applied to event-related desynchronization research

    NARCIS (Netherlands)

    Bastiaansen, M.C.M.; Knösche, T.R.

    2000-01-01

    Objectives: A problem with the topographic mapping of MEG data recorded with axial gradiometers is that field extrema are measured at sensors located at either side of a neuronal generator instead of at sensors directly above the source. This is problematic for the computation of event-related

  9. Cognitive Association Formation in Episodic Memory: Evidence from Event-Related Potentials

    Science.gov (United States)

    Kim, Alice S. N.; Vallesi, Antonino; Picton, Terence W.; Tulving, Endel

    2009-01-01

    The present study focused on the processes underlying cognitive association formation by investigating subsequent memory effects. Event-related potentials were recorded as participants studied pairs of words, presented one word at a time, for later recall. The findings showed that a frontal-positive late wave (LW), which occurred 1-1.6 s after the…

  10. Event-Related EEG Oscillations to Semantically Unrelated Words in Normal and Learning Disabled Children

    Science.gov (United States)

    Fernandez, Thalia; Harmony, Thalia; Mendoza, Omar; Lopez-Alanis, Paula; Marroquin, Jose Luis; Otero, Gloria; Ricardo-Garcell, Josefina

    2012-01-01

    Learning disabilities (LD) are one of the most frequent problems for elementary school-aged children. In this paper, event-related EEG oscillations to semantically related and unrelated pairs of words were studied in a group of 18 children with LD not otherwise specified (LD-NOS) and in 16 children with normal academic achievement. We propose that…

  11. The effects of cortisol administration on approach-avoidance behavior: An event-related potential study

    NARCIS (Netherlands)

    Peer, J.M. van; Roelofs, K.; Rotteveel, M.; Dijk, J.G. van; Spinhoven, P.; Ridderinkhof, K.R.

    2007-01-01

    We investigated the effects of cortisol administration (50 mg) on approach and avoidance tendencies in low and high trait avoidant healthy young men. Event-related brain potentials (ERPs) were measured during a reaction time task, in which participants evaluated the emotional expression of

  12. Event-related Potentials Reflecting the Processing of Phonological Constraint Violations

    NARCIS (Netherlands)

    Domahs, Ulrike; Kehrein, Wolfgang; Knaus, Johannes; Wiese, Richard; Schlesewsky, Matthias

    2009-01-01

    How are violations of phonological constraints processed in word comprehension? The present article reports the results of an event-related potentials (ERP) study on a phonological constraint of German that disallows identical segments within a syllable or word (CCiVCi). We examined three

  13. Event-related potentials reflecting the processing of phonological constraint violations

    NARCIS (Netherlands)

    Domahs, U.; Kehrein, W.; Knaus, J.; Wiese, R.; Schlesewsky, M.

    2009-01-01

    How are violations of phonological constraints processed in word comprehension? The present article reports the results of an event-related potentials (ERP) study on a phonological constraint of German that disallows identical segments within a syllable or word (CCiVCi). We examined three types of

  14. Snake scales, partial exposure, and the Snake Detection Theory: A human event-related potentials study

    NARCIS (Netherlands)

    J.W. van Strien (Jan); L.A. Isbell (Lynne A.)

    2017-01-01

    textabstractStudies of event-related potentials in humans have established larger early posterior negativity (EPN) in response to pictures depicting snakes than to pictures depicting other creatures. Ethological research has recently shown that macaques and wild vervet monkeys respond strongly to

  15. An event-related brain potential correlate of visual short-term memory

    NARCIS (Netherlands)

    Klaver, Peter; Talsma, D.; Wijers, Albertus; Heinze, Hans-Jochen; Mulder, Gijsbertus

    1999-01-01

    EVENT-RELATED potentials (ERPs) were recorded as 12 subjects performed a delayed matching to sample task. We presented two bilateral abstract shapes and cued spatially which had to be memorized for a subsequent matching task: left, right or both. During memorization a posterior slow negative ERP

  16. Adapting to Changing Memory Retrieval Demands: Evidence from Event-Related Potentials

    Science.gov (United States)

    Benoit, Roland G.; Werkle-Bergner, Markus; Mecklinger, Axel; Kray, Jutta

    2009-01-01

    This study investigated preparatory processes involved in adapting to changing episodic memory retrieval demands. Event-related potentials (ERPs) were recorded while participants performed a general old/new recognition task and a specific task that also required retrieval of perceptual details. The relevant task remained either constant or changed…

  17. Attentional Mechanisms in Sports via Brain-Electrical Event-Related Potentials

    Science.gov (United States)

    Hack, Johannes; Memmert, Daniel; Rup, Andre

    2009-01-01

    In this study, we examined attention processes in complex, sport-specific decision-making tasks without interdependencies from anticipation. Psychophysiological and performance data recorded from advanced and intermediate level basketball referees were compared. Event-related potentials obtained while judging game situations in foul recognition…

  18. Complement Set Reference after Implicitly Small Quantities: An Event-Related Potentials Study

    Science.gov (United States)

    Ingram, Joanne; Ferguson, Heather J.

    2018-01-01

    An anaphoric reference to the complement-set is a reference to the set that does not fulfil the predicate of the preceding sentence. Preferred reference to the complement-set has been found in eye movements when a character's implicit desire for a high amount has been denied using a negative emotion. We recorded event-related potentials to examine…

  19. Temporal evolution of event-related desynchronization in acute stroke: A pilot study

    NARCIS (Netherlands)

    Tangwiriyasakul, Chayanin; Verhagen, Rens; Rutten, Wim; van Putten, Michel Johannes Antonius Maria

    2014-01-01

    Objective Assessment of event-related desynchronization (ERD) may assist in predicting recovery from stroke and rehabilitation, for instance in BCI applications. Here, we explore the temporal evolution of ERD during stroke recovery. Methods Ten stroke patients and eleven healthy controls were

  20. Conceptual Integration of Arithmetic Operations with Real-World Knowledge: Evidence from Event-Related Potentials

    Science.gov (United States)

    Guthormsen, Amy M.; Fisher, Kristie J.; Bassok, Miriam; Osterhout, Lee; DeWolf, Melissa; Holyoak, Keith J.

    2016-01-01

    Research on language processing has shown that the disruption of conceptual integration gives rise to specific patterns of event-related brain potentials (ERPs)--N400 and P600 effects. Here, we report similar ERP effects when adults performed cross-domain conceptual integration of analogous semantic and mathematical relations. In a problem-solving…

  1. Event-related potentials during visual selective attention in children of alcoholics.

    NARCIS (Netherlands)

    van der Stelt, O.; Gunning, W.B.; Snel, J.; Kok, A.

    1998-01-01

    Event-related potentials (ERPs) were recorded from 50 7-18 yr old children of alcoholics (COAs) and 50 age- and sex-matched control children while they performed a visual selective attention task. The task was to attend selectively to stimuli with a specified color (red or blue) in an attempt to

  2. MODULATION OF EVENT-RELATED POTENTIALS BY WORD REPETITION - THE ROLE OF VISUAL SELECTIVE ATTENTION

    NARCIS (Netherlands)

    OTTEN, LJ; RUGG, MD; DOYLE, MC

    1993-01-01

    Event-related potentials (ERPs) were recorded while subjects viewed visually presented words, some of which occurred twice. Each trial consisted of two colored letter strings, the requirement being to attend to and make a word/nonword discrimination for one of the strings. Attention was manipulated

  3. Declarative memory formation in hippocampal sclerosis: an intracranial event-related potentials study.

    NARCIS (Netherlands)

    Mormann, F.; Fernandez, G.S.E.; Klaver, P.; Weber, B.; Elger, C.E.; Fell, J.

    2007-01-01

    The functional deficits associated with hippocampal sclerosis during declarative memory formation are largely unknown. In this study, we analyzed intracranial event-related potentials recorded from the medial temporal lobes of nine epilepsy patients performing a word memorization task. We used

  4. Pitch Discrimination without Awareness in Congenital Amusia: Evidence from Event-Related Potentials

    Science.gov (United States)

    Moreau, Patricia; Jolicoeur, Pierre; Peretz, Isabelle

    2013-01-01

    Congenital amusia is a lifelong disorder characterized by a difficulty in perceiving and producing music despite normal intelligence and hearing. Behavioral data have indicated that it originates from a deficit in fine-grained pitch discrimination, and is expressed by the absence of a P3b event-related brain response for pitch differences smaller…

  5. Early referential context effects in sentence processing: Evidence from event-related brain potentials

    NARCIS (Netherlands)

    Berkum, J.J.A. van; Brown, C.M.; Hagoort, P.

    1999-01-01

    An event-related brain potentials experiment was carried out to examine the interplay of referential and structural factors during sentence processing in discourse. Subjects read (Dutch) sentences beginning like “David told the girl that … ” in short story contexts that had introduced either one or

  6. Event-related EEG changes preceding saccadic eye movements before and after dry immersion.

    Science.gov (United States)

    Tomilovskaya, E S; Kirenskaya, A V; Novototski-Vlasov, V Yu; Kozlovskaya, I B

    2004-07-01

    Objectives of this work were to quantify antisaccade characteristics, presaccadic slow negative EEG-potentials, and event-related EEG frequency band power (theta, alpha1, alpha2, beta1, beta2 and beta3) changes (ERD) in healthy volunteers before and after 6-day simulated weightlessness (dry immersion).

  7. Do U Txt? Event-Related Potentials to Semantic Anomalies in Standard and Texted English

    Science.gov (United States)

    Berger, Natalie I.; Coch, Donna

    2010-01-01

    Texted English is a hybrid, technology-based language derived from standard English modified to facilitate ease of communication via instant and text messaging. We compared semantic processing of texted and standard English sentences by recording event-related potentials in a classic semantic incongruity paradigm designed to elicit an N400 effect.…

  8. Temporal Dynamics of Late Second Language Acquisition: Evidence from Event-Related Brain Potentials

    Science.gov (United States)

    Steinhauer, Karsten; White, Erin J.; Drury, John E.

    2009-01-01

    The ways in which age of acquisition (AoA) may affect (morpho)syntax in second language acquisition (SLA) are discussed. We suggest that event-related brain potentials (ERPs) provide an appropriate online measure to test some such effects. ERP findings of the past decade are reviewed with a focus on recent and ongoing research. It is concluded…

  9. Implicit Phonological and Semantic Processing in Children with Developmental Dyslexia: Evidence from Event-Related Potentials

    Science.gov (United States)

    Jednorog, K.; Marchewka, A.; Tacikowski, P.; Grabowska, A.

    2010-01-01

    Dyslexia is characterized by a core phonological deficit, although recent studies indicate that semantic impairment also contributes to this condition. In this study, event-related potentials (ERP) were used to examine whether the N400 wave in dyslexic children is modulated by phonological or semantic priming, similarly to age-matched controls.…

  10. Scalp topography of event-related brain potentials and cognitive transitions during childhood.

    NARCIS (Netherlands)

    Molenaar, P.C.M.; van der Molen, M.W.; Stauder, J.E.A.

    1993-01-01

    Examined the relation between cognitive development (CGD) and the ontogenesis of event-related brain potentials (ERPs) during childhood among 48 girls (aged 5-7 yrs). The level of CGD was assessed with a standard Piagetian conservation kit. Ss performed a visual selective attention (oddball) task

  11. Brain activity and cognitive transition during childhood: A longitudinal event-related brain potential study.

    NARCIS (Netherlands)

    Stauder, J.E.A.; Molenaar, P.C.M.; van der Molen, M.W.

    1998-01-01

    Examined the relation between brain activation and cognitive development using event-related brain potentials (ERPs) and a longitudinal design. 5-yr-old females performed a visual recognition ('oddball') task and an experimental analogue of the Piagetian conservation of liquid quantity task. At three

  12. Representations in human visual short-term memory : an event-related brain potential study

    NARCIS (Netherlands)

    Klaver, P; Smid, HGOM; Heinze, HJ

    1999-01-01

    Behavioral measures and event-related potentials (ERPs) were recorded from 12 subjects while performing three delayed matching-to-sample tasks. The task instructions indicated whether stimulus locations, shapes or conjunctions of locations and shapes had to be memorized and matched against a probe.

  13. Effects of nicotine on visuo-spatial selective attention as indexed by event-related potentials.

    Science.gov (United States)

    Meinke, A; Thiel, C M; Fink, G R

    2006-08-11

    Nicotine has been shown to specifically reduce reaction times to invalidly cued targets in spatial cueing paradigms. In two experiments, we used event-related potentials to test whether the facilitative effect of nicotine upon the detection of invalidly cued targets is due to a modulation of perceptual processing, as indexed by early attention-related event-related potential components. Furthermore, we assessed whether the effect of nicotine on such unattended stimuli depends upon the use of exogenous or endogenous cues. In both experiments, the electroencephalogram was recorded while non-smokers completed discrimination tasks in Posner-type paradigms after chewing a nicotine polacrilex gum (Nicorette 2 mg) in one session and a placebo gum in another session. Nicotine reduced reaction times to invalidly cued targets when cueing was endogenous. In contrast, no differential effect of nicotine on reaction times was observed when exogenous cues were used. Electrophysiologically, we found a similar attentional modulation of the P1 and N1 components under placebo and nicotine but a differential modulation of later event-related potential components at a frontocentral site. The lack of a drug-dependent modulation of P1 and N1 in the presence of a behavioral effect suggests that the effect of nicotine in endogenous visuo-spatial cueing tasks is not due to an alteration of perceptual processes. Rather, the differential modulation of frontocentral event-related potentials suggests that nicotine acts at later stages of target processing.

  14. Contingent Attentional Capture by Top-Down Control Settings: Converging Evidence from Event-Related Potentials

    Science.gov (United States)

    Lien, Mei-Ching; Ruthruff, Eric; Goodin, Zachary; Remington, Roger W.

    2008-01-01

    Theories of attentional control are divided over whether the capture of spatial attention depends primarily on stimulus salience or is contingent on attentional control settings induced by task demands. The authors addressed this issue using the N2-posterior-contralateral (N2pc) effect, a component of the event-related brain potential thought to…

  15. Semantic ambiguity processing in sentence context: Evidence from event-related fMRI

    NARCIS (Netherlands)

    Zempleni, Monika-Zita; Renken, Remco; Hoeks, John C. J.; Hoogduin, Johannes M.; Stowe, Laurie A.

    2007-01-01

    Lexical semantic ambiguity is the phenomenon when a word has multiple meanings (e.g. 'bank'). The aim of this event-related functional MRI study was to identify those brain areas, which are involved in contextually driven ambiguity resolution. Ambiguous words were selected which have a most

  16. Working memory processes show different degrees of lateralization : Evidence from event-related potentials

    NARCIS (Netherlands)

    Talsma, D; Wijers, A.A.; Klaver, P; Mulder, G.

    This study aimed to identify different processes in working memory, using event-related potentials (ERPs) and response times. Abstract polygons were presented for memorization and subsequent recall in a delayed matching-to-sample paradigm. Two polygons were presented bilaterally for memorization and

  17. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    Science.gov (United States)

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Auditory agnosia is usually caused by bilateral or unilateral temporal lobe lesions, especially of the transverse gyri. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat words or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  18. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
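
    "Reading out" high-level auditory information from early visual cortex activation patterns is, in practice, a pattern-classification problem. The toy sketch below shows cross-validated decoding of a sound category from simulated V1 voxel patterns with scikit-learn; the voxel counts, categories and data are invented, and the classifier choice is an assumption, not the authors' method.

```python
# Toy sketch of decoding auditory information from early visual cortex activity:
# cross-validated classification of the sound category from simulated V1 voxel
# patterns. scikit-learn is assumed available; data are random and illustrative.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials_per_class, n_voxels = 40, 300

# Simulated V1 activation patterns for three natural-sound categories
patterns = rng.standard_normal((3 * n_trials_per_class, n_voxels))
labels = np.repeat([0, 1, 2], n_trials_per_class)   # e.g., forest, traffic, people

clf = LinearSVC(max_iter=5000)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```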

  19. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  20. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  1. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    In the present study, a novel multichannel loudspeaker-based virtual auditory environment (VAE) is introduced. The VAE aims at providing a versatile research environment for investigating the auditory signal processing in real environments, i.e., considering multiple sound sources and room...... reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different order Ambisonic techniques to calculate multichannel room impulse responses (mRIR). Auralization is then obtained by the convolution...... the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29 loudspeaker setup using both objective and subjective measurement techniques....
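
    The auralization step described above boils down to convolving anechoic source signals with multichannel room impulse responses and playing the results over the loudspeaker array. The sketch below shows that convolution for a single source using scipy.signal.fftconvolve; the signal lengths and the synthetic mRIR are placeholders, not ODEON output.

```python
# Minimal sketch of the auralization step of a loudspeaker-based VAE: convolve an
# anechoic source signal with a multichannel room impulse response (one IR per
# loudspeaker channel) and collect one output per channel. Signals are synthetic.
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
n_loudspeakers = 29                       # as in the evaluated setup
rng = np.random.default_rng(6)

source = rng.standard_normal(fs)          # 1 s of an anechoic source signal (toy)
mrir = rng.standard_normal((n_loudspeakers, int(0.5 * fs))) * 0.01   # toy 0.5-s mRIR

# One output signal per loudspeaker channel
outputs = np.stack([fftconvolve(source, mrir[ch]) for ch in range(n_loudspeakers)])
print(outputs.shape)   # (29, len(source) + len(IR) - 1)
```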

  2. Nonspatial intermodal selective attention is mediated by sensory brain areas: Evidence from event-related potential.

    NARCIS (Netherlands)

    Talsma, D.; Kok, A.

    2001-01-01

    Focuses on the question of whether inter-and intramodal forms of attention are reflected in activation of the same or different brain areas. ERPs were recorded while Ss (aged 18-41 yrs) were presented a random sequence of visual and auditory stimuli. They were instructed to attend to nonspatial

  3. Nonspatial intermodal selective attention is mediated by sensory brain areas: Evidence from event-related potentials

    NARCIS (Netherlands)

    Talsma, D.; Kok, Albert

    2001-01-01

    The present study focuses on the question of whether inter- and intramodal forms of attention are reflected in activation of the same or different brain areas. ERPs were recorded while subjects were presented a random sequence of visual and auditory stimuli. They were instructed to attend to

  4. Evaluation of psychoacoustic tests and P300 event-related potentials in elderly patients with hyperhomocysteinemia.

    Science.gov (United States)

    Díaz-Leines, Sergio; Peñaloza-López, Yolanda R; Serrano-Miranda, Tirzo A; Flores-Ávalos, Blanca; Vidal-Ixta, Martha T; Jiménez-Herrera, Blanca

    2013-01-01

    Hyperhomocysteinemia as a risk factor for hearing impairment, neuronal damage and cognitive impairment in elderly patients is controversial, and the evidence is limited by the small number of studies. The aim of this work was to determine whether elderly patients with hyperhomocysteinemia have an increased risk of developing abnormalities in central auditory processes compared with a group of patients with appropriate homocysteine levels, and to define the behaviour of psychoacoustic tests and long-latency potentials (P300) in these patients. This was a cross-sectional, comparative and analytical study. We formed a group of patients with hyperhomocysteinemia and a control group with normal levels of homocysteine. All patients underwent audiometry, tympanometry and a selection of psychoacoustic tests (dichotic digits, low-pass filtered words, speech in noise and masking level difference), auditory evoked brainstem potentials and P300. Patients with hyperhomocysteinemia had higher values in the masking level difference test than the control group (P=.049) and longer P300 latencies (P=.000). Hyperhomocysteinemia is a factor that alters central auditory functions. Alterations in psychoacoustic tests and disturbances in electrophysiological tests suggest that the central portion of the auditory pathway is affected in patients with hyperhomocysteinemia. Copyright © 2012 Elsevier España, S.L. All rights reserved.

  5. P300 event-related potential as an indicator of inattentional deafness?

    Directory of Open Access Journals (Sweden)

    Louise Giraudet

    Full Text Available An analysis of airplane accidents reveals that pilots sometimes simply fail to react to critical auditory alerts. This failure of an auditory stimulus to reach consciousness has been termed inattentional deafness. Recent data from the literature suggest that tasks involving high cognitive load consume most of the available attentional capacity, leaving little or none for processing unexpected information. In addition, there is a growing body of evidence for a shared attentional capacity between vision and hearing. In this context, the abundant information in modern cockpits is likely to produce inattentional deafness. We investigated this hypothesis by combining electroencephalographic (EEG) measurements with an ecological aviation task performed under contextual variation of cognitive load (high or low), including an alarm detection task. Two different audio tones were played: standard tones and deviant tones. Participants were instructed to ignore standard tones and to report deviant tones using a response pad. More than 31% of the deviant tones were not detected in the high load condition. Analysis of the EEG measurements showed a drastic diminution of the auditory P300 amplitude concomitant with this behavioral effect, whereas the N100 component was not affected. We suggest that these behavioral and electrophysiological results provide new insights into why pilots may fail to react to critical auditory information. Relevant applications concern the prevention of alarm omission, mental workload measurement and enhanced warning design.
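
    The core contrast reported here, a drop in deviant-tone P300 amplitude under high load together with a substantial miss rate, can be sketched as a paired comparison across participants. The code below simulates such data and tests the load effect with a paired t-test; all numbers are invented and the analysis is a simplified stand-in for the authors' EEG pipeline.

```python
# Sketch of comparing per-participant auditory P300 amplitude to deviant tones under
# low vs. high cognitive load (paired t-test), plus the deviant miss rate under high
# load. All values are simulated placeholders.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
n_participants = 20

p300_low_load = rng.normal(8.0, 2.0, n_participants)               # microvolts
p300_high_load = p300_low_load - rng.normal(3.0, 1.0, n_participants)

t_stat, p_val = ttest_rel(p300_low_load, p300_high_load)
print(f"P300 low vs. high load: t = {t_stat:.2f}, p = {p_val:.4f}")

detected = rng.binomial(n=40, p=0.69, size=n_participants)          # deviants detected of 40
miss_rate = 100.0 * (1 - detected / 40).mean()
print(f"Mean deviant miss rate under high load: {miss_rate:.1f}%")
```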

  6. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory perception known as the "scale illusion" occurs when ascending and descending musical scale tones are delivered dichotically, such that the higher or lower tone at each instant is presented alternately to the right and left ears. The resulting tone sequences have a zigzag pitch contour in one ear and the reversed contour in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears, separated into the higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed has not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced the scale illusion (ILL) and tones that mimicked the percept of the scale illusion (PCP), and we compared the activation responses evoked by these stimuli using region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation produced by the illusion process. The activation differences between the two stimuli in the superior temporal auditory cortex, measured at varied tempi of tone presentation, were not explained by adaptation. Instead, excess activation for the ILL stimulus relative to the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.
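
    The stimulus construction described (the higher or lower scale tone alternating between the ears at each instant) can be illustrated with a short script. The scale, tone duration and pure-tone synthesis below are assumptions for demonstration, not the parameters used in the study.

    ```python
    # Rough sketch of a dichotic "scale illusion" stimulus: an ascending and a
    # descending C-major scale play simultaneously, and successive tones of
    # each scale alternate between the two ears.
    import numpy as np

    fs = 44100
    tone_dur = 0.25                        # seconds per tone (assumed)
    c_major = np.array([261.63, 293.66, 329.63, 349.23,
                        392.00, 440.00, 493.88, 523.25])   # C4..C5
    ascending, descending = c_major, c_major[::-1]

    t = np.arange(int(fs * tone_dur)) / fs
    left, right = [], []
    for i, (fa, fd) in enumerate(zip(ascending, descending)):
        tone_a = np.sin(2 * np.pi * fa * t)
        tone_d = np.sin(2 * np.pi * fd * t)
        if i % 2 == 0:                     # alternate ear assignment each step
            right.append(tone_a); left.append(tone_d)
        else:
            right.append(tone_d); left.append(tone_a)

    stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
    # 'stereo' can be written to a WAV file and played over headphones; most
    # listeners report smooth up-down / down-up streams rather than the
    # physical zigzag sequences.
    ```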

  7. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    Science.gov (United States)

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD or how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITD) and inter-aural intensity differences (IID) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and lateralization performance in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity was significantly negatively correlated with ITD errors, especially for the high-pass noise stimulus, but not with IID errors in children with APD. The study highlights the impact of working memory capacity on auditory lateralization. The findings indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
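
    ITD and IID stimuli of the kind used here are typically created by delaying or attenuating one headphone channel relative to the other. A minimal sketch under assumed values follows (the abstract does not give the actual ITD/IID steps used).

    ```python
    # Imposing an interaural time difference (ITD) or intensity difference (IID)
    # on a noise burst to create lateralized headphone stimuli.
    import numpy as np

    fs = 44100
    noise = np.random.default_rng(1).standard_normal(int(0.3 * fs))  # 300 ms noise

    def apply_itd(signal, itd_us, fs):
        """Delay the left channel by itd_us microseconds (positive -> right leads)."""
        shift = int(round(itd_us * 1e-6 * fs))
        left = np.concatenate([np.zeros(shift), signal])[:len(signal)]
        return np.stack([left, signal], axis=1)

    def apply_iid(signal, iid_db):
        """Attenuate the left channel by iid_db decibels (positive -> right louder)."""
        left = signal * 10 ** (-iid_db / 20)
        return np.stack([left, signal], axis=1)

    stim_itd = apply_itd(noise, itd_us=500, fs=fs)   # 0.5 ms ITD (assumed value)
    stim_iid = apply_iid(noise, iid_db=10)           # 10 dB IID (assumed value)
    ```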

  8. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  9. Behavioral and EEG evidence for auditory memory suppression

    Directory of Open Access Journals (Sweden)

    Maya Elizabeth Cano

    2016-03-01

    Full Text Available The neural basis of motivated forgetting using the Think/No-Think (TNT) paradigm is receiving increased attention with a particular focus on the mechanisms that enable memory suppression. However, most TNT studies have been limited to the visual domain. To assess whether and to what extent direct memory suppression extends across sensory modalities, we examined behavioral and electroencephalographic (EEG) effects of auditory Think/No-Think in healthy young adults by adapting the TNT paradigm to the auditory modality. Behaviorally, suppression of memory strength was indexed by prolonged response times during the retrieval of subsequently remembered No-Think words. We examined task-related EEG activity of both attempted memory retrieval and inhibition of a previously learned target word during the presentation of its paired associate. Event-related EEG responses revealed two main findings: (1) a centralized Think > No-Think positivity during auditory word presentation (from approximately 0-500 ms), and (2) a sustained Think positivity over parietal electrodes beginning at approximately 600 ms reflecting the memory retrieval effect which was significantly reduced for No-Think words. In addition, word-locked theta (4-8 Hz) power was initially greater for No-Think compared to Think during auditory word presentation over fronto-central electrodes. This was followed by a posterior theta increase indexing successful memory retrieval in the Think condition. The observed event-related potential pattern and theta power analysis are similar to that reported in visual Think/No-Think studies and support a modality non-specific mechanism for memory inhibition. The EEG data also provide evidence supporting differing roles and time courses of frontal and parietal regions in the flexible control of auditory memory.

  10. Behavioral and EEG Evidence for Auditory Memory Suppression.

    Science.gov (United States)

    Cano, Maya E; Knight, Robert T

    2016-01-01

    The neural basis of motivated forgetting using the Think/No-Think (TNT) paradigm is receiving increased attention with a particular focus on the mechanisms that enable memory suppression. However, most TNT studies have been limited to the visual domain. To assess whether and to what extent direct memory suppression extends across sensory modalities, we examined behavioral and electroencephalographic (EEG) effects of auditory TNT in healthy young adults by adapting the TNT paradigm to the auditory modality. Behaviorally, suppression of memory strength was indexed by prolonged response time (RTs) during the retrieval of subsequently remembered No-Think words. We examined task-related EEG activity of both attempted memory retrieval and inhibition of a previously learned target word during the presentation of its paired associate. Event-related EEG responses revealed two main findings: (1) a centralized Think > No-Think positivity during auditory word presentation (from approximately 0-500 ms); and (2) a sustained Think positivity over parietal electrodes beginning at approximately 600 ms reflecting the memory retrieval effect which was significantly reduced for No-Think words. In addition, word-locked theta (4-8 Hz) power was initially greater for No-Think compared to Think during auditory word presentation over fronto-central electrodes. This was followed by a posterior theta increase indexing successful memory retrieval in the Think condition. The observed event-related potential pattern and theta power analysis are similar to that reported in visual TNT studies and support a modality non-specific mechanism for memory inhibition. The EEG data also provide evidence supporting differing roles and time courses of frontal and parietal regions in the flexible control of auditory memory.
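
    The theta-band analysis referred to in both of these records is, in outline, a band-pass filter followed by a power estimate averaged across trials per condition. The sketch below uses placeholder data, an assumed sampling rate and a Hilbert-envelope power estimate; the authors' exact pipeline may differ.

    ```python
    # Hedged sketch of a theta-band (4-8 Hz) power comparison between conditions.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500                                    # assumed sampling rate (Hz)
    n_trials, n_samples = 60, fs                # 60 trials of 1 s (toy data)
    rng = np.random.default_rng(0)
    think_eeg = rng.standard_normal((n_trials, n_samples))      # placeholder EEG
    nothink_eeg = rng.standard_normal((n_trials, n_samples))

    b, a = butter(4, [4, 8], btype="bandpass", fs=fs)

    def theta_power(trials):
        filtered = filtfilt(b, a, trials, axis=-1)
        envelope = np.abs(hilbert(filtered, axis=-1))
        return (envelope ** 2).mean(axis=0)     # trial-averaged power over time

    power_diff = theta_power(nothink_eeg) - theta_power(think_eeg)
    # In the study, No-Think > Think theta appeared early over fronto-central
    # electrodes, followed by a posterior Think-related theta increase.
    ```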

  11. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.

  12. The Recording and Quantification of Event-Related Potentials: II. Signal Processing and Analysis

    Directory of Open Access Journals (Sweden)

    Paniz Tavakoli

    2015-06-01

    Full Text Available Event-related potentials are an informative method for measuring the extent of information processing in the brain. The voltage deflections in an ERP waveform reflect the processing of sensory information as well as higher-level processing that involves selective attention, memory, semantic comprehension, and other types of cognitive activity. ERPs provide a non-invasive method of studying, with exceptional temporal resolution, cognitive processes in the human brain. ERPs are extracted from scalp-recorded electroencephalography by a series of signal processing steps. The present tutorial will highlight several of the analysis techniques required to obtain event-related potentials. Some methodological issues that may be encountered will also be discussed.
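
    The core extraction steps the tutorial refers to (epoching around event markers, baseline correction and trial averaging) can be sketched as follows. The sampling rate, epoch window and data are placeholders, not an example taken from the tutorial itself.

    ```python
    # Minimal ERP extraction: epoch, baseline-correct, and average toy EEG data.
    import numpy as np

    fs = 250                                    # assumed sampling rate (Hz)
    continuous_eeg = np.random.default_rng(0).standard_normal(60 * fs)  # 60 s EEG
    event_samples = np.arange(2 * fs, 58 * fs, fs)   # hypothetical event markers

    pre, post = int(0.2 * fs), int(0.8 * fs)    # -200 ms to +800 ms epoch window
    epochs = np.array([continuous_eeg[e - pre:e + post] for e in event_samples])

    # Baseline correction: subtract the mean of the pre-stimulus interval
    epochs = epochs - epochs[:, :pre].mean(axis=1, keepdims=True)

    erp = epochs.mean(axis=0)                   # trial average = the ERP waveform
    time_ms = (np.arange(-pre, post) / fs) * 1000
    ```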

  13. Report order and identification of multidimensional stimuli: a study of event-related brain potentials.

    Science.gov (United States)

    Shieh, Kong-King; Shen, I-Hsuan

    2004-06-01

    An experiment was conducted to investigate the effect of order of report on multidimensional stimulus identification. Subjects were required to identify each two-dimensional symbol by pressing the corresponding buttons on a keypad whose two columns represented the two dimensions. Order of report was manipulated for the dimension represented by the left or right column. Both behavioral data and event-related potentials were recorded from 14 college students. Behavioral data analysis showed that order of report had a significant effect on response times. These results were consistent with those of previous studies. Analysis of event-related brain potentials showed significant differences in peak amplitude and mean amplitude in time windows of 120-250 msec. at Fz, F3, and F4 and of 350-750 msec. at Fz, F3, F4, Cz, and Pz. The data provided neurophysiological evidence that reporting dimensional values in an order consistent with natural language habits was appropriate and less cognitively demanding.
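
    The peak-amplitude and mean-amplitude measures analysed here are conventionally computed over a fixed time window of the averaged ERP (e.g. 120-250 msec. at Fz). A minimal sketch with a placeholder waveform and an assumed sampling rate:

    ```python
    # Peak and mean amplitude in a fixed ERP time window (illustrative only).
    import numpy as np

    fs = 250                                        # assumed sampling rate (Hz)
    time_ms = np.arange(-200, 800, 1000 / fs)       # epoch time axis in ms
    erp_fz = np.random.default_rng(0).standard_normal(time_ms.size)  # placeholder ERP

    window = (time_ms >= 120) & (time_ms <= 250)
    mean_amp = erp_fz[window].mean()                # mean amplitude in the window
    peak_amp = erp_fz[window].max()                 # (positive) peak amplitude
    peak_latency_ms = time_ms[window][erp_fz[window].argmax()]

    print(f"mean={mean_amp:.2f} uV, peak={peak_amp:.2f} uV at {peak_latency_ms:.0f} ms")
    ```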

  14. Glucose enhancement of event-related potentials associated with episodic memory and attention

    OpenAIRE

    Brown, Louise; Riby, Leigh

    2013-01-01

    Previous studies have reported that increasing glycaemia by a glucose-containing drink enhances memory functioning. The aim of the present study was to extend this literature by examining the effects of glucose on episodic memory as well as attention processes, and to investigate associated event-related potential (ERP) markers. Fifteen minutes after treatment (25 g glucose or placebo drink), 35 participants performed an old/new recognition memory task and a Stroop colour naming task. Consist...

  15. MEG event-related desynchronization and synchronization deficits during basic somatosensory processing in individuals with ADHD

    Directory of Open Access Journals (Sweden)

    Wang Frank

    2008-02-01

    Full Text Available Abstract Background Attention-Deficit/Hyperactivity Disorder (ADHD) is a prevalent, complex disorder which is characterized by symptoms of inattention, hyperactivity, and impulsivity. Convergent evidence from neurobiological studies of ADHD identifies dysfunction in fronto-striatal-cerebellar circuitry as the source of behavioural deficits. Recent studies have shown that regions governing basic sensory processing, such as the somatosensory cortex, show abnormalities in those with ADHD, suggesting that these processes may also be compromised. Methods We used event-related magnetoencephalography (MEG) to examine patterns of cortical rhythms in the primary (SI) and secondary (SII) somatosensory cortices in response to median nerve stimulation in 9 adults with ADHD and 10 healthy controls. Stimuli were brief (0.2 ms) non-painful electrical pulses presented to the median nerve in two counterbalanced conditions: unpredictable and predictable stimulus presentation. We measured changes in the strength, synchronicity, and frequency of cortical rhythms. Results The healthy comparison group showed strong event-related desynchrony and synchrony in SI and SII. By contrast, those with ADHD showed significantly weaker event-related desynchrony and event-related synchrony in the alpha (8–12 Hz) and beta (15–30 Hz) bands, respectively. This was most striking during random presentation of median nerve stimulation. Adults with ADHD showed significantly shorter duration of beta rebound in both SI and SII except for when the onset of the stimulus event could be predicted. In this case, the rhythmicity of S