WorldWideScience

Sample records for auditory perception event-related

  1. [Age differences of event-related potentials in the perception of successive and spatial components of auditory information].

    Science.gov (United States)

    Portnova, G V; Martynova, O V; Ivanitskiĭ, G A

    2014-01-01

    The perception of spatial and successive contexts of auditory information develops during human ontogeny. We compared event-related potentials (ERPs) recorded in 5- to 6-year-old children (N = 15) and adults (N = 15) in response to a digit series with omitted digits, to explore age differences in the perception of successive auditory information. In addition, ERPs in response to the sound of falling drops delivered binaurally were obtained to examine the spatial context of auditory information. The ERPs to the omitted digits differed significantly in the amplitude and latency of the N200 and P300 components between adults and children, which supports the hypothesis that the perception of a successive auditory structure is less automated in children than in adults. Although no significant differences were found in adults, the sound of falling drops presented to the left ears of children elicited ERPs with earlier latencies and higher amplitudes of the P300 and N400 components in the right temporal area. Stimulation of the right ear increased the amplitude of the N100 component in children. Thus, the observed differences in the auditory ERPs of children and adults reflect developmental changes in the perception of spatial and successive auditory information.

  2. Perceiving temporal regularity in music: The role of auditory event-related potentials (ERPs) in probing beat perception

    NARCIS (Netherlands)

    Honing, H.; Bouwer, F.L.; Háden, G.P.; Merchant, H.; de Lafuente, V.

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). Alongside a review of the recent literature on the perception of temporal regularity in …

  3. Perceiving temporal regularity in music: the role of auditory event-related potentials (ERPs) in probing beat perception.

    Science.gov (United States)

    Honing, Henkjan; Bouwer, Fleur L; Háden, Gábor P

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). Alongside a review of the recent literature on the perception of temporal regularity in music, we discuss to what extent ERPs, and especially the component called mismatch negativity (MMN), can be instrumental in probing beat perception. We conclude with a discussion of the pitfalls and prospects of using ERPs to probe the perception of a regular beat, in which we present possible constraints on stimulus design and discuss future perspectives.

  4. Human Auditory Processing: Insights from Cortical Event-related Potentials

    Directory of Open Access Journals (Sweden)

    Alexandra P. Key

    2016-04-01

    Human communication and language skills rely heavily on the ability to detect and process auditory inputs. This paper reviews possible applications of the event-related potential (ERP) technique to the study of cortical mechanisms supporting human auditory processing, including speech stimuli. Following a brief introduction to the ERP methodology, the remaining sections focus on demonstrating how ERPs can be used in humans to address research questions related to cortical organization, maturation and plasticity, as well as the effects of sensory deprivation and multisensory interactions. The review is intended to serve as a primer for researchers interested in using ERPs for the study of the human auditory system.
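
    The core of the ERP methodology this primer introduces is time-locked averaging: the continuous EEG is cut into epochs around stimulus onsets and averaged, so that activity phase-locked to the stimulus survives while unrelated background activity cancels. A minimal NumPy sketch (the function name and epoch window are illustrative, not taken from the paper):

```python
import numpy as np

def average_erp(eeg, event_samples, fs, tmin=-0.1, tmax=0.4):
    """Average single-channel EEG epochs time-locked to event onsets.

    eeg: 1-D array of one channel's continuous signal
    event_samples: sample indices of stimulus onsets
    fs: sampling rate in Hz; tmin/tmax: epoch window in seconds
    """
    lo, hi = int(tmin * fs), int(tmax * fs)
    # keep only epochs that fit entirely inside the recording
    epochs = [eeg[e + lo : e + hi] for e in event_samples
              if e + lo >= 0 and e + hi <= len(eeg)]
    # averaging cancels non-phase-locked noise; the evoked response remains
    return np.mean(epochs, axis=0)
```

    Averaging over N trials attenuates non-phase-locked noise by roughly the square root of N, which is why ERP studies typically require tens to hundreds of trials per condition.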

  5. Stability of auditory event-related potentials in coma research.

    Science.gov (United States)

    Schorr, Barbara; Schlee, Winfried; Arndt, Marion; Lulé, Dorothée; Kolassa, Iris-Tatjana; Lopez-Rolon, Alexander; Bender, Andreas

    2015-02-01

    Patients with unresponsive wakefulness syndrome (UWS) or in a minimally conscious state (MCS) after brain injury show significant fluctuations in their behavioural abilities over time. As the importance of event-related potentials (ERPs) in the detection of traces of consciousness increases, we investigated the retest reliability of ERPs with repeated tests at four different time points. Twelve healthy controls and 12 inpatients (8 UWS, 4 MCS; 6 traumatic, 6 non-traumatic) were tested twice a day (morning, afternoon) for 2 days with an auditory oddball task. ERPs were recorded with a 256-channel EEG system and correlated with behavioural test scores on the Coma Recovery Scale-Revised (CRS-R). The number of identifiable P300 responses varied between zero and four in both groups. Reliabilities varied between Krippendorff's α = 0.43 for within-day comparisons and α = 0.25 for between-day comparisons in the patient group. Retest reliability was strong for the CRS-R scores for all comparisons (α = 0.83-0.95). Stable auditory information processing in patients with disorders of consciousness is a prerequisite for other, even more demanding tasks and cognitive potentials. The relatively low ERP retest reliability suggests that it is necessary to perform repeated tests, especially when probing for consciousness with ERPs. A single negative ERP test result may be mistaken for proof that a UWS patient truly is unresponsive.
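
    Krippendorff's α, the reliability coefficient reported here, compares observed disagreement between repeated sessions with the disagreement expected by chance; α = 1 means perfect agreement and α ≈ 0 means chance-level agreement. For nominal ratings (e.g. P300 present/absent per session), it can be sketched as follows (a generic implementation, not the exact computation used in the study):

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """units: list of lists, each inner list = the ratings for one unit
    (e.g. one patient's P300 present/absent outcomes across sessions)."""
    units = [u for u in units if len(u) >= 2]   # only pairable units count
    n = sum(len(u) for u in units)              # total pairable ratings
    # observed disagreement: disagreeing pairs within each unit
    Do = 0.0
    for u in units:
        m, counts = len(u), Counter(u)
        pairs = sum(c * (m - c) for c in counts.values())
        Do += pairs / (m - 1)
    Do /= n
    # expected disagreement: disagreeing pairs across all ratings pooled
    totals = Counter(v for u in units for v in u)
    De = sum(c * (n - c) for c in totals.values()) / (n * (n - 1))
    return 1.0 - Do / De
```

    With two sessions per patient, each inner list simply holds the two outcomes; values below zero indicate systematic disagreement.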

  6. Deficient auditory processing in children with Asperger Syndrome, as indexed by event-related potentials.

    Science.gov (United States)

    Jansson-Verkasalo, Eira; Ceponiene, Rita; Kielinen, Marko; Suominen, Kalervo; Jäntti, Ville; Linna, Sirkka Liisa; Moilanen, Irma; Näätänen, Risto

    2003-03-06

    Asperger Syndrome (AS) is characterized by normal language development but deficient understanding and use of the intonation and prosody of speech. While individuals with AS report difficulties in auditory perception, there are no studies addressing auditory processing at the sensory level. In this study, event-related potentials (ERP) were recorded for syllables and tones in children with AS and in their control counterparts. Children with AS displayed abnormalities in transient sound-feature encoding, as indexed by the obligatory ERPs, and in sound discrimination, as indexed by the mismatch negativity. These deficits were more severe for the tone stimuli than for the syllables. These results indicate that auditory sensory processing is deficient in children with AS, and that these deficits might be implicated in the perceptual problems encountered by children with AS.

  7. The role of event-related brain potentials in assessing central auditory processing.

    Science.gov (United States)

    Alain, Claude; Tremblay, Kelly

    2007-01-01

    The perception of complex acoustic signals such as speech and music depends on the interaction between peripheral and central auditory processing. As information travels from the cochlea to primary and associative auditory cortices, the incoming sound is subjected to increasingly more detailed and refined analysis. These various levels of analyses are thought to include low-level automatic processes that detect, discriminate and group sounds that are similar in physical attributes such as frequency, intensity, and location as well as higher-level schema-driven processes that reflect listeners' experience and knowledge of the auditory environment. In this review, we describe studies that have used event-related brain potentials in investigating the processing of complex acoustic signals (e.g., speech, music). In particular, we examine the role of hearing loss on the neural representation of sound and how cognitive factors and learning can help compensate for perceptual difficulties. The notion of auditory scene analysis is used as a conceptual framework for interpreting and studying the perception of sound.

  8. Evaluation of embryonic alcoholism from auditory event-related potential in fetal rats

    Institute of Scientific and Technical Information of China (English)

    梁勇; 王正敏; 屈卫东

    2004-01-01

    The auditory event-related potential (AERP) is an electroencephalographic measure that reflects the responses of perception, memory and judgement to specific acoustic stimulation in the auditory cortex. AERP can be recorded in both active and passive modes, and both recording modes have been shown to be applicable in animals.1,2 Alcohol is a substance that can markedly affect conscious reactions in humans. Recently, AERP has been applied to study the effects of alcohol on the auditory centers of the brain. Some reports have shown dose-dependent differences in the latency, amplitude, responsiveness and waveform of AERP between persons who have and have not taken alcohol.3,4 Epidemiological investigations show that the central nervous function of the offspring of alcohol users may also be affected.5,6 Because clinical research is limited by certain factors, several animal models have been used to examine the influence of alcohol on consciousness with AERP. In the present study, rats were exposed to alcohol during fetal development, AERP was recorded as an indicator of central auditory function, and the mechanisms and characteristics of the effects of fetal alcoholism on auditory center function were analyzed and discussed.

  9. Comparison of Auditory Event-Related Potential P300 in Sighted and Early Blind Individuals

    Directory of Open Access Journals (Sweden)

    Fatemeh Heidari

    2010-06-01

    Background and Aim: Following early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. To investigate this process, event-related potentials provide accurate information about the time course of neural activation as well as perception and cognitive processes. In this study, the latency and amplitude of the auditory P300 were compared between sighted and early blind individuals aged 18-25 years. Methods: In this cross-sectional study, the auditory P300 potential was measured in a conventional oddball paradigm using two tone-burst stimuli (1000 and 2000 Hz) in 40 sighted subjects and 19 early blind subjects with a mean age of 20.94 years. Results: The mean latency of P300 in early blind subjects was significantly shorter than in sighted subjects (p=0.00). There was no significant difference in amplitude between the two groups (p>0.05). Conclusion: The reduced latency of P300 in early blind subjects compared with sighted subjects probably indicates that automatic processing and information categorization are faster in early blind subjects because of sensory compensation. It seems that neural plasticity increases the rate of auditory processing and attention in early blind subjects.

  10. Infant Auditory Processing and Event-related Brain Oscillations

    Science.gov (United States)

    Musacchia, Gabriella; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P.; Benasich, April A.

    2015-01-01

    Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition, allowing infants to home in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions is fundamental to sensory development, determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. To investigate this, 128-channel, high-density EEG responses of 4-month-old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 ms ISI and Control: 300 ms ISI), were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of theta band (3-8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillation analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected infant EEG and ERPs. In this article, we describe our method for infant EEG net …
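
    Single-trial band power of the kind reported here (theta, 3-8 Hz) can be estimated from the discrete Fourier transform of each epoch; the sketch below averages spectral power over the band's frequency bins (a simplified periodogram, not the full time-frequency pipeline such a study would typically use):

```python
import numpy as np

def band_power(epoch, fs, f_lo, f_hi):
    """Mean spectral power of one epoch within [f_lo, f_hi] Hz.

    epoch: 1-D array (one trial, one channel); fs: sampling rate in Hz.
    """
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)  # periodogram
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()
```

    Averaging this quantity over single trials (rather than over the averaged evoked response) retains induced, non-phase-locked oscillatory activity that time-locked averaging would cancel.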

  11. Event-related potentials in response to 3-D auditory stimuli.

    Science.gov (United States)

    Fuchigami, Tatsuo; Okubo, Osami; Fujita, Yukihiko; Kohira, Ryutaro; Arakawa, Chikako; Endo, Ayumi; Haruyama, Wakako; Imai, Yuki; Mugishima, Hideo

    2009-09-01

    To evaluate auditory spatial cognitive function, age correlations for event-related potentials (ERPs) in response to auditory stimuli with a Doppler effect were studied in normal children. A sound with a Doppler effect is perceived as a moving audio image. A total of 99 normal subjects (age range, 4-21 years) were tested. In the task-relevant oddball paradigm, P300 and key-press reaction time were elicited using auditory stimuli (1000 Hz fixed and enlarged tones with a Doppler effect). From the age of 4 years, the P300 latency for the enlarged tone with a Doppler effect shortened more rapidly with age than did the P300 latency for tone-pips, and the latencies for the different conditions became similar towards the late teens. The P300 of auditory stimuli with a Doppler effect may be used to evaluate auditory spatial cognitive function in children.
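
    A task-relevant oddball paradigm like the one above interleaves rare target tones among frequent standards; generators usually also enforce a minimum run of standards between targets so that targets stay unpredictable but separable. A hypothetical sequence generator (parameter names and values are illustrative, not from the study):

```python
import random

def oddball_sequence(n_trials, p_deviant=0.2, min_gap=2, seed=0):
    """Generate a standard/deviant trial sequence with at least
    `min_gap` standards between consecutive deviants."""
    rng = random.Random(seed)
    seq, since_dev = [], min_gap  # allow a deviant on the first trial
    for _ in range(n_trials):
        if since_dev >= min_gap and rng.random() < p_deviant:
            seq.append("deviant")
            since_dev = 0
        else:
            seq.append("standard")
            since_dev += 1
    return seq
```

    The P300 is then extracted from epochs time-locked to the deviant trials, with the subject pressing a key to each deviant to keep the task relevant.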

  12. An introduction to the measurement of auditory event-related potentials (ERPs)

    National Research Council Canada - National Science Library

    Remijn, Gerard B; Hasuo, Emi; Fujihira, Haruna; Morimoto, Satoshi

    2014-01-01

    … The events used by Davis were sounds, and in the decades that followed her landmark study, ERP research significantly contributed to the knowledge of auditory perception and neurophysiology we have today …

  13. A comparative study of event-related coupling patterns during an auditory oddball task in schizophrenia

    Science.gov (United States)

    Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto

    2015-02-01

    Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
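
    Of the coupling measures listed, the phase-locking value (PLV) quantifies how consistent the phase difference between two signals is across trials at a given frequency: 1 for a perfectly constant phase lag, near 0 for random phase relations. A minimal sketch using the FFT phase at the nearest frequency bin (the study's exact estimator may differ, e.g. wavelet-based):

```python
import numpy as np

def phase_locking_value(trials_a, trials_b, fs, freq):
    """PLV between two channels at `freq` Hz.

    trials_a, trials_b: (n_trials, n_samples) arrays of epoched data.
    """
    n = trials_a.shape[1]
    k = int(round(freq * n / fs))  # FFT bin closest to the target frequency
    phase_a = np.angle(np.fft.rfft(trials_a, axis=1)[:, k])
    phase_b = np.angle(np.fft.rfft(trials_b, axis=1)[:, k])
    # mean resultant length of the phase differences across trials
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
```

    Comparing PLV between a pre-stimulus baseline window and the response window, as done here, isolates task-related changes in coupling from tonic coupling.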

  14. Auditory Spatial Perception: Auditory Localization

    Science.gov (United States)

    2012-05-01


  15. Material differences of auditory source retrieval: Evidence from event-related potential studies

    Institute of Scientific and Technical Information of China (English)

    NIE AiQing; GUO ChunYan; SHEN MoWei

    2008-01-01

    Two event-related potential experiments were conducted to investigate the temporal and spatial distributions of the old/new effects for an item recognition task and an auditory source retrieval task, using pictures and Chinese characters as stimuli, respectively. Stimuli were presented at the center of the screen with their names read out simultaneously by either a female or a male voice during the study phase, and two tests were then performed separately. One test task was to differentiate the old items from the new ones; the other was to judge items read out by a particular voice during the study phase as targets and the others as non-targets. The results showed that the old/new effect of the auditory source retrieval task was more sustained over time than that of the item recognition task in both experiments, and the spatial distribution of the former effect was wider than that of the latter. Both experiments recorded a reliable old/new effect over the prefrontal cortex during the source retrieval task. However, there were some differences in the old/new effect for the auditory source retrieval task between pictures and Chinese characters, and LORETA source analysis indicated that the differences might be rooted in the temporal lobe. These findings demonstrate that the relationship between the old/new effects of the item recognition task and the auditory source retrieval task supports the dual-process model; the spatial and temporal distributions of the old/new effect elicited by the auditory source retrieval task are regulated by both the nature of the experimental material and the perceptual attributes of the voice.

  16. Intelligence and P3 Components of the Event-Related Potential Elicited during an Auditory Discrimination Task with Masking

    Science.gov (United States)

    De Pascalis, V.; Varriale, V.; Matteoli, A.

    2008-01-01

    The relationship between fluid intelligence (indexed by scores on Raven Progressive Matrices) and auditory discrimination ability was examined by recording event-related potentials from 48 women during the performance of an auditory oddball task with backward masking. High ability (HA) subjects exhibited shorter response times, greater response…

  17. A hierarchy of event-related potential markers of auditory processing in disorders of consciousness

    Directory of Open Access Journals (Sweden)

    Steve Beukema

    2016-01-01

    Functional neuroimaging of covert perceptual and cognitive processes can inform the diagnoses and prognoses of patients with disorders of consciousness, such as the vegetative and minimally conscious states (VS; MCS). Here we report an event-related potential (ERP) paradigm for detecting a hierarchy of auditory processes in a group of healthy individuals and patients with disorders of consciousness. Simple cortical responses to sounds were observed in all 16 patients; 7/16 (44%) patients exhibited markers of the differential processing of speech and noise; and 1 patient produced evidence of the semantic processing of speech (i.e. the N400 effect). In several patients, the level of auditory processing that was evident from ERPs was higher than the abilities that were evident from behavioural assessment, indicating a greater sensitivity of ERPs in some cases. However, there were no differences in auditory processing between the VS and MCS patient groups, indicating a lack of diagnostic specificity for this paradigm. Reliably detecting semantic processing by means of the N400 effect in passively listening single subjects is a challenge. Multiple assessment methods are needed in order to fully characterise the abilities of patients with disorders of consciousness.

  18. Auditory event-related responses to diphthongs in different attention conditions

    DEFF Research Database (Denmark)

    Morris, David Jackson; Steinmetzger, Kurt; Tøndering, John

    2016-01-01

    The modulation of auditory event-related potentials (ERPs) by attention generally results in larger amplitudes when stimuli are attended. We measured the P1-N1-P2 acoustic change complex elicited with synthetic overt (second formant, F2 = 1000 Hz) and subtle (F2 = 100 Hz) diphthongs, while subjects … Multivariate analysis of ERP components from the rising F2 changes showed main effects of attention on P2 amplitude and latency, and N1-P2 amplitude. P2 amplitude decreased by 40% between the attend and ignore conditions, and by 60% between the attend and divert conditions. The effect of diphthong magnitude … was significant for components from a broader temporal window which included P1 latency and N1 amplitude. N1 latency did not vary between attention conditions, a finding that may be related to stimulation with a continuous vowel. These data show that a discernible P1-N1-P2 response can be observed to subtle vowel …

  19. Auditory event-related brain potentials for an early discrimination between normal and pathological brain aging

    Institute of Scientific and Technical Information of China (English)

    Juliana Dushanova; Mario Christov

    2013-01-01

    The brain, as a system with gradually decreasing resources, maximizes its chances by reorganizing neural networks to ensure efficient performance. Auditory event-related potentials were recorded in 28 healthy volunteers, comprising 14 young and 14 elderly subjects, during an auditory discrimination motor task (low-frequency tone: right-hand movement; high-frequency tone: left-hand movement). The amplitudes of the sensory event-related potential components (N1, P2) were more pronounced with increasing age for either tone, and this effect was more pronounced for P2 amplitude in the frontal region. The latency relationship of N1 between the groups was tone-dependent, while that of P2 was tone-independent, with a prominent delay in the elderly group over all brain regions. The amplitudes of the cognitive components (N2, P3) diminished with increasing age, and the hemispheric asymmetry of N2 (but not of P3) reduced with increasing age. Prolonged N2 latency with increasing age was widespread for either tone, while the between-group difference in P3 latency was tone-dependent. High-frequency tone stimulation and movement requirements led to a P3 delay in the elderly group. The amplitude difference of the sensory components between the age groups could be due to generally greater alertness, less pronounced habituation, or a decline in the ability to withdraw attentional resources from the stimuli in the elderly group. With aging, neural circuit reorganization of brain activity affects cognitive processes. The approach used in this study is useful for an early discrimination between normal and pathological brain aging and for early treatment of cognitive alterations and dementia.

  20. Event-related delta, theta, alpha and gamma correlates to auditory oddball processing during Vipassana meditation

    Science.gov (United States)

    Delorme, Arnaud; Polich, John

    2013-01-01

    Long-term Vipassana meditators sat in meditation vs. a control (instructed mind wandering) state for 25 min while electroencephalography (EEG) was recorded; condition order was counterbalanced. For the last 4 min, a three-stimulus auditory oddball series was presented through headphones during both meditation and control periods, with no task imposed. Time-frequency analysis demonstrated that meditation, relative to the control condition, evinced decreased evoked delta (2–4 Hz) power to distracter stimuli concomitantly with a greater event-related reduction of late (500–900 ms) alpha-1 (8–10 Hz) activity, which indexed altered dynamics of attentional engagement to distracters. Additionally, standard stimuli were associated with increased early event-related alpha phase synchrony (inter-trial coherence) and evoked theta (4–8 Hz) phase synchrony, suggesting enhanced processing of the habituated standard background stimuli. Finally, during meditation, there was greater differential early-evoked gamma power to the different stimulus classes. Correlation analysis indicated that this effect stemmed from a meditation state-related increase in early distracter-evoked gamma power and phase synchrony specific to longer-term expert practitioners. The findings suggest that Vipassana meditation evokes a brain state of enhanced perceptual clarity and decreased automated reactivity. PMID:22648958

  1. Nicotine enhances an auditory Event-Related Potential component which is inversely related to habituation.

    Science.gov (United States)

    Veltri, Theresa; Taroyan, Naira; Overton, Paul G

    2017-07-01

    Nicotine is a psychoactive substance that is commonly consumed in the context of music. However, the reason why music and nicotine are co-consumed is uncertain. One possibility is that nicotine affects cognitive processes relevant to aspects of music appreciation in a beneficial way. Here we investigated this possibility using Event-Related Potentials. Participants underwent a simple decision-making task (to maintain attentional focus), responses to which were signalled by auditory stimuli. Unlike previous research looking at the effects of nicotine on auditory processing, we used complex tones that varied in pitch, a fundamental element of music. In addition, unlike most other studies, we tested non-smoking subjects to avoid withdrawal-related complications. We found that nicotine (4.0 mg, administered as gum) increased P2 amplitude in the frontal region. Since a decrease in P2 amplitude and latency is related to habituation processes, and an enhanced ability to disengage from irrelevant stimuli, our findings suggest that nicotine may cause a reduction in habituation, resulting in non-smokers being less able to adapt to repeated stimuli. A corollary of that decrease in adaptation may be that nicotine extends the temporal window during which a listener is able and willing to engage with a piece of music.

  2. Event related potentials elicited by violations of auditory regularities in patients with impaired consciousness.

    Science.gov (United States)

    Faugeras, Frédéric; Rohaut, Benjamin; Weiss, Nicolas; Bekinschtein, Tristan; Galanaud, Damien; Puybasset, Louis; Bolgert, Francis; Sergent, Claire; Cohen, Laurent; Dehaene, Stanislas; Naccache, Lionel

    2012-02-01

    Improving our ability to detect conscious processing in non-communicating patients remains a major goal of clinical cognitive neuroscience. In this perspective, several functional brain imaging tools are currently under development. Bedside cognitive event-related potentials (ERPs) derived from the EEG signal are a good candidate to explore consciousness in these patients because: (1) they have an optimal time resolution within the millisecond range, able to monitor the stream of consciousness; (2) they are fully non-invasive and relatively cheap; (3) they can be recorded continuously on dedicated individual systems to monitor consciousness and to communicate with patients; and (4) they can be used to enrich patients' autonomy through brain-computer interfaces. We recently designed an original auditory rule-extraction ERP test that evaluates cerebral responses to violations of temporal regularities that are either local in time or global across several seconds. Local violations led to an early response in auditory cortex, independent of attention or the presence of a concurrent visual task, while global violations led to a late and spatially distributed response that was only present when subjects were attentive and aware of the violations. In the present work, we report the results of this test in 65 successive recordings obtained at bedside from 49 non-communicating patients affected with various acute or chronic neurological disorders. At the individual level, we confirm the high specificity of the 'global effect': only conscious patients presented this proposed neural signature of conscious processing. Here, we also describe in detail the respective neural responses elicited by violations of local and global auditory regularities, report two additional ERP effects related to stimulus expectancy and task learning, and discuss their relation to consciousness.

  3. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

    We investigated the existence of cross-modal sensory gating as reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), in comparison with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.

  4. Auditory event-related potentials as indicators of good prognosis in coma of non-anoxic etiology

    OpenAIRE

    2010-01-01

    The aim of this study is to evaluate whether auditory event-related potentials can predict the prognosis of recovery from coma resulting from different etiologies. The results of this study could then be used as an adjuvant test in helping the clinician evaluate patients in coma. We performed P300 auditory event-related potentials on 21 patients who developed a state of coma at our institution. We compared the results to the Glasgow coma scale at the onset of coma, on day 3, and day 21. We...

  5. Distinct features of auditory steady-state responses as compared to transient event-related potentials.

    Directory of Open Access Journals (Sweden)

    Li Zhang

    Transient event-related potentials (ERPs) and steady-state responses (SSRs) have been widely employed to investigate the function of the human brain, but their relationship remains a matter of debate. Some researchers believe that SSRs can be explained by the linear summation of successive transient ERPs (superposition hypothesis), while others believe that SSRs are the result of the entrainment of a neural rhythm driven by the periodic repetition of a sensory stimulus (oscillatory entrainment hypothesis). In the present study, taking the auditory modality as an example, we aimed to clarify the distinct features of SSRs, evoked by 40-Hz and 60-Hz periodic auditory stimulation, as compared to transient ERPs, evoked by a single click. We observed that (1) SSRs were mainly generated by phase synchronization, while late latency responses (LLRs) in transient ERPs were mainly generated by power enhancement; (2) scalp topographies of LLRs in transient ERPs were markedly different from those of SSRs; (3) the powers of the 40-Hz and 60-Hz SSRs were significantly correlated with each other, but not with the N1 power in transient ERPs; (4) whereas SSRs were dominantly modulated by stimulus intensity, middle latency responses (MLRs) were not significantly modulated by either stimulus intensity or subjective loudness judgment, and LLRs were significantly modulated by subjective loudness judgment even within the same stimulus intensity. All these findings indicate that high-frequency SSRs differ from both MLRs and LLRs in transient ERPs, supporting the oscillatory entrainment hypothesis for the generation of SSRs. Therefore, SSRs can be used to explore distinct neural responses as compared to transient ERPs, and help reveal novel and reliable neural mechanisms of the human brain.
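
    The superposition hypothesis contrasted above is straightforward to simulate: take a transient ERP waveform and linearly sum time-shifted copies of it at the stimulation rate; if the hypothesis held, the result should reproduce the recorded SSR. A sketch under illustrative sampling and rate values (not the paper's simulation):

```python
import numpy as np

def superpose_transients(erp, rate_hz, fs, duration_s):
    """Linearly sum copies of a transient ERP at the stimulation rate."""
    n = int(duration_s * fs)
    out = np.zeros(n)
    step = int(fs / rate_hz)                 # samples between stimulus onsets
    for onset in range(0, n, step):
        seg = erp[: n - onset]               # truncate the last copies to fit
        out[onset : onset + len(seg)] += seg
    return out
```

    Once the transients from the first few onsets have fully overlapped, the summed waveform becomes exactly periodic at the stimulation rate, which is the signature the superposition account predicts.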

  6. Auditory stream segregation using bandpass noises: evidence from event-related potentials

    Directory of Open Access Journals (Sweden)

    Yingjiu Nie

    2014-09-01

    Full Text Available The current study measured neural responses to investigate auditory stream segregation of noise stimuli with or without clear spectral contrast. Sequences of alternating A and B noise bursts were presented to elicit stream segregation in normal-hearing listeners. The successive B bursts in each sequence maintained an equal amount of temporal separation with manipulations introduced on the last stimulus. The last B burst was either delayed for 50% of the sequences or not delayed for the other 50%. The A bursts were jittered in between every two adjacent B bursts. To study the effects of spectral separation on streaming, the A and B bursts were further manipulated by using either bandpass-filtered noises widely spaced in center frequency or broadband noises. Event-related potentials (ERPs) to the last B bursts were analyzed to compare the neural responses to the delay vs. no-delay trials in both passive and attentive listening conditions. In the passive listening condition, a trend for a possible late mismatch negativity (MMN) or late discriminative negativity (LDN) response was observed only when the A and B bursts were spectrally separate, suggesting that spectral separation in the A and B burst sequences could be conducive to stream segregation at the pre-attentive level. In the attentive condition, a P300 response was consistently elicited regardless of whether there was spectral separation between the A and B bursts, indicating the facilitative role of voluntary attention in stream segregation. The results suggest that reliable ERP measures can be used as indirect indicators for auditory stream segregation in conditions of weak spectral contrast. These findings have important implications for cochlear implant (CI) studies: as spectral information available through a CI device or simulation is substantially degraded, more attention may be required to achieve stream segregation.

  7. Cognitive processing effects on auditory event-related potentials and the evoked cardiac response.

    Science.gov (United States)

    Lawrence, Carlie A; Barry, Robert J

    2010-11-01

    The phasic evoked cardiac response (ECR) produced by innocuous stimuli requiring cognitive processing may be described as the sum of two independent response components: an initial heart rate (HR) deceleration (ECR1) and a slightly later HR acceleration (ECR2), hypothesised to reflect stimulus registration and cognitive processing load, respectively. This study investigated the effects of processing load on the ECR and the event-related potential, in an attempt to find similarities between measures found important in the autonomic orienting reflex context and the ERP literature. We examined the effects of cognitive load within-subjects, using a long inter-stimulus interval (ISI) ANS-style paradigm. Subjects (N=40) were presented with 30-35 tones (80 dB, 1000 Hz) at a variable long ISI (7-9 s), and were required to silently count, or allowed to ignore, the tones in two counterbalanced stimulus blocks. The ECR showed a significant effect of counting, allowing separation of the two ECR components by subtracting the NoCount from the Count condition. The auditory ERP showed the expected obligatory processing effects in the N1, and substantial effects of cognitive load in the late positive complex (LPC). These data offer support for ANS-CNS connections worth pursuing further in future work.
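
    The Count-minus-NoCount subtraction described above can be sketched as a point-wise difference of condition-averaged heart-rate waveforms. The numbers below are invented for illustration and are not the study's data.

    ```python
    def grand_average(trials):
        """Point-wise average across trials (each trial: HR change in bpm per second post-stimulus)."""
        n = len(trials)
        return [sum(vals) / n for vals in zip(*trials)]

    # Hypothetical second-by-second HR change (bpm) over 6 s post-stimulus:
    count_trials   = [[-1.0, -1.8, 0.5, 2.1, 1.4, 0.2],
                      [-0.8, -2.0, 0.7, 1.9, 1.6, 0.0]]
    nocount_trials = [[-0.9, -1.9, -0.4, 0.1, 0.0, -0.1],
                      [-1.1, -1.7, -0.2, 0.3, 0.2, 0.1]]

    ecr_count   = grand_average(count_trials)
    ecr_nocount = grand_average(nocount_trials)
    # The early deceleration (ECR1) appears in both conditions and cancels in the
    # difference; Count minus NoCount isolates the load-sensitive acceleration (ECR2):
    ecr2 = [c - n for c, n in zip(ecr_count, ecr_nocount)]
    ```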

  8. Auditory P300 Event-Related Potentials in Children with Sydenham's Chorea

    Directory of Open Access Journals (Sweden)

    Hasan Hüseyin Ozdemir

    2014-08-01

    Full Text Available P300 event-related potentials (ERPs), objective measures related to cognitive processing, have not been studied in Sydenham's chorea (SC) patients. Purpose: To assess cognitive impairment with P300 ERPs. Method: Seventeen patients with SC and 20 healthy children were included. The Stanford–Binet test was used for psychometric assessment, and an odd-ball paradigm was used for auditory ERPs. Results: There was no significant difference in P300 latencies between the SC-pretreatment group, SC-posttreatment group and control group (p > 0.05). Mean interpeak latencies in the SC-pretreatment and SC-posttreatment groups showed significant prolongation compared with the control group (p < 0.05). Mean interpeak latencies in the SC-posttreatment group were significantly decreased compared with the SC-pretreatment group (p < 0.05). Compared to controls, patients did not show a significant difference on the Stanford–Binet intelligence examination. Conclusion: This report suggests that interpeak latencies and amplitudes of P300 ERPs could be useful for detecting and monitoring cognitive impairment in SC patients.

  9. Neural effects of nicotine during auditory selective attention in smokers: an event-related potential study.

    Science.gov (United States)

    Knott, Verner; Blais, Crystal; Scherling, Carole; Camarda, Jordan; Millar, Anne; Fisher, Derek; McIntosh, Judy

    2006-01-01

    Acute nicotine has been found to improve task performance in smokers after smoking abstinence, but the attentional processes mediating these improvements are unclear. Since scalp-recorded event-related potentials (ERPs) have been shown to be sensitive indicators of selective attention, the effects of acutely administered nicotine were examined on ERPs and concomitant behavioural performance measures in an auditory selective attention task. Ten (6 males) overnight smoking-abstinent cigarette smokers received nicotine gum (4 mg) in a randomized, double-blind, placebo-controlled, crossover design. In a dichotic listening task [which required participants to attend and detect (target) deviant stimuli in one ear and to ignore similar stimuli in the other ear] which included ERP recordings and assessment of response speed and accuracy measures, nicotine gum failed to alter behavioural performance or amplitudes of ERP components sensitive to selective attention [reflected in the N100 and negative difference (Nd) component] or to pre-attentive detection of acoustic change [reflected in the mismatch negativity (MMN) component]. However, nicotine did influence the speed of these voluntary selective processes, as reflected by shortened latencies of the early Nd component. The findings are discussed in relation to the stimulus filter theory of smoking, and with respect to nicotine's actions on involuntary and controlled aspects of selective attention processes. Copyright (c) 2006 S. Karger AG, Basel.

  10. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  11. Seeing sounds and hearing colors: an event-related potential study of auditory-visual synesthesia.

    Science.gov (United States)

    Goller, Aviva I; Otten, Leun J; Ward, Jamie

    2009-10-01

    In auditory-visual synesthesia, sounds automatically elicit conscious and reliable visual experiences. It is presently unknown whether this reflects early or late processes in the brain. It is also unknown whether adult audiovisual synesthesia resembles auditory-induced visual illusions that can sometimes occur in the general population or whether it resembles the electrophysiological deflection over occipital sites that has been noted in infancy and has been likened to synesthesia. Electrical brain activity was recorded from adult synesthetes and control participants who were played brief tones and required to monitor for an infrequent auditory target. The synesthetes were instructed to attend either to the auditory or to the visual (i.e., synesthetic) dimension of the tone, whereas the controls attended to the auditory dimension alone. There were clear differences between synesthetes and controls that emerged early (100 msec after tone onset). These differences tended to lie in deflections of the auditory-evoked potential (e.g., the auditory N1, P2, and N2) rather than the presence of an additional posterior deflection. The differences occurred irrespective of what the synesthetes attended to (although attention had a late effect). The results suggest that differences between synesthetes and others occur early in time, and that synesthesia is qualitatively different from similar effects found in infants and certain auditory-induced visual illusions in adults. In addition, we report two novel cases of synesthesia in which colors elicit sounds, and vice versa.

  12. Saturation of auditory short-term memory causes a plateau in the sustained anterior negativity event-related potential.

    Science.gov (United States)

    Alunni-Menichini, Kristelle; Guimond, Synthia; Bermudez, Patrick; Nolden, Sophie; Lefebvre, Christine; Jolicoeur, Pierre

    2014-12-10

    The maintenance of information in auditory short-term memory (ASTM) is accompanied by a sustained anterior negativity (SAN) in the event-related potential measured during the retention interval of simple auditory memory tasks. Previous work on ASTM showed that the amplitude of the SAN increases in negativity as the number of maintained items increases. The aim of the current study was to measure the SAN and observe its behavior beyond the point of saturation of auditory short-term memory. We used atonal pure tones in sequences of 2, 4, 6, or 8 tones. Our results showed that the amplitude of the SAN increased in negativity from 2 to 4 items and then levelled off from 4 to 8 items. Behavioral results suggested that the average span in the task was slightly below 3, which was consistent with the observed plateau in the electrophysiological results. Furthermore, the amplitude of the SAN predicted individual differences in auditory memory capacity. The results support the hypothesis that the SAN is an electrophysiological index of brain activity specifically related to the maintenance of auditory information in ASTM. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Psychology of auditory perception.

    Science.gov (United States)

    Lotto, Andrew; Holt, Lori

    2011-09-01

    Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011, 2, 479-489. DOI: 10.1002/wcs.123 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  14. Impairment in predictive processes during auditory mismatch negativity in ScZ : Evidence from event-related fields

    NARCIS (Netherlands)

    Sauer, Andreas; Zeev-Wolf, Maor; Grent-'t-Jong, Tineke; Recasens, Marc; Wacongne, C.; Wibral, Michael; Helbling, Saskia; Peled, Abraham; Grinshpoon, Alexander; Singer, Wolf; Goldstein, Abraham; Uhlhaas, Peter J

    2017-01-01

    Patients with schizophrenia (ScZ) show pronounced dysfunctions in auditory perception but the underlying mechanisms as well as the localization of the deficit remain unclear. To examine these questions, the current study examined whether alterations in the neuromagnetic mismatch negativity (MMNm) in

  15. Cortical Auditory Disorders: A Case of Non-Verbal Disturbances Assessed with Event-Related Brain Potentials

    Directory of Open Access Journals (Sweden)

    Sönke Johannes

    1998-01-01

    Full Text Available In the auditory modality, there has been considerable debate about some aspects of cortical disorders, especially auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30) and by event-related potentials (ERPs) recorded in a modified 'oddball' paradigm. Auditory ERPs revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.

  16. Dysfunctional information processing during an auditory event-related potential task in individuals with Internet gaming disorder.

    Science.gov (United States)

    Park, M; Choi, J-S; Park, S M; Lee, J-Y; Jung, H Y; Sohn, B K; Kim, S N; Kim, D J; Kwon, J S

    2016-01-26

    The prevalence of Internet gaming disorder (IGD), which leads to serious impairments in cognitive, psychological and social functions, has gradually been increasing. However, very few studies conducted to date have addressed the event-related potential (ERP) patterns in IGD. Identifying the neurobiological characteristics of IGD is important to elucidate the pathophysiology of this condition. P300 is a useful ERP component for investigating electrophysiological features of the brain. The aims of the present study were to investigate differences between patients with IGD and healthy controls (HCs) with regard to the P300 component of the ERP during an auditory oddball task, and to examine the relationship of this component to the severity of IGD symptoms in identifying the relevant neurophysiological features of IGD. Twenty-six patients diagnosed with IGD and 23 age-, sex-, education- and intelligence quotient-matched HCs participated in this study. During an auditory oddball task, participants had to respond to the rare, deviant tones presented in a sequence of frequent, standard tones. The IGD group exhibited a significant reduction in P300 amplitudes in response to deviant tones compared with the HC group at the midline centro-parietal electrode regions. We also found a negative correlation between the severity of IGD and P300 amplitudes. The reduced amplitude of the P300 component in an auditory oddball task may reflect dysfunction in auditory information processing and cognitive capabilities in IGD. These findings suggest that reduced P300 amplitudes may be a candidate neurobiological marker for IGD.
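
    As a rough illustration of the dependent measure in such oddball studies, P300 amplitude is typically taken as the maximum of the ERP at a centro-parietal site within a post-stimulus search window. A minimal sketch, with invented amplitudes rather than the study's data:

    ```python
    def p300_peak(erp, times_ms, window=(250, 500)):
        """Return the maximum amplitude and its latency within the P300 search window."""
        in_window = [(amp, t) for amp, t in zip(erp, times_ms) if window[0] <= t <= window[1]]
        return max(in_window)  # max() compares amplitude first

    # Hypothetical Pz ERP (µV) to deviant tones, sampled every 50 ms from 0 to 600 ms:
    times = list(range(0, 650, 50))
    erp_deviant = [0.0, -1.2, 2.0, -1.5, 0.5, 2.5, 5.8, 7.4, 6.0, 3.1, 1.2, 0.4, 0.1]

    amp, lat = p300_peak(erp_deviant, times)  # 7.4 µV at 350 ms
    ```

    A group comparison or severity correlation would then be computed on these per-subject peak amplitudes.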

  17. The mismatch-negativity (MMN) component of the auditory event-related potential to violations of abstract regularities: a review.

    Science.gov (United States)

    Paavilainen, Petri

    2013-05-01

    The mismatch-negativity (MMN) component of the event-related potential (ERP) has been extensively used to study the preattentive processing and storage of regularities in basic physical stimulus features (e.g., frequency, intensity, spatial location). However, studies reviewed in the present article reveal that the auditory analysis reflected by MMN also includes the detection and use of more complex, "abstract", regularities based, for example, on relationships between various physical features of the stimuli or in patterns present in the auditory stream. When these regularities are violated, then MMN is elicited. Thus, the central auditory system performs even at the pre-attentive, auditory-cortex level surprisingly "cognitive" operations, such as generalization leading to simple concept formation, rule extraction and prediction of future stimuli. The information extracted often seems to be in an implicit form, not directly available to conscious processes and difficult to express verbally. It can nevertheless influence the behavior of the subject, for example, the regularity violations can temporarily impair performance in the primary task. Neural, behavioral and cognitive events associated with the development of the regularity representations are discussed.

  19. Auditory event-related response in visual cortex modulates subsequent visual responses in humans.

    Science.gov (United States)

    Naue, Nicole; Rach, Stefan; Strüber, Daniel; Huster, Rene J; Zaehle, Tino; Körner, Ursula; Herrmann, Christoph S

    2011-05-25

    Growing evidence from electrophysiological data in animal and human studies suggests that multisensory interaction is not exclusively a higher-order process, but also takes place in primary sensory cortices. Such early multisensory interaction is thought to be mediated by means of phase resetting. The presentation of a stimulus to one sensory modality resets the phase of ongoing oscillations in another modality such that processing in the latter modality is modulated. In humans, evidence for such a mechanism is still sparse. In the current study, the influence of an auditory stimulus on visual processing was investigated by measuring the electroencephalogram (EEG) and behavioral responses of humans to visual, auditory, and audiovisual stimulation with varying stimulus-onset asynchrony (SOA). We observed three distinct oscillatory EEG responses in our data. An initial gamma-band response around 50 Hz was followed by a beta-band response around 25 Hz, and a theta response around 6 Hz. The latter was enhanced in response to cross-modal stimuli as compared to either unimodal stimuli. Interestingly, the beta response to unimodal auditory stimuli was dominant in electrodes over visual areas. The SOA between auditory and visual stimuli--albeit not consciously perceived--had a modulatory impact on the multisensory evoked beta-band responses; i.e., the amplitude depended on SOA in a sinusoidal fashion, suggesting a phase reset. These findings further support the notion that parameters of brain oscillations such as amplitude and phase are essential predictors of subsequent brain responses and might be one of the mechanisms underlying multisensory integration.

  20. Effects of second language study of phonemic discrimination and auditory event-related potentials in adults.

    Science.gov (United States)

    Grubb, J D; Bush, A M; Geist, C R

    1998-10-01

    This study was designed to investigate the effects of acquisition of a second language on auditory event-related brain potentials and discrimination of foreign-language phonemes in 36 women (ages 18 to 47 years) and 25 men (ages 18 to 36 years) of varying linguistic background, in response to synthetic versions of Japanese phonemes. Subjects were subsequently tested on discrimination between spoken Japanese phonemes. Analysis indicated that the men and women differed in phonological processing and in the way acquisition of the second language affected phonological processing.

  1. The Use of Auditory Event-Related Potentials in Alzheimer's Disease Diagnosis

    Directory of Open Access Journals (Sweden)

    Fabrizio Vecchio

    2011-01-01

    Full Text Available Event-related potentials (ERPs) are important clinical and research instruments in neuropsychiatry, particularly due to their strategic role for the investigation of brain function. These techniques are often underutilized in the evaluation of neurological and psychiatric disorders, but ERPs are noninvasive instruments that directly reflect cortical neuronal activity. Previous studies using the P300, P3a, and MMN components of the ERP to study dementing illness are reviewed. The results suggest that particularly the P300 brain potential is sensitive to Alzheimer's disease processes during its early stages, and that easily performed stimulus discrimination tasks are the clinically most useful. Finally, these data suggest that the P300 ERP can aid in the diagnosis of dementia and may help in the assessment of early Alzheimer's disease.

  2. Sensitivity of P300 auditory event-related potentials for assessing cognitive impairment in elderly type 2 diabetic patients

    Institute of Scientific and Technical Information of China (English)

    Hong Yang; Junhong She; Xianfu Lu; Rihong Peng

    2008-01-01

    BACKGROUND: In previous studies, cognitive function in elderly type 2 diabetic patients was evaluated by psychometric tests. These studies have confirmed that the P300 event-related potential is an objective way of assessing cognitive function. OBJECTIVE: To analyze the objectivity of P300 for assessment of cognitive function in elderly type 2 diabetic patients. DESIGN, TIME AND SETTING: This case-control experiment was performed at the Department of Endocrinology of the Fourth Affiliated Hospital, Guangxi Medical University from January 2004 to December 2006. PARTICIPANTS: Seventy-two patients (38 males and 34 females) with type 2 diabetes mellitus were enrolled in this study. The patients were divided into those with diabetes alone (diabetes alone group) (n=38) and those with diabetes and cerebral ischemia (diabetes and cerebral ischemia group) (n=34). A further 31 healthy individuals (16 males and 15 females), who received health examinations over the same period, were included as normal controls (normal control group). METHODS: All subjects were assessed by the Mini-Mental State Examination (MMSE). Abnormalities in cognitive functions were identified by analyzing the auditory P300 event-related potentials. MAIN OUTCOME MEASURES: Auditory event-related potentials and MMSE scores. Multiple linear regression analysis was conducted using the "enter" method with the 72 elderly patients with type 2 diabetes mellitus. P3 latency, P3 amplitude and N2 latency served as dependent variables. Age, sex, education, course of the disease, glycosylated hemoglobin, and ischemic brain damage were used as independent variables. RESULTS: No significant difference in MMSE scores was detected between the diabetes alone and normal control groups (P > 0.05). The MMSE score was significantly lower in the diabetes and cerebral ischemia group (P < 0.01) than in the normal control group. N2 and P3 latencies of the auditory event-related potential were significantly longer, and P3 amplitude was

  3. Event-related EEG power modulations and phase connectivity indicate the focus of attention in an auditory own name paradigm.

    Science.gov (United States)

    Lechinger, Julia; Wielek, Tomasz; Blume, Christine; Pichler, Gerald; Michitsch, Gabriele; Donis, Johann; Gruber, Walter; Schabus, Manuel

    2016-08-01

    Estimating cognitive abilities in patients suffering from Disorders of Consciousness (DOC) remains challenging. One cognitive task to address this issue is the so-called own name paradigm, in which subjects are presented with first names including their own name. In the active condition, a specific target name has to be silently counted. We recorded EEG during this task in 24 healthy controls, 8 patients suffering from Unresponsive Wakefulness Syndrome (UWS) and 7 minimally conscious (MCS) patients. The EEG was analysed with respect to amplitude as well as phase modulations and connectivity. Results showed that general reactivity in the delta, theta and alpha frequency bands (event-related synchronisation/desynchronisation, ERS/ERD, and phase locking between trials and electrodes) toward auditory stimulation was higher in controls than in patients. In controls, delta ERS and lower-alpha ERD indexed the focus of attention in both conditions, and late theta ERS only in the active condition. Additionally, phase locking between trials and delta phase connectivity were highest for own names in the passive and for targets in the active condition. In patients, clear stimulus-specific differences could not be detected. However, MCS patients could reliably be differentiated from UWS patients based on their general event-related delta and theta increase, independent of the type of stimulus. In conclusion, the EEG signature of the active own name paradigm revealed instruction-following in healthy participants. On the other hand, DOC patients did not show clear stimulus-specific processing. General reactivity toward any auditory input, however, allowed for a reliable differentiation between MCS and UWS patients.
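
    The ERD/ERS measures referred to here are conventionally expressed as percent band-power change relative to a pre-stimulus baseline. A minimal sketch with invented power values (not the study's data):

    ```python
    def erd_ers(post_power, baseline_power):
        """Event-related (de)synchronisation as percent change from a pre-stimulus baseline.
        Negative values indicate ERD (power decrease), positive values ERS (power increase)."""
        return 100.0 * (post_power - baseline_power) / baseline_power

    # Hypothetical band power (µV²) at one electrode, baseline vs. post-stimulus:
    alpha_baseline, alpha_post = 8.0, 5.0  # alpha power drops after the stimulus
    delta_baseline, delta_post = 2.0, 3.5  # delta power rises after the stimulus

    print(erd_ers(alpha_post, alpha_baseline))  # -37.5 (alpha ERD)
    print(erd_ers(delta_post, delta_baseline))  # 75.0 (delta ERS)
    ```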

  4. Auditory adaptation improves tactile frequency perception.

    Science.gov (United States)

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

  5. Validation of the Emotiv EPOC EEG system for research quality auditory event-related potentials in children.

    Science.gov (United States)

    Badcock, Nicholas A; Preece, Kathryn A; de Wit, Bianca; Glenn, Katharine; Fieder, Nora; Thie, Johnson; McArthur, Genevieve

    2015-01-01

    Background. Previous work has demonstrated that a commercial gaming electroencephalography (EEG) system, Emotiv EPOC, can be adjusted to provide valid auditory event-related potentials (ERPs) in adults that are comparable to ERPs recorded by a research-grade EEG system, Neuroscan. The aim of the current study was to determine if the same was true for children. Method. An adapted Emotiv EPOC system and Neuroscan system were used to make simultaneous EEG recordings in nineteen 6- to 12-year-old children under "passive" and "active" listening conditions. In the passive condition, children were instructed to watch a silent DVD and ignore 566 standard (1,000 Hz) and 100 deviant (1,200 Hz) tones. In the active condition, they listened to the same stimuli, and were asked to count the number of 'high' (i.e., deviant) tones. Results. Intraclass correlations (ICCs) indicated that the ERP morphology recorded with the two systems was very similar for the P1, N1, P2, N2, and P3 ERP peaks (r = .82 to .95) in both passive and active conditions, and less so, though still strong, for mismatch negativity ERP component (MMN; r = .67 to .74). There were few differences between peak amplitude and latency estimates for the two systems. Conclusions. An adapted EPOC EEG system can be used to index children's late auditory ERP peaks (i.e., P1, N1, P2, N2, P3) and their MMN ERP component.
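
    The intraclass correlations reported above for between-system agreement can be computed from a two-way ANOVA decomposition. Below is a generic ICC(2,1) sketch (two-way random effects, absolute agreement, single measures); the amplitude values are invented, and this is not necessarily the exact ICC variant the authors used.

    ```python
    def icc2_1(rows):
        """ICC(2,1): `rows` holds one list per target (e.g., time point or peak),
        with one value per rater (e.g., EEG system)."""
        n, k = len(rows), len(rows[0])
        grand = sum(sum(r) for r in rows) / (n * k)
        row_means = [sum(r) / k for r in rows]
        col_means = [sum(r[j] for r in rows) / n for j in range(k)]
        ss_total = sum((x - grand) ** 2 for r in rows for x in r)
        ss_rows = k * sum((m - grand) ** 2 for m in row_means)
        ss_cols = n * sum((m - grand) ** 2 for m in col_means)
        ms_rows = ss_rows / (n - 1)                              # between-targets mean square
        ms_cols = ss_cols / (k - 1)                              # between-raters mean square
        ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual mean square
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    # Hypothetical ERP amplitudes (µV) at 8 latencies, recorded simultaneously by two systems:
    neuroscan = [0.0, 1.0, 3.0, 6.0, 4.0, 2.0, 0.0, -1.0]
    epoc      = [0.1, 1.1, 2.9, 6.2, 3.8, 2.1, -0.1, -0.9]
    icc = icc2_1(list(zip(neuroscan, epoc)))  # close to 1 for near-identical waveforms
    ```

    Waveforms that agree closely across systems, as in the study's r = .82 to .95 range, yield ICC values near 1.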

  6. Auditory evoked potentials in young patients with Down syndrome. Event-related potentials (P3) and histaminergic system.

    Science.gov (United States)

    Seidl, R; Hauser, E; Bernert, G; Marx, M; Freilinger, M; Lubec, G

    1997-06-01

    Subjects with Down syndrome exhibit various types of cognitive impairment. Besides abnormalities in a number of neurotransmitter systems (e.g. cholinergic), histaminergic deficits have recently been identified. Brainstem auditory evoked potentials (BAEPs) and auditory event-related potentials (ERPs) were recorded from 10 children (aged 11-20 years) with Down syndrome and from 10 age- and sex-matched healthy control subjects. In subjects with Down syndrome, BAEPs revealed shortened latencies for peaks III and V with shortened interpeak latencies I-III and I-V. ERPs showed a delay of components N1, P2, N2 and P3. In addition, subjects with Down syndrome failed to show P3 amplitude reduction during repeated stimulation. To evaluate the cognitive effects of histaminergic dysfunction, ERPs were recorded from 12 healthy adults (aged 20-28 years) before and after antihistaminergic intervention (pheniramine) compared to placebo. Whereas components N1, P2 and N2 remained unchanged after H1-receptor antagonism, P3 latency increased and P3 amplitude showed no habituation in response to repeated stimulation. The results suggest that the characteristic neurofunctional abnormalities present in children with Down syndrome are the consequence of a combination of structural and neurochemical aberrations. The second finding was that antihistaminergic treatment affects information processing, as indexed by ERPs, in a manner similar to anticholinergic treatment.

  7. Speech perception as complex auditory categorization

    Science.gov (United States)

    Holt, Lori L.

    2002-05-01

    Despite a long and rich history of categorization research in cognitive psychology, very little work has addressed the issue of complex auditory category formation. This is especially unfortunate because the general underlying cognitive and perceptual mechanisms that guide auditory category formation are of great importance to understanding speech perception. I will discuss a new methodological approach to examining complex auditory category formation that specifically addresses issues relevant to speech perception. This approach utilizes novel nonspeech sound stimuli to gain full experimental control over listeners' history of experience. As such, the course of learning is readily measurable. Results from this methodology indicate that the structure and formation of auditory categories are a function of the statistical input distributions of sound that listeners hear, aspects of the operating characteristics of the auditory system, and characteristics of the perceptual categorization system. These results have important implications for phonetic acquisition and speech perception.

  8. Long-term neurocognitive outcome and auditory event-related potentials after complex febrile seizures in children.

    Science.gov (United States)

    Tsai, Min-Lan; Hung, Kun-Long; Tsan, Ying-Ying; Tung, William Tao-Hsin

    2015-06-01

    Whether prolonged or complex febrile seizures (FS) produce long-term injury to the hippocampus is a critical question concerning the neurocognitive outcome of these seizures. Event-related potential (ERP) recording from the scalp is a noninvasive technique reflecting the sensory and cognitive processes associated with attention tasks. This study aimed to investigate the long-term outcome of neurocognitive and attention functions, and evaluated auditory event-related potentials, in children who had experienced complex FS in comparison with other types of FS. One hundred and forty-seven children aged more than 6 years who had experienced complex FS, simple single FS, simple recurrent FS, or afebrile seizures (AFS) after FS, together with age-matched healthy controls, were enrolled. Patients were evaluated with Wechsler Intelligence Scale for Children (WISC; Chinese WISC-IV) scores, behavior test scores (Chinese version of Conners' continuous performance test, CPT II V.5), and behavior rating scales. Auditory ERPs were recorded in each patient. Patients who had experienced complex FS exhibited significantly lower full-scale intelligence quotient (FSIQ), perceptual reasoning index, and working memory index scores than did the control group but did not show significant differences in CPT scores, behavior rating scales, or ERP latencies and amplitudes compared with the other groups with FS. We found a significant decrease in the FSIQ and four indices of the WISC-IV, higher behavior rating scales, a trend of increased CPT II scores, and significantly delayed P300 latency and reduced P300 amplitude in the patients with AFS after FS. We conclude that there is an effect on cognitive function in children who have experienced complex FS and in patients who developed AFS after FS. The results indicated that the WISC-IV is more sensitive in detecting cognitive abnormality than ERP. Cognitive impairment, including perceptual reasoning and working memory defects, was identified in children who had experienced complex FS.

  10. Effects of acute nicotine on event-related potential and performance indices of auditory distraction in nonsmokers.

    Science.gov (United States)

    Knott, Verner J; Bolton, Kiley; Heenan, Adam; Shah, Dhrasti; Fisher, Derek J; Villeneuve, Crystal

    2009-05-01

    Although nicotine has been purported to enhance attentional processes, this has been evidenced mostly in tasks of sustained attention, and its effects on selective attention and attentional control under conditions of distraction are less convincing. This study investigated the effects of nicotine on distractibility in 21 (11 males) nonsmokers with event-related potentials (ERPs) and behavioral performance measures extracted from an auditory discrimination task requiring a choice reaction time response to short- and long-duration tones, with and without embedded deviants. Administered in a randomized, double-blind, placebo-controlled crossover design, nicotine gum (6 mg) failed to counter deviant-elicited behavioral distraction characterized by longer reaction times and increased response errors. Of the deviant-elicited ERP components, nicotine did not alter the P3a-indexed attentional switching to the deviant, but in females, it tended to diminish the automatic processing of the deviant as shown by a smaller mismatch negativity component, and it attenuated attentional reorienting following deviant-elicited distraction, as reflected by a reduced reorienting negativity ERP component. Results are discussed in relation to attentional models of nicotine and with respect to future research directions.

  11. Hippocampal P3-Like Auditory Event-Related Potentials are Disrupted in a Rat Model of Cholinergic Degeneration in Alzheimer's Disease: Reversal by Donepezil Treatment

    DEFF Research Database (Denmark)

    Laursen, Bettina; Mørk, Arne; Kristiansen, Uffe

    2014-01-01

    P300 (P3) event-related potentials (ERPs) have been suggested to be an endogenous marker of cognitive function and auditory oddball paradigms are frequently used to evaluate P3 ERPs in clinical settings. Deficits in P3 amplitude and latency reflect some of the neurological dysfunctions related...

  12. Arousal and attention re-orienting in autism spectrum disorders: evidence from auditory event-related potentials.

    Science.gov (United States)

    Orekhova, Elena V; Stroganova, Tatiana A

    2014-01-01

    The extended phenotype of autism spectrum disorders (ASD) includes a combination of arousal regulation problems, sensory modulation difficulties, and attention re-orienting deficits. Slow and inefficient re-orienting to stimuli that appear outside of the attended sensory stream is thought to be especially detrimental for social functioning. Event-related potentials (ERPs) and magnetic fields (ERFs) may help to reveal which processing stages underlying the brain response to unattended but salient sensory events are affected in individuals with ASD. Previous research focusing on two sequential stages of the brain response (automatic detection of physical changes in the auditory stream, indexed by mismatch negativity (MMN), and evaluation of stimulus novelty, indexed by the P3a component) found in individuals with ASD either increased, decreased, or normal processing of deviance and novelty. The review examines these apparently conflicting results, notes gaps in previous findings, and suggests a potentially unifying hypothesis relating the dampened responses to unattended sensory events to a deficit in the rapid arousal process. Specifically, "sensory gating" studies focused on pre-attentive arousal consistently demonstrate that the brain response to unattended and temporally novel sounds in ASD is already affected at around 100 ms after stimulus onset. We hypothesize that abnormalities in nicotinic cholinergic arousal pathways, previously reported in individuals with ASD, may contribute to these ERP/ERF aberrations and result in an attention re-orienting deficit. Such cholinergic dysfunction may be present in individuals with ASD early in life and can influence both sensory processing and attention re-orienting behavior. Identification of early neurophysiological biomarkers for cholinergic deficit would help to detect infants "at risk" who could potentially benefit from particular types of therapies or interventions.
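    The MMN and P3a findings surveyed above all rest on the same basic computation: averaging epochs time-locked to standard and deviant stimuli and subtracting the two averages. A self-contained sketch with synthetic single-trial data (amplitudes, latencies, noise levels, and trial counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                   # sampling rate (Hz)
epoch_t = np.arange(-0.1, 0.5, 1 / fs)     # -100 to 500 ms around onset

def simulate_epochs(n_trials, deflection_uv):
    """Synthetic single-trial epochs: broadband noise plus a negative
    deflection near 200 ms; deviants get a larger deflection (illustrative)."""
    bump = -deflection_uv * np.exp(-((epoch_t - 0.2) / 0.04) ** 2)
    return bump + rng.normal(0.0, 3.0, (n_trials, epoch_t.size))

standards = simulate_epochs(566, deflection_uv=1.0)
deviants = simulate_epochs(100, deflection_uv=3.0)

# Averaging attenuates trial-to-trial noise by ~1/sqrt(n_trials); the
# deviant-minus-standard difference wave isolates the MMN-like response.
difference = deviants.mean(axis=0) - standards.mean(axis=0)
peak_idx = np.argmin(difference)
print(f"{difference[peak_idx]:.1f} uV at {epoch_t[peak_idx] * 1000:.0f} ms")
```

    Because single-trial noise here is larger than the deviance response itself, the difference wave is only interpretable after averaging, which is why trial counts and noise levels constrain every ERP comparison discussed in these records.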

  15. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    Science.gov (United States)

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages.

  17. Emotional modulation of attention affects time perception: evidence from event-related potentials.

    Science.gov (United States)

    Tamm, Maria; Uusberg, Andero; Allik, Jüri; Kreegipuu, Kairi

    2014-06-01

    Emotional effects on human time perception are generally attributed to arousal speeding up or slowing down the internal clock. The aim of the present study is to investigate the less frequently considered role of attention as an alternative mediator of these effects with the help of event-related potentials (ERPs). Participants produced short intervals (0.9, 1.5, 2.7, and 3.3s) while viewing high arousal images with pleasant and unpleasant contents in comparison to neutral images. Behavioral results revealed that durations were overproduced for the 0.9s interval whereas, for 2.7 and 3.3s intervals, underproduction was observed. The effect of affective valence was present for the shorter durations and decreased as the target intervals became longer. More specifically, the durations for unpleasant images were less overproduced in the 0.9s intervals, and for the 1.5s trials, durations for unpleasant images were slightly underproduced, compared to pleasant images, which were overproduced. The analysis of different ERP components suggests possible attention processes related to the timing of affective images in addition to changes in pacemaker speed. Early Posterior Negativity (EPN) was larger for positive than for negative images, indicating valence-specific differences in activation of early attention mechanisms. Within the early P1 and the Late Positive Potential (LPP) components, both pleasant and unpleasant stimuli exhibited equal affective modulation. Contingent Negative Variation (CNV) remained independent of both timing performance and affective modulation. This pattern suggests that both pleasant and unpleasant stimuli enhanced arousal and captured attention, but the latter effect was more pronounced for pleasant stimuli. The valence-specificity of affective attention revealed by ERPs combined with behavioral timing results suggests that attention processes indeed contribute to emotion-induced temporal distortions, especially for longer target intervals.
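    The internal-clock account that frames these results can be sketched as a pacemaker-accumulator model: arousal multiplies the pacemaker rate, while attentional capture by the image makes some pulses go uncounted. Both mechanisms below, and all parameter values, are illustrative assumptions, not the authors' fitted model.

```python
def produced_interval(target_s, pacemaker_hz=100.0, arousal_gain=1.0,
                      attention_capture=0.0):
    """Pacemaker-accumulator sketch: a pulse criterion is learned under
    neutral conditions, then the subject produces the interval by counting
    pulses until that criterion is reached. arousal_gain scales clock speed;
    attention_capture is the fraction of pulses lost to the distractor."""
    criterion = target_s * pacemaker_hz                        # learned count
    effective_rate = pacemaker_hz * arousal_gain * (1.0 - attention_capture)
    return criterion / effective_rate

# Attention captured by an emotional image -> pulses missed -> the criterion
# is reached later, i.e. the short interval is overproduced:
print(produced_interval(0.9, attention_capture=0.2))  # → 1.125 (> 0.9 s)
# Higher arousal -> faster clock -> the interval is underproduced:
print(produced_interval(0.9, arousal_gain=1.25))      # → 0.72 (< 0.9 s)
```

    In this toy form the two mechanisms trade off multiplicatively, which is one way to read the study's claim that attention effects ride on top of, rather than replace, arousal-driven changes in pacemaker speed.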

  18. Perception of Complex Auditory Scenes

    Science.gov (United States)

    2014-07-02

    facility is a 4.3-m diameter geodesic sphere housed in an anechoic chamber. 277 Bose 11-cm full-range loudspeakers are mounted on the surface of the...conduction to loudness judgments, hearing damage risk criteria, and auditory localization. The purpose of this line of research was to develop and

  19. Effects of temporal trial-by-trial cuing on early and late stages of auditory processing: evidence from event-related potentials.

    Science.gov (United States)

    Lampar, Alexa; Lange, Kathrin

    2011-08-01

    Temporal-cuing studies show faster responding to stimuli at an attended versus unattended time point. Whether the mechanisms involved in this temporal orienting of attention are located early or late in the processing stream has not been answered unequivocally. To address this question, we measured event-related potentials in two versions of an auditory temporal cuing task: stimuli at the uncued time point either required a response (Experiment 1) or did not (Experiment 2). In both tasks, attention was oriented to the cued time point, but attention could be selectively focused on the cued time point only in Experiment 2. In both experiments, temporal orienting was associated with a late positivity in the time range of the P3. An early enhancement in the time range of the auditory N1 was observed only in Experiment 2. Thus, temporal attention improves auditory processing at early sensory levels only when it can be focused selectively.

  20. Are auditory percepts determined by experience?

    Science.gov (United States)

    Monson, Brian B; Han, Shui'Er; Purves, Dale

    2013-01-01

    Audition--what listeners hear--is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.

  2. Phonetic categorization in auditory word perception.

    Science.gov (United States)

    Ganong, W F

    1980-02-01

    To investigate the interaction in speech perception of auditory information and lexical knowledge (in particular, knowledge of which phonetic sequences are words), acoustic continua varying in voice onset time were constructed so that for each acoustic continuum, one of the two possible phonetic categorizations made a word and the other did not. For example, one continuum ranged between the word dash and the nonword tash; another used the nonword dask and the word task. In two experiments, subjects showed a significant lexical effect--that is, a tendency to make phonetic categorizations that make words. This lexical effect was greater at the phoneme boundary (where auditory information is ambiguous) than at the ends of the continua. Hence the lexical effect must arise at a stage of processing sensitive to both lexical knowledge and auditory information.
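    The lexical effect described above can be sketched as a logistic categorization in which lexical status adds a constant bias to the voicing evidence. Because the logistic is steepest at the category boundary, a fixed bias shifts responses most where the auditory evidence is ambiguous, reproducing the boundary-versus-endpoint asymmetry. The boundary, slope, and bias values below are invented for illustration, not fitted to the study's data.

```python
import math

def p_voiced(vot_ms, boundary_ms=30.0, slope=0.25, lexical_bias=0.0):
    """P(respond /d/) as a logistic in voice onset time (VOT). A positive
    lexical_bias means the voiced reading makes a word (e.g. 'dash' vs the
    nonword 'tash')."""
    evidence = slope * (boundary_ms - vot_ms) + lexical_bias
    return 1.0 / (1.0 + math.exp(-evidence))

def lexical_effect(vot_ms, bias=1.0):
    """Shift in /d/ responses between word-favoring and nonword-favoring
    continua at a given VOT."""
    return (p_voiced(vot_ms, lexical_bias=bias)
            - p_voiced(vot_ms, lexical_bias=-bias))

print(round(lexical_effect(30), 2))   # at the boundary   → 0.46
print(round(lexical_effect(5), 2))    # continuum endpoint → 0.0
print(round(lexical_effect(60), 2))   # continuum endpoint → 0.0
```

    Note that this sketch deliberately leaves open where the bias originates; a late decision bias and an early perceptual interaction both predict the boundary-centered effect, which is why the abstract's final inference requires the effect to track auditory ambiguity specifically.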

  3. Early Perception of Written Syllables in French: An Event-Related Potential Study

    Science.gov (United States)

    Doignon-Camus, Nadege; Bonnefond, Anne; Touzalin-Chretien, Pascale; Dufour, Andre

    2009-01-01

    The present study examined whether written syllable units are perceived in first steps of letter string processing. An illusory conjunction experiment was conducted while event-related potentials were recorded. Colored pseudowords were presented such that there was a match or mismatch between the syllable boundaries and the color boundaries. The…

  4. The oscillatory activities and its synchronization in auditory-visual integration as revealed by event-related potentials to bimodal stimuli

    Science.gov (United States)

    Guo, Jia; Xu, Peng; Yao, Li; Shu, Hua; Zhao, Xiaojie

    2012-03-01

    The neural mechanism of auditory-visual speech integration is a longstanding question in the study of multimodal perception. Articulatory gestures convey speech information that helps detect and disambiguate auditory speech. Oscillatory activity and its synchronization, key characteristics of EEG, are increasingly applied in cognition research. This study analyzed EEG data acquired with unimodal and bimodal stimuli using time-frequency and phase-synchrony approaches, investigating the oscillatory activities and synchrony modes underlying evoked potentials during auditory-visual integration, in order to reveal the neural integration mechanisms behind these modes. Beta activity and differences in its synchronization were related to the gesture N1-P2, which occurred at an early stage of coding the speech-related articulatory action. Alpha oscillations and their synchronization, related to the auditory N1-P2, may be mainly responsible for auditory speech processing driven by anticipation from gesture to sound features. Changing visual gestures enhanced the interaction of auditory brain regions. These results help explain the changes in power and connectivity of event-evoked oscillatory activities that matched the ERPs during auditory-visual speech integration.
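    The phase-synchrony measures referred to here typically reduce to the phase-locking value: the magnitude of the average unit phasor of the phase difference between two signals. A minimal sketch with synthetic phases (the study's actual estimator, channels, and frequency bands are not specified in the abstract):

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """PLV = |mean(exp(i * (phase_a - phase_b)))|, in [0, 1]:
    1 for a constant phase lag, near 0 for independent phases."""
    diff = np.asarray(phase_a) - np.asarray(phase_b)
    return np.abs(np.mean(np.exp(1j * diff)))

rng = np.random.default_rng(2)
n = 1000
phases = rng.uniform(0.0, 2.0 * np.pi, n)
lagged = phases - 0.5 + rng.normal(0.0, 0.1, n)   # fixed lag + small jitter
independent = rng.uniform(0.0, 2.0 * np.pi, n)

print(round(phase_locking_value(phases, lagged), 2))       # high synchrony
print(round(phase_locking_value(phases, independent), 2))  # near zero
```

    In practice the instantaneous phases would come from a band-limited Hilbert or wavelet transform of each channel; the estimator itself is indifferent to how the phases were obtained.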

  5. Individual Differences in Auditory Sentence Comprehension in Children: An Exploratory Event-Related Functional Magnetic Resonance Imaging Investigation

    Science.gov (United States)

    Yeatman, Jason D.; Ben-Shachar, Michal; Glover, Gary H.; Feldman, Heidi M.

    2010-01-01

    The purpose of this study was to explore changes in activation of the cortical network that serves auditory sentence comprehension in children in response to increasing demands of complex sentences. A further goal is to study how individual differences in children's receptive language abilities are associated with such changes in cortical…

  6. Basic Auditory Processing Deficits in Dyslexia: Systematic Review of the Behavioral and Event-Related Potential/Field Evidence

    Science.gov (United States)

    Hämäläinen, Jarmo A.; Salminen, Hanne K.; Leppänen, Paavo H. T.

    2013-01-01

    A review of research that uses behavioral, electroencephalographic, and/or magnetoencephalographic methods to investigate auditory processing deficits in individuals with dyslexia is presented. Findings show that measures of frequency, rise time, and duration discrimination as well as amplitude modulation and frequency modulation detection were…

  9. Auditory Perception of Complex Sounds.

    Science.gov (United States)

    1987-10-30

    twice the length of "short" (1). In such series we can exemplify rhythms that have both equally and unequally spaced accents. Specifically, we were...C.B., Kendall, R.A., & Carterette, E.C. (1987). "The effect of melodic and rhythmic contour on recognition memory for pitch change," Perception..."Parallels in rhythm and melody." In W.J. Dowling & T.J. Tighe (Eds.), The understanding of melody and rhythm. Potomac, MD: Erlbaum. Monahan, C.B.

  12. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First a series of detection tasks compared sensitivity across three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  13. P3 event-related potentials and performance of young and old subjects for music perception tasks.

    Science.gov (United States)

    Swartz, K P; Walton, J P; Hantz, E C; Goldhammer, E; Crummer, G C; Frisina, R D

    1994-10-01

    Event-related potentials and performance data were recorded from young and old subjects performing six tasks involving auditory discrimination of musical stimuli. Tasks included pure tone, timbre, rhythm, and interval discrimination, detection of a meter shift, and discrimination of open and closed harmonic endings for chord progressions. P3 latencies were generally longer for the old subjects. P3 amplitude and performance differences between subject groups were not significant. Our results provide a quantitative probe of the neural and behavioral significance of the influence of aging and stimulus complexity on the processing of some of the elementary constituents of music. In particular, pure tone and timbre discrimination appear to correspond to behaviorally and neurally simpler processing than does discrimination of the other musical constituents tested in our study.

  14. Auditory Spatial Perception without Vision

    Science.gov (United States)

    Voss, Patrice

    2016-01-01

    Valuable insights into the role played by visual experience in shaping spatial representations can be gained by studying the effects of visual deprivation on the remaining sensory modalities. For instance, it has long been debated how spatial hearing evolves in the absence of visual input. While several anecdotal accounts tend to associate complete blindness with exceptional hearing abilities, experimental evidence supporting such claims is, however, matched by nearly equal amounts of evidence documenting spatial hearing deficits. The purpose of this review is to summarize the key findings which support either enhancements or deficits in spatial hearing observed following visual loss and to provide a conceptual framework that isolates the specific conditions under which they occur. Available evidence will be examined in terms of spatial dimensions (horizontal, vertical, and depth perception) and in terms of frames of reference (egocentric and allocentric). Evidence suggests that while early blind individuals show superior spatial hearing in the horizontal plane, they also show significant deficits in the vertical plane. Potential explanations underlying these contrasting findings will be discussed. Early blind individuals also show spatial hearing impairments when performing tasks that require the use of an allocentric frame of reference. Results obtained with late-onset blind individuals suggest that early visual experience plays a key role in the development of both spatial hearing enhancements and deficits. PMID:28066286

  15. Early Top-Down Influences on Bistable Perception Revealed by Event-Related Potentials

    Science.gov (United States)

    Pitts, Michael A.; Gavin, William J.; Nerger, Janice L.

    2008-01-01

    A longstanding debate exists in the literature concerning bottom-up vs. top-down influences on bistable perception. Recently, a technique has been developed to measure early changes in brain activity (via ERPs) related to perceptual reversals (Kornmeier & Bach, 2004). An ERP component, the reversal negativity (RN), has been identified, and is…

  16. Using Event-Related Brain Potentials to Assess Perceptibility: The Case of French Speakers and English [h]

    Science.gov (United States)

    Mah, Jennifer; Goad, Heather; Steinhauer, Karsten

    2016-01-01

    French speaking learners of English encounter persistent difficulty acquiring English [h], thus confusing words like eat and heat in both production and perception. We assess the hypothesis that the acoustic properties of [h] may render detection of this segment in the speech stream insufficiently reliable for second language acquisition. We use the mismatch negativity (MMN) in event-related potentials to investigate [h] perception in French speaking learners of English and native English controls, comparing both linguistic and non-linguistic conditions in an unattended oddball paradigm. Unlike native speakers, French learners of English elicit an MMN response only in the non-linguistic condition. Our results provide neurobiological evidence against the hypothesis that French speakers’ difficulties with [h] are acoustically based. They instead suggest that the problem is in constructing an appropriate phonological representation for [h] in the interlanguage grammar. PMID:27757086

  17. Tracking the processes behind conscious perception: a review of event-related potential correlates of visual consciousness.

    Science.gov (United States)

    Railo, Henry; Koivisto, Mika; Revonsuo, Antti

    2011-09-01

    Event-related potential (ERP) studies have attempted to discover the processes that underlie conscious visual perception by contrasting ERPs produced by stimuli that are consciously perceived with those that are not. Variability of the proposed ERP correlates of consciousness is considerable: the earliest proposed ERP correlate of consciousness (P1) coincides with sensory processes and the last one (P3) marks postperceptual processes. A negative difference wave called visual awareness negativity (VAN), typically observed around 200 ms after stimulus onset in occipitotemporal sites, gains strong support for reflecting the processes that correlate with, and possibly enable, aware visual perception. Research suggests that the early parts of conscious processing can proceed independently of top-down attention, although top-down attention may modulate visual processing even before consciousness. Evidence implies that the contents of consciousness are provided by interactions in the ventral stream, but indispensable contributions from dorsal regions influence already low-level visual responses.

  1. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    Science.gov (United States)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan; Shi, Jiannong

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information.
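The oddball structure described above (standards at p = 0.8, deviants at p = 0.2, with expression processing kept automatic by an unrelated primary task) can be sketched as a stimulus-sequence generator. This is an illustrative assumption, not the study's actual procedure: the function name, the fixed seed, and the no-consecutive-deviants constraint are all hypothetical.

```python
import random

def make_oddball_sequence(n_trials, p_deviant=0.2, seed=0):
    """Generate a hypothetical oddball stimulus sequence: standards with
    probability 1 - p_deviant, deviants otherwise, with the common extra
    constraint that two deviants never occur back to back."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")  # enforce spacing between deviants
        elif rng.random() < p_deviant:
            seq.append("deviant")
        else:
            seq.append("standard")
    return seq

seq = make_oddball_sequence(500)
# The spacing constraint pulls the realised deviant rate slightly below
# the nominal p_deviant (roughly 1/6 rather than 0.20 in the long run).
rate = seq.count("deviant") / len(seq)
```

The spacing constraint matters in practice: MMN-family components are defined against an established standard, so deviants are usually separated by at least one standard.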

  2. Adult age differences in visual search from perception to response: Evidence from event-related potentials

    DEFF Research Database (Denmark)

    Wiegand, Iris

    Attentional changes play a major role in age-related behavioral slowing; however, the specific aspects of attention that contribute to this decrement are not clearly defined. To address this, we combined response times with lateralized ERPs of younger and older adults during a visual compound search... at multiple stages from perception to response. Furthermore, we explored the implicit influence of recently encountered information in terms of intertrial effects. ERPs could disentangle that, while automatic processes of perceptual-dimension priming and response priming across trials were preserved, older...

  3. Alterations in attention capture to auditory emotional stimuli in job burnout: an event-related potential study.

    Science.gov (United States)

    Sokka, Laura; Huotilainen, Minna; Leinikka, Marianne; Korpela, Jussi; Henelius, Andreas; Alain, Claude; Müller, Kiti; Pakarinen, Satu

    2014-12-01

    Job burnout is a significant cause of work absenteeism. Evidence from behavioral studies and patient reports suggests that job burnout is associated with impairments of attention and decreased working capacity, and it has overlapping elements with depression, anxiety and sleep disturbances. Here, we examined the electrophysiological correlates of automatic sound change detection and involuntary attention allocation in job burnout using scalp recordings of event-related potentials (ERP). Volunteers with job burnout symptoms but without severe depression and anxiety disorders and their non-burnout controls were presented with natural speech sound stimuli (standard and nine deviants), as well as three rarely occurring speech sounds with strong emotional prosody. All stimuli elicited mismatch negativity (MMN) responses that were comparable in both groups. The groups differed with respect to the P3a, an ERP component reflecting involuntary shift of attention: the job burnout group showed a shorter P3a latency in response to the emotionally negative stimulus, and a longer latency in response to the positive stimulus. Results indicate that in job burnout, automatic speech sound discrimination is intact, but there is an attention-capture tendency that is faster for negative, and slower for positive, information than in controls.

  4. Personality and Augmenting/Reducing (A/R) in auditory event-related potentials (ERPs) during emotional visual stimulation

    Science.gov (United States)

    De Pascalis, Vilfredo; Fracasso, Francesca; Corr, Philip J.

    2017-01-01

    An auditory augmenting/reducing ERP paradigm recorded for 5 intensity tones with emotional visual stimulation was used, for the first time, to test predictions derived from the revised Reinforcement Sensitivity Theory (rRST) of personality with respect to two major factors: the behavioral inhibition system (BIS) and the fight/flight/freeze system (FFFS). Higher BIS and FFFS scores were negatively correlated with N1/P2 slopes at central sites (C3, Cz, C4). Conditional process analysis revealed that the BIS was a mediator of the association between the N1/P2 slope and the FFFS scores. An analysis of covariance showed that lower BIS scorers exhibited larger N1/P2 amplitudes across all tone intensities while watching negative, positive and neutral pictures. Additionally, lower FFFS scorers compared to higher FFFS scorers showed larger N1/P2 amplitudes to the highest tone intensities, and these differences were even more pronounced while watching positive emotional pictures. Findings were explained assuming the operation of two different, but related processes: transmarginal inhibition for the BIS; the attention/emotional gating mechanism regulating cortical sensory input for the FFFS trait. These findings appear consistent with predictions derived from the rRST, which traced fear and anxiety to separate but interacting neurobehavioural systems. PMID:28164996

  5. Behind the Scenes of Auditory Perception

    OpenAIRE

    Shamma, Shihab A.; Micheyl, Christophe

    2010-01-01

    “Auditory scenes” often contain contributions from multiple acoustic sources. These are usually heard as separate auditory “streams”, which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the last two years indicate that both cortical and sub-cortical processes contribute to the formation of auditory streams, and they raise importan...

  6. Hyperarticulation of vowels enhances phonetic change responses in both native and non-native speakers of English: evidence from an auditory event-related potential study.

    Science.gov (United States)

    Uther, Maria; Giannakopoulou, Anastasia; Iverson, Paul

    2012-08-27

    The finding that hyperarticulation of vowel sounds occurs in certain speech registers (e.g., infant- and foreigner-directed speech) suggests that hyperarticulation may have a didactic function in facilitating acquisition of new phonetic categories in language learners. This event-related potential study tested whether hyperarticulation of vowels elicits larger phonetic change responses, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP) and tested native and non-native speakers of English. Data from 11 native English-speaking and 10 native Greek-speaking participants showed that Greek speakers in general had smaller MMNs compared to English speakers, confirming previous studies demonstrating sensitivity of the MMN to language background. In terms of the effect of hyperarticulation, hyperarticulated stimuli elicited larger MMNs for both language groups, suggesting vowel space expansion does elicit larger pre-attentive phonetic change responses. Interestingly Greek native speakers showed some P3a activity that was not present in the English native speakers, raising the possibility that additional attentional switch mechanisms are activated in non-native speakers compared to native speakers. These results give general support for models of speech learning such as Kuhl's Native Language Magnet enhanced (NLM-e) theory. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  7. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment.

    Science.gov (United States)

    Bathelt, Joe; Dale, Naomi; de Haan, Michelle

    2017-07-19

    Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control group, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. A Brief Introduction to the Use of Event-Related Potentials (ERPs) in Studies of Perception and Attention

    Science.gov (United States)

    Woodman, Geoffrey F.

    2013-01-01

    Due to the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial of the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition that are unobservable with behavioral methods alone. I then review a number of important methodological issues and often forgotten facts that should be considered when evaluating or planning ERP experiments. PMID:21097848
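The core ERP computation this tutorial concerns, cutting fixed windows around event onsets, baseline-correcting each window against its pre-stimulus interval, and averaging across trials so that time-locked activity survives while unrelated ongoing EEG averages toward zero, can be sketched as follows. All names, the sampling rate, and the simulated "component" are hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np

def average_erp(eeg, event_samples, fs, tmin=-0.1, tmax=0.5):
    """Cut epochs around event onsets from a continuous 1-D EEG trace,
    baseline-correct each epoch with its pre-stimulus mean, and average
    across trials."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in event_samples:
        if s - pre < 0 or s + post > len(eeg):
            continue  # skip events too close to the recording edges
        epoch = eeg[s - pre:s + post].astype(float)
        epoch -= epoch[:pre].mean()  # baseline correction
        epochs.append(epoch)
    return np.mean(epochs, axis=0)

# Simulated example: an evoked deflection at ~100-200 ms buried in noise.
fs = 1000
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, 60_000)
events = np.arange(1000, 59_000, 700)
for s in events:
    eeg[s + 100:s + 200] += 2.0  # inject the "component" on every trial
erp = average_erp(eeg, events, fs)  # 600 samples: -100 ms to +500 ms
```

With ~80 trials, single-trial noise of unit variance shrinks by roughly a factor of nine in the average, which is why the injected deflection becomes clearly visible in `erp` even though it is invisible in any single trial.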

  9. Adapted wavelet transform improves time-frequency representations: a study of auditory elicited P300-like event-related potentials in rats

    Science.gov (United States)

    Richard, Nelly; Laursen, Bettina; Grupe, Morten; Drewes, Asbjørn M.; Graversen, Carina; Sørensen, Helge B. D.; Bastlund, Jesper F.

    2017-04-01

    Objective. Active auditory oddball paradigms are simple tone discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high frequency and late low frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, as the conventional CWT traditionally applies only one mother wavelet to represent the entire spectrum, the time-frequency resolution is not optimal across all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. Approach. An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by commissioning multiple wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from prefrontal cortex in rats performing a two-tone auditory discrimination task. Main results. While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were successfully described with strong accuracy by the aCWT in rat ERPs. Increased frontal gamma power and phase synchrony were observed particularly within theta and gamma frequency bands during deviant tones. Significance. The study suggests superior performance of the aCWT over the CWT in terms of detailed quantification of time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of time-frequency components of short-time neural signals is feasible with the novel analysis approach, which may be advantageous for characterisation of several types of evoked potentials, particularly in rodents.
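A minimal NumPy illustration of the core idea, using a differently parameterised wavelet per frequency instead of one mother wavelet for the whole spectrum, might look like the sketch below. This is not the authors' implementation: the cycle counts, test frequencies, and function names are assumptions chosen for the example.

```python
import numpy as np

def morlet(fs, freq, n_cycles):
    """Complex Morlet wavelet with unit energy; n_cycles sets the
    time-frequency trade-off at this centre frequency."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    w = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    return w / np.sqrt(np.sum(np.abs(w) ** 2))

def adapted_cwt(signal, fs, freqs, cycles_per_freq):
    """Per-frequency wavelet decomposition: each frequency band gets its
    own cycle count, so low (e.g. theta) and high (e.g. gamma) bands
    each keep a useful time-frequency resolution."""
    power = np.empty((len(freqs), len(signal)))
    for i, (f, n_cyc) in enumerate(zip(freqs, cycles_per_freq)):
        w = morlet(fs, f, n_cyc)
        power[i] = np.abs(np.convolve(signal, w, mode="same")) ** 2
    return power

fs = 500
t = np.arange(0, 2, 1 / fs)
# A theta-band (6 Hz) plus gamma-band (40 Hz) test signal.
sig = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
power = adapted_cwt(sig, fs, freqs=[6, 40], cycles_per_freq=[3, 7])
```

Using fewer cycles at low frequencies keeps the wavelet short enough to localise late slow components in time, while more cycles at high frequencies sharpen the frequency resolution of early fast oscillations, the trade-off the abstract attributes to the aCWT.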

  10. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree to which auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.

  11. Perceptual Load Influences Auditory Space Perception in the Ventriloquist Aftereffect

    Science.gov (United States)

    Eramudugolla, Ranmalee; Kamke, Marc. R.; Soto-Faraco, Salvador; Mattingley, Jason B.

    2011-01-01

    A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the "ventriloquist aftereffect", reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their…

  12. Effects of prior stimulus and prior perception on neural correlates of auditory stream segregation.

    Science.gov (United States)

    Snyder, Joel S; Holder, W Trent; Weintraub, David M; Carter, Olivia L; Alain, Claude

    2009-11-01

    We examined whether effects of prior experience are mediated by distinct brain processes from those processing current stimulus features. We recorded event-related potentials (ERPs) during an auditory stream segregation task that presented an adaptation sequence with a small, intermediate, or large frequency separation between low and high tones (Δf), followed by a test sequence with intermediate Δf. Perception of two streams during the test was facilitated by small prior Δf and by prior perception of two streams and was accompanied by more positive ERPs. The scalp topography of these perception-related changes in ERPs was different from that observed for ERP modulations due to increasing the current Δf. These results reveal complex interactions between stimulus-driven activity and temporal-context-based processes and suggest a complex set of brain areas involved in modulating perception based on current and previous experience.

  13. Hippocampal P3-like auditory event-related potentials are disrupted in a rat model of cholinergic degeneration in Alzheimer's disease: reversal by donepezil treatment.

    Science.gov (United States)

    Laursen, Bettina; Mørk, Arne; Kristiansen, Uffe; Bastlund, Jesper Frank

    2014-01-01

    P300 (P3) event-related potentials (ERPs) have been suggested to be an endogenous marker of cognitive function and auditory oddball paradigms are frequently used to evaluate P3 ERPs in clinical settings. Deficits in P3 amplitude and latency reflect some of the neurological dysfunctions related to several psychiatric and neurological diseases, e.g., Alzheimer's disease (AD). However, only a very limited number of rodent studies have addressed the back-translational validity of the P3-like ERPs as suitable markers of cognition. Thus, the potential of rodent P3-like ERPs to predict pro-cognitive effects in humans remains to be fully validated. The current study characterizes P3-like ERPs in the 192-IgG-SAP (SAP) rat model of the cholinergic degeneration associated with AD. Following training in a combined auditory oddball and lever-press setup, rats were subjected to bilateral intracerebroventricular infusion of 1.25 μg SAP or PBS (sham lesion) and recording electrodes were implanted in hippocampal CA1. Relative to sham-lesioned rats, SAP-lesioned rats had significantly reduced amplitude of P3-like ERPs. P3 amplitude was significantly increased in SAP-treated rats following pre-treatment with 1 mg/kg donepezil. Infusion of SAP reduced the hippocampal choline acetyltransferase activity by 75%. Behaviorally defined cognitive performance was comparable between treatment groups. The present study suggests that AD-like deficits in P3-like ERPs may be mimicked by the basal forebrain cholinergic degeneration induced by SAP. SAP-lesioned rats may constitute a suitable model to test the efficacy of pro-cognitive substances in an applied experimental setup.

  14. Effects of inter-stimulus interval (ISI) duration on the N1 and P2 components of the auditory event-related potential.

    Science.gov (United States)

    Pereira, Diana R; Cardoso, Susana; Ferreira-Santos, Fernando; Fernandes, Carina; Cunha-Reis, Cassilda; Paiva, Tiago O; Almeida, Pedro R; Silveira, Celeste; Barbosa, Fernando; Marques-Teixeira, João

    2014-12-01

    The N1 and P2 components of the event-related potential are relevant markers in the processing of auditory information, indicating the presence of several acoustic phenomena, such as pure tones or speech sounds. In addition, the expression of these components seems to be sensitive to diverse experimental variations. The main purpose of the present investigation was to explore the role of inter-stimulus interval (ISI) on the N1 and P2 responses, considering two widely used experimental paradigms: a single tone task (1000 Hz sound repeated in a fixed rhythm) and an auditory oddball (80% of the stimuli were equal to the sound used in the single tone and the remaining were a 1500 Hz tone). Both tasks had four different conditions, and each one tested a fixed value of ISI (600, 1000, 3000, or 6000 ms). A sample of 22 participants performed these tasks, while an EEG was recorded, in order to examine the maximum amplitude of the N1 and P2 components. Analysis of the stimuli in the single tone task and the frequent tones in the oddball task revealed a similar outcome for both tasks and for both components: N1 and P2 amplitudes were enhanced in conditions with longer ISIs regardless of task. This response pattern emphasizes the dependence of both the N1 and P2 components on the ISI, especially in a scenario of repetitive and regular stimulation. The absence of task effects suggests that the ISI effect reported may depend on refractory mechanisms rather than being due to habituation effects.
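Extracting the "maximum amplitude" of components such as N1 and P2 usually means locating a signed peak within a predefined latency window of the averaged ERP. A minimal sketch of that step follows; the window bounds, the toy Gaussian ERP, and all names are illustrative assumptions, not details from the study.

```python
import numpy as np

def peak_in_window(erp, fs, t_start, t_end, polarity=-1):
    """Return (amplitude, latency in s) of a component peak inside a
    latency window; polarity=-1 finds the most negative point (e.g. N1),
    polarity=+1 the most positive (e.g. P2). Assumes erp[0] lies at
    stimulus onset."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    window = erp[i0:i1]
    idx = np.argmin(window) if polarity < 0 else np.argmax(window)
    return window[idx], (i0 + idx) / fs

# Toy averaged ERP: an N1-like trough near 100 ms, a P2-like peak near 200 ms.
fs = 1000
t = np.arange(0, 0.5, 1 / fs)
erp = (-3 * np.exp(-((t - 0.1) ** 2) / (2 * 0.015 ** 2))
       + 2 * np.exp(-((t - 0.2) ** 2) / (2 * 0.025 ** 2)))
n1_amp, n1_lat = peak_in_window(erp, fs, 0.08, 0.15, polarity=-1)
p2_amp, p2_lat = peak_in_window(erp, fs, 0.15, 0.25, polarity=+1)
```

Restricting the search to a component-specific window is what keeps an N1 measurement from accidentally landing on the P2 peak, or vice versa, when the two deflections overlap in time.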

  15. Simultanagnosia does not affect processes of auditory Gestalt perception.

    Science.gov (United States)

    Rennig, Johannes; Bleyer, Anna Lena; Karnath, Hans-Otto

    2017-05-01

    Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure of recognition of a global visual Gestalt, like a visual scene or complex objects, consisting of local elements. In this study we investigated to what extent this deficit should be understood as specific to the visual domain or as defective Gestalt processing per se. To examine whether simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both sufficiently explaining the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic of simultanagnosia and that the human brain apparently uses independent mechanisms for visual and for auditory Gestalt perception. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Auditory preattentive processing of Thai vowel change perception in consonant-vowel (CV) syllables

    Directory of Open Access Journals (Sweden)

    Naiphinich Kotchabhakdi

    2004-11-01

    Full Text Available Event-related potential (ERP) responses to infrequently presented spoken deviant syllables /pi/ among repetitive standard /pc/ syllables were recorded in Thai subjects who ignored these stimuli while reading books of their choice. The vowel across-category changes elicited a change-specific mismatch negativity (MMN) response. The across-category change perception of vowels in consonant-vowel (CV) syllables was also assessed using low-resolution electromagnetic tomography (LORETA). The LORETA MMN generator appeared in the left auditory cortex, emphasizing the role of the left hemisphere in speech processing already at the preattentive processing level, even in CV syllables.

  17. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Directory of Open Access Journals (Sweden)

    David E Jenson

    2015-10-01

    Full Text Available Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required ‘active’ discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral ‘auditory’ alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.
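The ERSP measure used in this record (and in its companion, record 20) is trial-averaged time-frequency power normalized to a pre-stimulus baseline. The following is a minimal sketch assuming a complex Morlet decomposition with dB baseline normalization; the sampling rate, frequencies, and toy alpha-burst data are illustrative, not the study's:

```python
import numpy as np

def morlet(freq, fs, n_cycles=5):
    # Complex Morlet wavelet: Gaussian-windowed complex exponential.
    sd = n_cycles / (2 * np.pi * freq)
    t = np.arange(-4 * sd, 4 * sd, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sd ** 2))

def ersp_db(epochs, fs, freqs, n_base):
    # Trial-averaged time-frequency power in dB relative to the mean power
    # over the first n_base (pre-stimulus) samples of the epoch.
    rows = []
    for f in freqs:
        w = morlet(f, fs)
        power = np.mean([np.abs(np.convolve(ep, w, mode="same")) ** 2
                         for ep in epochs], axis=0)
        rows.append(10 * np.log10(power / power[:n_base].mean()))
    return np.array(rows)                      # shape (n_freqs, n_samples)

# Toy data: a 10 Hz "alpha" burst switching on 0.8 s into each 2 s epoch.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
burst = np.where(t >= 0.8, np.sin(2 * np.pi * 10 * t), 0.0)
epochs = [burst + 0.2 * rng.standard_normal(t.size) for _ in range(20)]
tf = ersp_db(epochs, fs, freqs=[6, 10, 14], n_base=150)
# tf[1] (the 10 Hz row) shows a strong post-onset power increase (ERS) in dB.
```

Positive post-stimulus values in a frequency row correspond to event-related synchronization (ERS), negative values to desynchronization (ERD), as in the alpha effects described above.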

  18. Cholinergic modulation of auditory P3 event-related potentials as indexed by CHRNA4 and CHRNA7 genotype variation in healthy volunteers.

    Science.gov (United States)

    Hyde, Molly; Choueiry, Joëlle; Smith, Dylan; de la Salle, Sara; Nelson, Renee; Impey, Danielle; Baddeley, Ashley; Aidelbaum, Robert; Millar, Anne; Knott, Verner

    2016-06-03

    Schizophrenia (SZ) is a psychiatric disorder characterized by cognitive dysfunction within the realm of attentional processing. Reduced P3a and P3b event-related potentials (ERPs), indexing involuntary and voluntary attentional processing respectively, have been consistently observed in SZ patients, who also express prominent cholinergic deficiencies. The involvement of the brain's cholinergic system in attention has been examined for several decades; however, further inquiry is required to understand how abnormalities in this system affect neighbouring neurotransmitter systems and contribute to neurocognitive deficits. The objective of this pilot study was to examine the moderating role of the CHRNA4 (rs1044396), CHRNA7 (rs3087454), and SLC5A7 (rs1013940) genes on ERP indices of attentional processing in healthy volunteers (N=99; Caucasians and non-Caucasians) stratified by genotype and assessed using the auditory P300 "oddball" paradigm. Results indicated significantly greater P3a- and P3b-indexed attentional processing for CT (vs. CC) CHRNA4 carriers and greater P3b for AA (vs. CC) CHRNA7 carriers. SLC5A7 allelic variants did not show significant differences in P3a and P3b processing. These findings expand our knowledge of the moderating effect of cholinergic genes on attention and could help inform targeted drug development aimed at remediating attentional deficits in SZ patients.

  19. Auditory and visual event-related potentials and flash visual evoked potentials in Alzheimer's disease: correlations with Mini-Mental State Examination and Raven's Coloured Progressive Matrices.

    Science.gov (United States)

    Tanaka, F; Kachi, T; Yamada, T; Sobue, G

    1998-01-01

    We investigated possible correlations among neurophysiological examinations [auditory and visual event-related potentials (A-ERPs, V-ERPs), and flash visual evoked potentials (F-VEPs)] and neuropsychological tests [Mini-Mental State Examination (MMSE) and Raven's Coloured Progressive Matrices (RCPM)] in 15 subjects with probable or possible Alzheimer's disease (AD) according to the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) criteria. The P300 latency of A-ERPs was correlated with the scores of MMSE but not with those of RCPM. The P300 latency of V-ERPs was more significantly correlated with the scores of RCPM than with those of MMSE. The P2 latency of F-VEPs was more significantly correlated with the scores of RCPM than with those of MMSE. The P2 latency of F-VEPs was not correlated with the P300 latency of A-ERPs but was correlated with the P300 latency of V-ERPs. The close relationship among V-ERPs, F-VEPs and RCPM suggests that these examinations at least partly reflect the functions of visual association areas in AD. Furthermore, discrepancy between P300 latency by A-ERPs and V-ERPs suggests that the mechanism responsible for P300 generation is not identical between these two stimulus modalities.

  20. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm.

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  1. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.

    2016-01-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…

  2. Decreases in energy and increases in phase locking of event-related oscillations to auditory stimuli occur during adolescence in human and rodent brain.

    Science.gov (United States)

    Ehlers, Cindy L; Wills, Derek N; Desikan, Anita; Phillips, Evelyn; Havstad, James

    2014-01-01

    Synchrony of phase (phase locking) of event-related oscillations (EROs) within and between different brain areas has been suggested to reflect communication exchange between neural networks and as such may be a sensitive and translational measure of changes in brain remodeling that occur during adolescence. This study sought to investigate developmental changes in EROs using a similar auditory event-related potential (ERP) paradigm in both rats and humans. Energy and phase variability of EROs collected from 38 young adult men (aged 18-25 years), 33 periadolescent boys (aged 10-14 years), 15 male periadolescent rats [at postnatal day (PD) 36] and 19 male adult rats (at PD103) were investigated. Three channels of ERP data (frontal cortex, central cortex and parietal cortex) were collected from the humans using an 'oddball plus noise' paradigm that was presented under passive (no behavioral response required) conditions in the periadolescents and under active conditions (where each subject was instructed to depress a counter each time he detected an infrequent target tone) in adults and adolescents. ERPs were recorded in rats using only the passive paradigm. In order to compare the tasks used in rats to those used in humans, we first studied whether three ERO measures [energy, phase locking index (PLI) within an electrode site and phase difference locking index (PDLI) between different electrode sites] differentiated the 'active' from 'passive' ERP tasks. Secondly, we explored our main question of whether the three ERO measures differentiated adults from periadolescents in a similar manner in both humans and rats. No significant changes were found in measures of ERO energy between the active and passive tasks in the periadolescent human participants. There was a smaller but significant increase in PLI but not PDLI as a function of active task requirements. Developmental differences were found in energy, PLI and PDLI values between the periadolescents and adults in…
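The phase locking index (PLI) within an electrode, as used above, is commonly computed as inter-trial phase coherence: the magnitude of the trial-averaged unit phase vector at each time point. The sketch below assumes a Morlet-based phase estimate and toy data; it is an illustration of the measure's definition, not the study's analysis code:

```python
import numpy as np

def phase_locking_index(epochs, fs, freq, n_cycles=5):
    # Inter-trial phase coherence at one frequency: filter each trial with a
    # complex Morlet wavelet, discard amplitude, and average the unit phase
    # vectors across trials. 1 = perfectly stimulus-locked phase, ~0 = random.
    sd = n_cycles / (2 * np.pi * freq)
    t = np.arange(-4 * sd, 4 * sd, 1 / fs)
    w = np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sd ** 2))
    analytic = np.array([np.convolve(ep, w, mode="same") for ep in epochs])
    return np.abs(np.mean(analytic / np.abs(analytic), axis=0))

# Toy data: 40 trials share a stimulus-locked 4 Hz component; background
# activity has random phase from trial to trial.
fs = 200
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)
locked = [np.sin(2 * np.pi * 4 * t) + rng.standard_normal(t.size) for _ in range(40)]
unlocked = [rng.standard_normal(t.size) for _ in range(40)]
pli_locked = phase_locking_index(locked, fs, freq=4)
pli_unlocked = phase_locking_index(unlocked, fs, freq=4)
```

A between-site PDLI follows the same logic applied to the phase *difference* of two channels rather than the phase of one.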

  3. Influence of Auditory and Haptic Stimulation in Visual Perception

    Directory of Open Access Journals (Sweden)

    Shunichi Kawabata

    2011-10-01

    Full Text Available While many studies have shown that visual information affects perception in the other modalities, little is known about how auditory and haptic information affect visual perception. In this study, we investigated how auditory, haptic, or combined auditory and haptic stimulation affects visual perception. We used a behavioral task in which subjects observed two identical visual objects moving toward each other, overlapping, and then continuing their original motion. Subjects may perceive the objects as either streaming through each other or bouncing and reversing their direction of motion. With only the visual motion stimulus, subjects usually report the objects as streaming, whereas if a sound or flash is played when the objects touch each other, subjects report the objects as bouncing (the Bounce-Inducing Effect). In this study, “auditory stimulation”, “haptic stimulation”, or “haptic and auditory stimulation” was presented at various times relative to the visual overlap of the objects. Our results show that the bouncing rate was highest when haptic and auditory stimulation were presented together. This result suggests that the Bounce-Inducing Effect is enhanced by simultaneous multimodal presentation alongside visual motion. In the future, neuroscience approaches (e.g., TMS, fMRI) may be required to elucidate the brain mechanisms underlying this effect.

  4. Auditory perception of self-similarity in water sounds.

    Directory of Open Access Journals (Sweden)

    Maria Neimark Geffen

    2011-05-01

    Full Text Available Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale invariance across the overall spectro-temporal structure of the sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale invariance in perception by using an artificial sound, which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation. This computation may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measurements of response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.
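The scale-invariant spectral statistics cited above (e.g., Clarke and Voss, 1975) can be illustrated by synthesizing noise with a prescribed 1/f-style power-law spectrum. This is *not* the authors' random chirp stimulus (their Q and r parameters are specific to the paper); it is a generic spectral-shaping sketch of what scale-invariant power statistics look like:

```python
import numpy as np

def power_law_noise(n, fs, beta=1.0, seed=0):
    # Shape white Gaussian noise in the frequency domain so that
    # power ~ 1/f**beta (beta=0: white, beta=1: pink, beta=2: brown).
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    scale = np.zeros_like(freqs)
    scale[1:] = freqs[1:] ** (-beta / 2)       # amplitude ~ f^(-beta/2); no DC
    x = np.fft.irfft(spectrum * scale, n)
    return x / np.abs(x).max()                  # normalize to +/-1

x = power_law_noise(2 ** 16, fs=16000, beta=1.0)

# Verify scale invariance: the log-log slope of the periodogram is ~ -beta,
# i.e. the same power-law structure repeats across frequency scales.
f = np.fft.rfftfreq(x.size, 1 / 16000)
p = np.abs(np.fft.rfft(x)) ** 2
band = (f > 10) & (f < 4000)
slope = np.polyfit(np.log(f[band]), np.log(p[band]), 1)[0]
```

Because the spectrum follows a pure power law, zooming in frequency by any factor leaves the statistics unchanged up to an overall gain, which is the property the record above probes perceptually.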

  5. Rodent Auditory Perception: Critical Band Limitations and Plasticity

    Science.gov (United States)

    King, Julia; Insanally, Michele; Jin, Menghan; Martins, Ana Raquel O.; D'amour, James A.; Froemke, Robert C.

    2015-01-01

    What do animals hear? While it remains challenging to adequately assess sensory perception in animal models, it is important to determine perceptual abilities in model systems to understand how physiological processes and plasticity relate to perception, learning, and cognition. Here we discuss hearing in rodents, reviewing previous and recent behavioral experiments querying acoustic perception in rats and mice, and examining the relation between behavioral data and electrophysiological recordings from the central auditory system. We focus on measurements of critical bands, which are psychoacoustic phenomena that seem to have a neural basis in the functional organization of the cochlea and the inferior colliculus. We then discuss how behavioral training, brain stimulation, and neuropathology impact auditory processing and perception. PMID:25827498

  6. Attentional modulation and domain-specificity underlying the neural organization of auditory categorical perception.

    Science.gov (United States)

    Bidelman, Gavin M; Walker, Breya S

    2017-03-01

    Categorical perception (CP) is highly evident in audition when listeners' perception of speech sounds abruptly shifts identity despite equidistant changes in stimulus acoustics. While CP is an inherent property of speech perception, how (or if) it is expressed in other auditory modalities (e.g., music) is less clear. Moreover, prior neuroimaging studies have been equivocal on whether attentional engagement is necessary for the brain to categorically organize sound. To address these questions, we recorded neuroelectric brain responses [event-related potentials (ERPs)] from listeners as they rapidly categorized sounds along a speech and music continuum (active task) or during passive listening. Behaviorally, listeners achieved sharper psychometric functions and faster identification for speech than for musical stimuli, which were perceived in a continuous mode. Behavioral results coincided with stronger ERP differentiation between prototypical and ambiguous tokens (i.e., categorical processing) for speech but not for music. Neural correlates of CP were only observed when listeners actively attended to the auditory signal. These findings were corroborated by brain-behavior associations; changes in neural activity predicted more successful CP (psychometric slopes) for active but not passively evoked ERPs. Our results demonstrate that auditory categorization is influenced by attention (active > passive) and is stronger for more familiar/overlearned stimulus domains (speech > music). In contrast to previous studies examining highly trained listeners (i.e., musicians), we infer that (i) CP skills are largely domain-specific and do not generalize to stimuli for which a listener has no immediate experience and (ii) categorical neural processing requires active engagement with the auditory stimulus.

  7. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a…

  8. Brain dynamics that correlate with effects of learning on auditory distance perception.

    Science.gov (United States)

    Wisniewski, Matthew G; Mercado, Eduardo; Church, Barbara A; Gramann, Klaus; Makeig, Scott

    2014-01-01

    Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.

  9. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  10. Auditory and visual lexical neighborhoods in audiovisual speech perception.

    Science.gov (United States)

    Tye-Murray, Nancy; Sommers, Mitchell; Spehar, Brent

    2007-12-01

    Much evidence suggests that the mental lexicon is organized into auditory neighborhoods, with words that are phonologically similar belonging to the same neighborhood. In this investigation, we considered the existence of visual neighborhoods. When a receiver watches someone speak a word, a neighborhood of homophenes (i.e., words that look alike on the face, such as pat and bat) is activated. The simultaneous activation of a word's auditory and visual neighborhoods may, in part, account for why individuals recognize speech better in an auditory-visual condition than what would be predicted by their performance in audition-only and vision-only conditions. A word test was administered to 3 groups of participants in audition-only, vision-only, and auditory-visual conditions, in the presence of 6-talker babble. Test words with sparse visual neighborhoods were recognized more accurately than words with dense neighborhoods in a vision-only condition. Densities of both the acoustic and visual neighborhoods as well as their intersection overlap were predictive of how well the test words were recognized in the auditory-visual condition. These results suggest that visual neighborhoods exist and that they affect auditory-visual speech perception. One implication is that in the presence of dual sensory impairment, the boundaries of both acoustic and visual neighborhoods may shift, adversely affecting speech recognition.
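Lexical neighbors of the kind counted in this record are conventionally defined as words differing by a single segment (one substitution, insertion, or deletion). A toy sketch of neighborhood density, using letter strings as stand-ins for phoneme transcriptions and an invented mini-lexicon for illustration:

```python
def edit_distance(a, b):
    # Levenshtein distance via dynamic programming.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1,                           # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[m][n]

def neighbors(word, lexicon):
    # A word's neighborhood: lexicon entries exactly one segment away.
    return [w for w in lexicon if w != word and edit_distance(word, w) == 1]

# Toy lexicon; letters stand in for phoneme symbols.
lexicon = ["pat", "bat", "mat", "pit", "pan", "past", "dog"]
dense = neighbors("pat", lexicon)    # dense neighborhood: 5 neighbors
sparse = neighbors("dog", lexicon)   # sparse neighborhood: none
```

A visual (homophene) neighborhood would be computed the same way after first collapsing visually indistinguishable phonemes (e.g., /p/, /b/, /m/) onto shared viseme symbols.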

  11. Neural Correlates of Realistic and Unrealistic Auditory Space Perception

    Directory of Open Access Journals (Sweden)

    Akiko Callan

    2011-10-01

    Full Text Available Binaural recordings can simulate externalized auditory space perception over headphones. However, if the orientation of the recorder's head and the orientation of the listener's head are incongruent, the simulated auditory space is not realistic. For example, if a person lying flat on a bed listens to an environmental sound that was recorded by microphones inserted in the ears of a person who was in an upright position, the sound simulates an auditory space rotated 90 degrees to the real-world horizontal axis. Our question is whether brain activation patterns differ between the unrealistic auditory space (i.e., the orientation of the listener's head and the orientation of the recorder's head are incongruent) and the realistic auditory space (i.e., the orientations are congruent). River sounds that were binaurally recorded either in a supine position or in an upright body position served as auditory stimuli. During fMRI experiments, participants listened to the stimuli and pressed one of two buttons indicating the direction of the water flow (horizontal/vertical). Behavioral results indicated that participants could not differentiate between the congruent and the incongruent conditions. However, neuroimaging results showed that the congruent condition activated the planum temporale significantly more than the incongruent condition.

  12. The plastic ear and perceptual relearning in auditory spatial perception.

    Science.gov (United States)

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  13. The plastic ear and perceptual relearning in auditory spatial perception.

    Directory of Open Access Journals (Sweden)

    Simon eCarlile

    2014-08-01

    Full Text Available The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  14. Perception of the Auditory-Visual Illusion in Speech Perception by Children with Phonological Disorders

    Science.gov (United States)

    Dodd, Barbara; McIntosh, Beth; Erdener, Dogu; Burnham, Denis

    2008-01-01

    An example of the auditory-visual illusion in speech perception, first described by McGurk and MacDonald, is the perception of [ta] when listeners hear [pa] in synchrony with the lip movements for [ka]. One account of the illusion is that lip-read and heard speech are combined in an articulatory code since people who mispronounce words respond…

  15. The effects of speech motor preparation on auditory perception

    Science.gov (United States)

    Myers, John

    Perception and action are coupled via bidirectional relationships between sensory and motor systems. Motor systems influence sensory areas by imparting a feedforward influence on sensory processing termed "motor efference copy" (MEC). MEC is suggested to occur in humans because speech preparation and production modulate neural measures of auditory cortical activity. However, it is not known if MEC can affect auditory perception. We tested the hypothesis that during speech preparation auditory thresholds will increase relative to a control condition, and that the increase would be most evident for frequencies that match the upcoming vocal response. Participants performed trials in a speech condition that contained a visual cue indicating a vocal response to prepare (one of two frequencies), followed by a go signal to speak. To determine threshold shifts, voice-matched or -mismatched pure tones were presented at one of three time points between the cue and target. The control condition was the same except the visual cues did not specify a response and subjects did not speak. For each participant, we measured f0 thresholds in isolation from the task in order to establish baselines. Results indicated that auditory thresholds were highest during speech preparation, relative to baselines and a non-speech control condition, especially at suprathreshold levels. Thresholds for tones that matched the frequency of planned responses gradually increased over time, but sharply declined for the mismatched tones shortly before targets. Findings support the hypothesis that MEC influences auditory perception by modulating thresholds during speech preparation, with some specificity relative to the planned response. The threshold increase in tasks vs. baseline may reflect attentional demands of the tasks.

  16. Auditory and visual information in speech perception: A developmental perspective.

    Science.gov (United States)

    Taitelbaum-Swead, Riki; Fostick, Leah

This study investigates the development of audiovisual speech perception from age 4 to 80, analysing the contributions of modality, context, and features specific to the language being tested. Data from 77 participants in five age groups are presented. Speech stimuli were introduced via auditory, visual, and audiovisual modalities. Monosyllabic meaningful and nonsense words were presented at a signal-to-noise ratio of 0 dB. Speech perception accuracy in the audiovisual and auditory modalities followed an inverted U-shape across age, with the lowest performance at ages 4-5 and 65-80. In the visual modality, a clear difference was shown between the performance of children (ages 4-5 and 8-9) and adults (age 20 and above). The findings of the current study have important implications for strategic planning in rehabilitation programmes for child and adult speakers of different languages with hearing difficulties.

  17. Effects of Amplitude Compression on Relative Auditory Distance Perception

    Science.gov (United States)

    2013-10-01

…auditory distance perception by reducing the level differences between sounds. The focus of the present study was to investigate the effect of amplitude… …create stimuli. Two levels of amplitude compression were applied to the recordings through Adobe Audition sound-editing software to simulate military…

  18. Event-related potentials to digit learning: Tracking neurophysiologic changes accompanying recall performance

    NARCIS (Netherlands)

    Jongsma, M.L.A.; Gerrits, N.J.H.M.; Rijn, C.M. van; Quiroga, R.Q.; Maes, J.H.R.

    2012-01-01

    The aim of this study was to track recall performance and event-related potentials (ERPs) across multiple trials in a digit-learning task. When a sequence is practiced by repetition, the number of errors typically decreases and a learning curve emerges. Until now, almost all ERP learning and memory

  19. Auditory Perception in Open Field: Distance Estimation

    Science.gov (United States)

    2013-07-01

…the distance and spaciousness estimation cues can be ambiguous (e.g., sound intensity, sound spectrum) or are fairly weak (e.g., binaural cues)…

  20. Is beat induction innate or learned? Probing emergent meter perception in adults and newborns using event-related brain potentials

    NARCIS (Netherlands)

    Honing, H.; Ladinig, O.; Háden, G.P.; Winkler, I.

    2009-01-01

    Meter is considered an important structuring mechanism in the perception and experience of rhythm in music. Combining behavioral and electrophysiological measures, in the present study we investigate whether meter is more likely a learned phenomenon, possibly a result of musical expertise, or

  1. Is beat induction innate or learned? Probing emergent meter perception in adults and newborns using event-related brain potentials.

    Science.gov (United States)

    Honing, Henkjan; Ladinig, Olivia; Háden, Gábor P; Winkler, István

    2009-07-01

    Meter is considered an important structuring mechanism in the perception and experience of rhythm in music. Combining behavioral and electrophysiological measures, in the present study we investigate whether meter is more likely a learned phenomenon, possibly a result of musical expertise, or whether sensitivity to meter is also active in adult nonmusicians and newborn infants. The results provide evidence that meter induction is active in adult nonmusicians and that beat induction is already functional right after birth.

  2. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

Full Text Available Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interference that was temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  3. Distinctive Effect of Donepezil Treatment on P300 and N200 Subcomponents of Auditory Event-Related Evoked Potentials in Alzheimer Disease Patients

    OpenAIRE

    2015-01-01

    Background Latency of P300 subcomponent of event-related potentials (ERPs) increases in Alzheimer disease (AD) patients, which correlate well with cognitive impairment. Cholinesterase inhibitors (ChEIs) reduce P300 latency in AD patients with parallel improvement in cognition. It is not known whether N200 response to ChEIs is similar to that of P300. The aim of this study was to evaluate and compare characteristics of P300 and N200 in AD patients, treatment-naïve and on stable donepezil treat...

  4. Classification of Underwater Target Echoes Based on Auditory Perception Characteristics

    Institute of Scientific and Technical Information of China (English)

    Xiukun Li; Xiangxia Meng; Hang Liu; Mingye Liu

    2014-01-01

In underwater target detection, bottom reverberation shares some properties with the target echo, which greatly degrades detection performance; it is therefore essential to study the differences between target echo and reverberation. In this paper, motivated by the human auditory system's ability to distinguish sound sources, the Gammatone filterbank is taken as the auditory model. In addition, time-frequency perception features and auditory spectral features are extracted to separate active-sonar target echoes from bottom reverberation. For the experimental data, the features cluster tightly within each class and differ substantially between classes, which shows that this method can effectively distinguish the target echo from reverberation.
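The abstract names the Gammatone filterbank as its auditory front-end but gives no implementation details. A minimal sketch of such a front-end might look as follows (Python; the 4th-order filter, the Glasberg-Moore ERB bandwidth formula, and the band-energy feature are common conventions assumed here, not the authors' exact pipeline):

```python
import numpy as np

def gammatone_ir(fc, fs=16000, dur=0.05, order=4):
    """Impulse response of a gammatone filter centred at fc (Hz).

    Bandwidth follows the common ERB convention:
    ERB(fc) = 24.7 * (4.37 * fc / 1000 + 1)."""
    t = np.arange(int(dur * fs)) / fs
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)
    b = 1.019 * erb                       # bandwidth parameter
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))          # peak-normalise

def auditory_spectrum(x, centre_freqs, fs=16000):
    """Band energies across a small gammatone filterbank:
    a crude 'auditory spectral feature' vector for a signal x."""
    return np.array([np.sum(np.convolve(x, gammatone_ir(fc, fs), mode="same") ** 2)
                     for fc in centre_freqs])
```

Echo-vs.-reverberation classification would then operate on such feature vectors; the centre frequencies and classifier are free design choices.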

  5. Coordinated roles of motivation and perception in the regulation of intergroup responses: frontal cortical asymmetry effects on the P2 event-related potential and behavior.

    Science.gov (United States)

    Amodio, David M

    2010-11-01

    Self-regulation is believed to involve changes in motivation and perception that function to promote goal-driven behavior. However, little is known about the way these processes interact during the on-line engagement of self-regulation. The present study examined the coordination of motivation, perception, and action control in White American participants as they regulated responses on a racial stereotyping task. Electroencephalographic indices of approach motivation (left frontal cortical asymmetry) and perceptual attention to Black versus White faces (the P2 event-related potential) were assessed during task performance. Action control was modeled from task behavior using the process-dissociation procedure. A pattern of moderated mediation emerged, such that stronger left frontal activity predicted larger P2 responses to race, which in turn predicted better action control, especially for participants holding positive racial attitudes. Results supported the hypothesis that motivation tunes perception to facilitate goal-directed action. Implications for theoretical models of intergroup response regulation, the P2 component, and the relation between motivation and perception are discussed.

  6. Modulation of auditory percepts by transcutaneous electrical stimulation.

    Science.gov (United States)

    Ueberfuhr, Margarete Anna; Braun, Amalia; Wiegrebe, Lutz; Grothe, Benedikt; Drexl, Markus

    2017-07-01

    Transcutaneous, electrical stimulation with electrodes placed on the mastoid processes represents a specific way to elicit vestibular reflexes in humans without active or passive subject movements, for which the term galvanic vestibular stimulation was coined. It has been suggested that galvanic vestibular stimulation mainly affects the vestibular periphery, but whether vestibular hair cells, vestibular afferents, or a combination of both are excited, is still a matter of debate. Galvanic vestibular stimulation has been in use since the late 18th century, but despite the long-known and well-documented effects on the vestibular system, reports of the effect of electrical stimulation on the adjacent cochlea or the ascending auditory pathway are surprisingly sparse. The present study examines the effect of transcutaneous, electrical stimulation of the human auditory periphery employing evoked and spontaneous otoacoustic emissions and several psychoacoustic measures. In particular, level growth functions of distortion product otoacoustic emissions were recorded during electrical stimulation with alternating currents (2 Hz, 1-4 mA in 1 mA-steps). In addition, the level and frequency of spontaneous otoacoustic emissions were followed before, during, and after electrical stimulation (2 Hz, 1-4 mA). To explore the effect of electrical stimulation on the retrocochlear level (i.e. on the ascending auditory pathway beyond the cochlea), psychoacoustic experiments were carried out. Specifically, participants indicated whether electrical stimulation (4 Hz, 2 and 3 mA) induced amplitude modulations of the perception of a pure tone, and of auditory illusions after presentation of either an intense, low-frequency sound (Bounce tinnitus) or a faint band-stop noise (Zwicker tone). These three psychoacoustic measures revealed significant perceived amplitude modulations during electrical stimulation in the majority of participants. However, no significant changes of evoked and

  7. Brain dynamics that correlate with effects of learning on auditory distance perception

    Directory of Open Access Journals (Sweden)

    Matthew G. Wisniewski

    2014-12-01

Full Text Available Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2 m) and far (30 m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.
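The theta ERS and alpha ERD measures in this abstract are band-power changes relative to a baseline window. A minimal sketch of how such event-related spectral perturbations might be computed (Python; the brick-wall FFT band filter and percent-change convention are simplifying assumptions, not the authors' pipeline):

```python
import numpy as np

def band_power_timecourse(trials, fs, band):
    """Trial-averaged power time course in a frequency band.

    trials: (n_trials, n_samples) array of stimulus-locked epochs.
    band:   (lo_hz, hi_hz) tuple, e.g. (4, 8) for theta."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    spec = np.fft.rfft(trials, axis=1) * mask        # brick-wall band filter
    filtered = np.fft.irfft(spec, n=n, axis=1)
    return (filtered ** 2).mean(axis=0)

def ers_erd(power, baseline):
    """Percent power change relative to a baseline window (slice).

    Positive values = event-related synchronization (ERS),
    negative values = desynchronization (ERD)."""
    base = power[baseline].mean()
    return 100.0 * (power - base) / base
```

A stimulus-locked theta ERS, for example, would show up as a transient positive deflection of `ers_erd(band_power_timecourse(trials, fs, (4, 8)), slice(0, pre_stim))` shortly after stimulus onset.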

  8. The relationship of phonological ability, speech perception and auditory perception in adults with dyslexia.

    Directory of Open Access Journals (Sweden)

    Jeremy eLaw

    2014-07-01

Full Text Available This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or contribute independently to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal-reading adults. Phonological skills were tested with the three typical task types, i.e. rapid automatic naming, verbal short-term memory, and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) task and an amplitude rise time (RT) task; an intensity discrimination (ID) task was included as a non-dynamic control. Speech perception was assessed by means of sentences-in-noise and words-in-noise tasks. Group analysis revealed significant group differences in the auditory tasks (i.e. RT and ID) and in the phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech in noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet individual dyslexic readers did not display a clear pattern of deficiencies across the levels of processing skills. Although our results support phonological and slow-rate dynamic auditory deficits that relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors is expressed and interacts with environmental and higher-order cognitive influences.

  9. Toward a neurobiology of auditory object perception: What can we learn from the songbird forebrain?

    Institute of Scientific and Technical Information of China (English)

    Kai LU; David S. VICARIO

    2011-01-01

In the acoustic world, no sounds occur entirely in isolation; they always reach the ears in combination with other sounds. How any given sound is discriminated and perceived as an independent auditory object is a challenging question in neuroscience. Although our knowledge of neural processing in the auditory pathway has expanded over the years, no good theory exists to explain how perception of auditory objects is achieved. A growing body of evidence suggests that the selectivity of neurons in the auditory forebrain is under dynamic modulation, and this plasticity may contribute to auditory object perception. We propose that stimulus-specific adaptation in the auditory forebrain of the songbird (and perhaps in other systems) may play an important role in modulating sensitivity in a way that aids discrimination, and thus can potentially contribute to auditory object perception [Current Zoology 57 (6): 671-683, 2011].

  10. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997......)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass...
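The processing chain this abstract lists (rectifying hair-cell transduction followed by a 150-Hz lowpass modulation filter, among other stages) can be caricatured with one-pole filters. A rough sketch of two of those stages, not the Dau et al. model itself (the 1-kHz envelope cutoff and the first-order filters are assumptions made for illustration):

```python
import numpy as np

def one_pole_lowpass(x, fs, cutoff):
    """Simple first-order (one-pole) IIR lowpass filter."""
    a = np.exp(-2.0 * np.pi * cutoff / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc = (1.0 - a) * v + a * acc
        y[i] = acc
    return y

def envelope_chain(x, fs):
    """Half-wave rectification + lowpass as a crude hair-cell stage,
    followed by a 150-Hz lowpass mirroring the model's modulation filter."""
    env = one_pole_lowpass(np.maximum(x, 0.0), fs, 1000.0)  # transduction
    return one_pole_lowpass(env, fs, 150.0)                 # modulation lowpass
```

The effect of the final stage is that slow envelope fluctuations pass through while fast ones are attenuated, which is what makes such a model sensitive to modulation rate in masking experiments.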

  11. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion.

    Science.gov (United States)

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information.

  12. Vision of tongue movements bias auditory speech perception.

    Science.gov (United States)

    D'Ausilio, Alessandro; Bartoli, Eleonora; Maffongelli, Laura; Berry, Jeffrey James; Fadiga, Luciano

    2014-10-01

Audiovisual speech perception is likely based on the association of auditory and visual information into stable audiovisual maps. Conflicting audiovisual inputs generate perceptual illusions such as the McGurk effect. Audiovisual mismatch effects could be driven either by the detection of violations of standard audiovisual statistics or by the sensorimotor reconstruction of the distal articulatory event that generated the audiovisual ambiguity. In order to disambiguate between the two hypotheses, we exploit the fact that the tongue is hidden from vision. For this reason, tongue movement encoding can only be learned via speech production, not via perception of others' speech alone. Here we asked participants to identify speech sounds while viewing matching or mismatching visual representations of tongue movements. Vision of congruent tongue movements facilitated auditory speech identification with respect to incongruent trials. This result suggests that direct visual experience of an articulator's movement is not necessary for the generation of audiovisual mismatch effects. Furthermore, we suggest that audiovisual integration in speech may benefit from speech production learning. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Effects of acute oral Delta9-tetrahydrocannabinol and standardized cannabis extract on the auditory P300 event-related potential in healthy volunteers.

    Science.gov (United States)

    Roser, Patrik; Juckel, Georg; Rentzsch, Johannes; Nadulski, Thomas; Gallinat, Jürgen; Stadelmann, Andreas M

    2008-08-01

Reduced amplitudes of the auditory evoked P300 are a robust finding in schizophrenic patients, indicating deficits in attentional resource allocation and active working memory. Delta9-Tetrahydrocannabinol (Delta9-THC), the main active constituent of Cannabis sativa, is known to acutely impair cognitive abilities in several domains, particularly memory and attention. Given the psychotic-like effects of Delta9-THC, a cannabinoid hypothesis of schizophrenia has been proposed. This prospective, double-blind, placebo-controlled cross-over study investigated the acute effects of cannabinoids on P300 amplitude in 20 healthy volunteers (age 28.2+/-3.1 years, 10 male) by comparing Delta9-THC and a standardized cannabis extract containing Delta9-THC and cannabidiol (CBD). P300 waves were recorded during a choice reaction task. As expected, Delta9-THC produced a significant reduction of P300 amplitude at midline frontal, central, and parietal electrodes. CBD is known to abolish many of the psychotropic effects of Delta9-THC but, unexpectedly, failed to reverse the Delta9-THC-induced P300 reduction. Moreover, there were no correlations between cannabinoid plasma concentrations and P300 parameters. These data suggest that Delta9-THC may lead to acute impairment of attentional functioning and working memory. It can be speculated whether the lack of effect of CBD is due to an insufficient dose or to an involvement of neurotransmitter systems in P300 generation that are not influenced by CBD.

  14. Beta-Band Oscillations Represent Auditory Beat and Its Metrical Hierarchy in Perception and Imagery.

    Science.gov (United States)

    Fujioka, Takako; Ross, Bernhard; Trainor, Laurel J

    2015-11-11

    Dancing to music involves synchronized movements, which can be at the basic beat level or higher hierarchical metrical levels, as in a march (groups of two basic beats, one-two-one-two …) or waltz (groups of three basic beats, one-two-three-one-two-three …). Our previous human magnetoencephalography studies revealed that the subjective sense of meter influences auditory evoked responses phase locked to the stimulus. Moreover, the timing of metronome clicks was represented in periodic modulation of induced (non-phase locked) β-band (13-30 Hz) oscillation in bilateral auditory and sensorimotor cortices. Here, we further examine whether acoustically accented and subjectively imagined metric processing in march and waltz contexts during listening to isochronous beats were reflected in neuromagnetic β-band activity recorded from young adult musicians. First, we replicated previous findings of beat-related β-power decrease at 200 ms after the beat followed by a predictive increase toward the onset of the next beat. Second, we showed that the β decrease was significantly influenced by the metrical structure, as reflected by differences across beat type for both perception and imagery conditions. Specifically, the β-power decrease associated with imagined downbeats (the count "one") was larger than that for both the upbeat (preceding the count "one") in the march, and for the middle beat in the waltz. Moreover, beamformer source analysis for the whole brain volume revealed that the metric contrasts involved auditory and sensorimotor cortices; frontal, parietal, and inferior temporal lobes; and cerebellum. We suggest that the observed β-band activities reflect a translation of timing information to auditory-motor coordination. With magnetoencephalography, we examined β-band oscillatory activities around 20 Hz while participants listened to metronome beats and imagined musical meters such as a march and waltz. We demonstrated that β-band event-related

  15. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

Recently, there has been renewed interest in the perceptual problems of dyslexics. A contentious research issue in this area has been the nature of the perceptual deficit; another is the causal role of this deficit in dyslexia. Most studies have been carried out in literate adults and children; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia are impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia on the temporal tasks. There were no differences between groups on the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusion is that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit.

  16. Association between a cannabinoid receptor gene (CNR1) polymorphism and cannabinoid-induced alterations of the auditory event-related P300 potential.

    Science.gov (United States)

    Stadelmann, Andreas M; Juckel, Georg; Arning, Larissa; Gallinat, Jürgen; Epplen, Jörg T; Roser, Patrik

    2011-05-27

Numerous studies have demonstrated a close relationship between cannabis abuse and schizophrenia, with similar impairments in cognitive processing, particularly in P300 generation. Recently, an (AAT)n triplet repeat polymorphism within the cannabinoid receptor gene CNR1 has been found to be associated with both schizophrenia and substance dependence, and to modulate the P300 potential. As previously reported, both acute oral Δ(9)-tetrahydrocannabinol (Δ(9)-THC), the main psychoactive constituent of cannabis, and standardized cannabis extract containing Δ(9)-THC and cannabidiol (CBD) produced a significant reduction of P300 amplitudes in healthy subjects but did not differ from each other. The aim of this study was to investigate whether the (AAT)n polymorphism differentially modulates the effects of Δ(9)-THC and cannabis extract on P300 generation in 20 healthy volunteers during an auditory choice reaction task. For the >10/>10 genotype, there was a significant decrease of P300 amplitude as well as a significant prolongation of P300 latency under pure Δ(9)-THC but not under cannabis extract. Moreover, we found a significant correlation between the number of AAT repeats and P300 variables for the Δ(9)-THC condition. Our data thus indicate that the CNR1 gene seems to be involved in the regulation of the P300 wave as a marker of selective attention and working memory. Moreover, it appears that variations within CNR1 may differentially alter sensitivity to the acute effects of cannabinoids on P300 generation in healthy subjects.

  17. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception.

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J

    2016-02-01

Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli, whereas most past research has relied on written material. In all experiments, a positive or negative computer-generated context spoken in a flat emotional tone was followed by a literally positive statement spoken in a sincere or sarcastic tone of voice. Participants indicated for each statement whether the intonation was sincere or sarcastic. In Experiment 1, a congruent context/tone-of-voice pairing (negative/sarcastic, positive/sincere) produced fast response times and proportions of sarcastic responses in the direction predicted by the tone of voice. Incongruent pairings produced mid-range proportions and slower response times. Experiment 2 introduced ambiguous contexts to determine whether a lower context/statement contrast would affect the proportion of sarcastic responses and response time. Results showed the expected findings for proportions (values between those obtained for congruent and incongruent pairings, in the direction predicted by the tone of voice). However, response time failed to produce the predicted pattern, suggesting potential issues with the choice of stimuli. Experiments 3 and 4 extended the results of Experiments 1 and 2, respectively, to auditory stimuli based on written vignettes used in neuropsychological assessment. Results were exactly as predicted by contrast effects in both experiments. Taken together, the findings suggest that both context and tone influence how sarcasm is perceived, while supporting the importance of contrast effects in sarcasm perception.

  18. Brown Norway rats, a putative schizophrenia model, show increased electroencephalographic activity at rest and decreased event-related potential amplitude, power, and coherence in the auditory sensory gating paradigm.

    Science.gov (United States)

    Tomimatsu, Yoshiro; Hibino, Ryosuke; Ohta, Hiroyuki

    2015-08-01

    In recent schizophrenia clinical research, electroencephalographic (EEG) oscillatory activities induced by a sensory stimulus or behavioral tasks have gained considerable interest as functional and pathophysiological biomarkers. The Brown Norway (BN) rat is a putative schizophrenia model that shows naturally low sensorimotor gating and deficits in cognitive performance, although other phenotypes have not been studied. The present study aimed to investigate the neurophysiological features of BN rats, particularly EEG/event-related potential (ERP) measures. EEG activity was recorded at rest and during the auditory sensory gating paradigm under an awake, freely moving condition. Frequency and ERP analyses were performed, along with time-frequency analysis of evoked power and intertrial coherence. Compared with Wistar-Kyoto rats, a well-documented control line, BN rats showed increased EEG power at rest, particularly in the theta and gamma ranges. In ERP analysis, BN rats showed reduced N40-P20 amplitude but normal sensory gating. The rats also showed reduced evoked power and intertrial coherence in response to auditory stimuli. These results suggest that BN rats show features of EEG/ERP measures clinically relevant to schizophrenia and may provide additional opportunities for translational research.
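As a rough illustration of the evoked-power and intertrial-coherence measures mentioned above, the sketch below computes both at a single frequency from a stack of single-trial epochs. The function name and the complex-demodulation shortcut (standing in for the wavelet or multitaper transforms EEG toolboxes typically use) are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def evoked_power_and_itc(trials, fs, freq):
    """Evoked power and intertrial coherence (ITC) at one frequency.

    trials: array of single-trial epochs, shape (n_trials, n_samples)
    fs:     sampling rate in Hz
    freq:   analysis frequency in Hz
    """
    n_trials, n_samples = trials.shape
    t = np.arange(n_samples) / fs
    # One complex coefficient per trial via complex demodulation.
    kernel = np.exp(-2j * np.pi * freq * t)
    coeffs = (trials * kernel).sum(axis=1)
    # Power of the trial-averaged (phase-locked) response.
    evoked_power = np.abs(coeffs.mean()) ** 2
    # ITC: 0 = random phase across trials, 1 = perfect phase locking.
    itc = np.abs((coeffs / np.abs(coeffs)).mean())
    return evoked_power, itc
```

Reduced intertrial coherence, as reported for the BN rats, would show up here as ITC values well below 1 even when single-trial power is normal.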

  19. Can auditory and visual speech perception be trained within a group setting?

    Science.gov (United States)

    Preminger, Jill E; Ziegler, Craig H

    2008-06-01

    This study attempted to determine whether auditory-only and auditory-visual speech perception could be trained in a group format. A randomized controlled trial with at least 16 participants per group was completed. A training-only group completed at least 5 hr of group speech perception training; a training plus psychosocial group completed at least 5 hr of group speech perception training and psychosocial exercises; and a control group did not receive training. Evaluations were conducted before and after training and included analytic and synthetic measures of speech perception, hearing loss-related and generic quality of life scales, and a class evaluation form. No significant group changes were measured on any of the analytic auditory-only or auditory-visual measures of speech perception, yet the majority of training participants (regardless of training group) reported improvement in auditory and auditory-visual speech perception. The training participants demonstrated a significant reduction on the emotional subscale of the hearing loss-related quality of life scale, while the control participants did not demonstrate a change on this subscale. Benefits of group audiologic rehabilitation classes may not result from an actual improvement in auditory or visual speech perception abilities, but participants still perceive training in these areas as useful.

  20. Relating binaural pitch perception to the individual listener's auditory profile

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2012-01-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception ... were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural ...

  1. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    We have recently demonstrated that alternating left-right sound sources induce motion perception in static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in a situation where auditory positional information would have little influence on the perceived position of visual stimuli: the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion percept, so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  2. Auditory feedback affects perception of effort when exercising with a Pulley machine

    DEFF Research Database (Denmark)

    Bordegoni, Monica; Ferrise, Francesco; Grani, Francesco

    2013-01-01

    In this paper we describe an experiment that investigates the role of auditory feedback in affecting the perception of effort when using a physical pulley machine. Specifically, we investigated whether variations in the amplitude and frequency content of the pulley sound affect perception of effort. ... Results show that variations in frequency content affect the perception of effort. ...

  3. Visual activation and audiovisual interactions in the auditory cortex during speech perception: intracranial recordings in humans.

    Science.gov (United States)

    Besle, Julien; Fischer, Catherine; Bidet-Caulet, Aurélie; Lecaignard, Francoise; Bertrand, Olivier; Giard, Marie-Hélène

    2008-12-24

    Hemodynamic studies have shown that the auditory cortex can be activated by visual lip movements and is a site of interactions between auditory and visual speech processing. However, they provide no information about the chronology and mechanisms of these cross-modal processes. We recorded intracranial event-related potentials to auditory, visual, and bimodal speech syllables from depth electrodes implanted in the temporal lobe of 10 epileptic patients (altogether 932 contacts). We found that lip movements activate secondary auditory areas, very shortly (approximately equal to 10 ms) after the activation of the visual motion area MT/V5. After this putatively feedforward visual activation of the auditory cortex, audiovisual interactions took place in the secondary auditory cortex, from 30 ms after sound onset and before any activity in the polymodal areas. Audiovisual interactions in the auditory cortex, as estimated in a linear model, consisted both of a total suppression of the visual response to lipreading and a decrease of the auditory responses to the speech sound in the bimodal condition compared with unimodal conditions. These findings demonstrate that audiovisual speech integration does not respect the classical hierarchy from sensory-specific to associative cortical areas, but rather engages multiple cross-modal mechanisms at the first stages of nonprimary auditory cortex activation.

  4. Impact of Language on Development of Auditory-Visual Speech Perception

    Science.gov (United States)

    Sekiyama, Kaoru; Burnham, Denis

    2008-01-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…

  5. A loudspeaker-based room auralization system for auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    2009-01-01

    ... This system provides a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. This includes measures of basic auditory function (e.g., signal detection, distance perception...

  6. Auditory event-related potential in the female subjective sexual arousal state

    Institute of Scientific and Technical Information of China (English)

    甄宏丽; 胡佩诚; 陶林; 何胜昔

    2011-01-01

    Objective: To study the auditory event-related potential (AERP) in the female subjective sexual arousal state induced by listening to sexual auditory material. Methods: A sexual auditory material was developed. Thirty females were enrolled and assessed with multiple indicators of subjective sexual arousal while listening to the material. AERPs were recorded and analyzed with a Brain-Product BP-ERP workstation while the participants listened to pure tones, relaxing music, and the sexual auditory material. P3 latency was analyzed with single-factor repeated-measures analysis of variance, and P3 amplitude with a K related-samples test. Results: (1) The multiple indicators of subjective sexual arousal showed that the participants reached low-to-moderate subjective sexual arousal (P < 0.001). (2) P3 amplitudes were significantly lower with the sexual auditory material than with relaxing music (P < 0.001), but the latencies did not differ significantly between the two backgrounds [(436.8 ± 94.3) ms vs. (427.3 ± 94.4) ms, P > 0.05]. (3) In the oddball paradigm, the accuracy rate was lowest under the sexual-auditory-material background (P < 0.001). Conclusion: The sexual auditory material induced low-to-moderate subjective sexual arousal in females. P3 amplitude is significantly reduced during subjective sexual arousal, suggesting that females direct more attention to self-observation during arousal than in a relaxed state.

  7. Neuropharmacological modulation of the P3-like event-related potential in a rat two-tone auditory discrimination task with modafinil and NS9283, a positive allosteric modulator of α4β2 nAChRs.

    Science.gov (United States)

    Grupe, Morten; Grunnet, Morten; Laursen, Bettina; Bastlund, Jesper Frank

    2014-04-01

    The P300 (P3) event-related potential (ERP) is a neurophysiological signal believed to reflect cognitive processing of salient cues, and is thus used as a measure of attention and working memory. Additionally, P3 amplitude and latency are altered in neurological diseases and can be pharmacologically modulated. As P3-like ERPs can be recorded in rodents, the measure may serve as a translational biomarker of value for drug discovery. Here we investigated whether a positive allosteric modulator of α4β2 nicotinic acetylcholine receptors, NS9283, and the psychostimulant modafinil could modulate P3-like ERPs in healthy adult rats performing an auditory oddball discrimination task. ERPs were recorded with electroencephalography electrodes implanted into mediodorsal (MD) thalamus, medial prefrontal cortex, hippocampus, and auditory cortex (AC). P3-like ERPs were detected in all brain regions, displaying larger amplitudes in target trials compared to non-target trials. Administration of modafinil (64 mg/kg) decreased P3-like ERP latency in MD thalamus and AC, whereas NS9283 augmented P3-like ERP amplitude in MD thalamus at 0.3 mg/kg and in AC at 3.0 mg/kg. Additionally, N1 pre-attention peak amplitude in MD thalamus was increased with 0.3 mg/kg NS9283. Neither of the compounds enhanced task performance; rather, modafinil lowered correct rejections in non-target trials. In summary, our findings reveal pharmacological modulation of the rat P3-like ERP in cortical and subcortical regions by modafinil and NS9283. These findings encourage further exploration of the rat P3-like ERP in order to promote the understanding of its meaning within cognition, as well as its applicability as a translatable biomarker in drug development.
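At its core, the oddball ERP analysis described above averages time-locked epochs and reads off the amplitude and latency of the P3-like peak, which can then be compared between target and non-target trials or dosing conditions. A minimal sketch follows; the function name, search window, and array layout are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def p3_peak(trials, fs, window=(0.25, 0.5)):
    """Trial-average epochs and return the P3-like peak.

    trials: epochs time-locked to stimulus onset, shape (n_trials, n_samples)
    fs:     sampling rate in Hz
    window: post-stimulus search window in seconds
    Returns (amplitude, latency_in_seconds).
    """
    erp = trials.mean(axis=0)            # average out non-phase-locked noise
    lo, hi = (int(w * fs) for w in window)
    i = lo + int(np.argmax(erp[lo:hi]))  # most positive deflection in window
    return erp[i], i / fs
```

A latency effect like the modafinil-induced decrease would appear as a shift in the second return value between conditions, while the NS9283 amplitude effect would appear in the first.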

  8. Effects of Acute Anoxia on the 40 Hz Auditory Event-Related Potential in Guinea Pigs

    Institute of Scientific and Technical Information of China (English)

    王鸿南; 王希军; 宋江顺

    2001-01-01

    Objective: To investigate changes in the 40 Hz auditory event-related potential (40 Hz AERP) in guinea pigs under acute anoxia. Methods: Acute anoxia of different degrees was induced in guinea pigs via tracheal intubation with assisted ventilation using low-oxygen gas mixtures, and changes in the 40 Hz AERP were observed. Results: Under mild anoxia, no obvious changes in the 40 Hz AERP were observed. Under severe anoxia, the 40 Hz AERP threshold was elevated, the mean amplitude of waves P1-P4 was reduced, and the P1 latency was prolonged. Conclusion: The 40 Hz AERP is stable under mild anoxia, which may be due to its intrinsic electrophysiologic characteristics, whereas severe anoxia clearly inhibits it.

  9. Growth hormone deficiency due to sports-related head trauma is associated with impaired cognitive performance in amateur boxers and kickboxers as revealed by P300 auditory event-related potentials.

    Science.gov (United States)

    Tanriverdi, Fatih; Suer, Cem; Yapislar, Hande; Kocyigit, Ismail; Selcuklu, Ahmet; Unluhizarci, Kursad; Casanueva, Felipe F; Kelestimur, Fahrettin

    2013-05-01

    It has recently been reported that boxing and kickboxing may cause pituitary dysfunction, GH deficiency in particular. The strong link between poor cognitive performance and GH deficiency due to causes other than head trauma, and the improvement of cognitive function after GH replacement therapy, have been shown previously. The P300 auditory event-related potential (ERP) is widely used to evaluate cognitive performance. In this study, we investigated the relation between the GH-IGF-I axis and cognitive performance in boxers and kickboxers. Forty-one actively competing or retired male boxers (n: 27) and kickboxers (n: 14) with a mean age of 29.04 ± 9.30 years and 14 age- and education-matched healthy male controls were included in the study. For neuropsychological assessment, the mini-mental state examination (MMSE) and Quality of Life Assessment of GH Deficiency in Adults (QoL-AGHDA) questionnaires were administered. Moreover, cognitive performance was evaluated according to P300 ERPs. Nine of 41 (21.9%) athletes had GH deficiency. P300 amplitudes were lower at all electrode sites in the GH-deficient group than in controls, and the differences were statistically significant at the Fz and Oz electrode sites, consistent with impaired cognitive performance in GH-deficient boxers and kickboxers. © 2012 Blackwell Publishing Ltd.

  10. A corticostriatal neural system enhances auditory perception through temporal context processing.

    Science.gov (United States)

    Geiser, Eveline; Notter, Michael; Gabrieli, John D E

    2012-05-02

    The temporal context of an acoustic signal can greatly influence its perception. The present study investigated the neural correlates underlying perceptual facilitation by regular temporal contexts in humans. Participants listened to temporally regular (periodic) or temporally irregular (nonperiodic) sequences of tones while performing an intensity discrimination task. Participants performed significantly better on intensity discrimination during periodic than nonperiodic tone sequences. There was greater activation in the putamen for periodic than nonperiodic sequences. Conversely, there was greater activation in bilateral primary and secondary auditory cortices (planum polare and planum temporale) for nonperiodic than periodic sequences. Across individuals, greater putamen activation correlated with lesser auditory cortical activation in both right and left hemispheres. These findings suggest that temporal regularity is detected in the putamen, and that such detection facilitates temporal-lobe cortical processing associated with superior auditory perception. Thus, this study reveals a corticostriatal system associated with contextual facilitation for auditory perception through temporal regularity processing.
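The contrast between temporally regular and irregular sequences above comes down to whether the inter-onset interval (IOI) between tones is fixed or jittered. The sketch below generates such stimulus timing; the function name and parameter values are illustrative assumptions, not the stimulus parameters used in the study:

```python
import numpy as np

def tone_onsets(n_tones, ioi=0.4, jitter=0.0, rng=None):
    """Onset times (in seconds) for a tone sequence.

    jitter=0 yields a periodic (isochronous) sequence; jitter>0
    perturbs each inter-onset interval uniformly by up to +/- jitter.
    """
    rng = rng if rng is not None else np.random.default_rng()
    iois = ioi + rng.uniform(-jitter, jitter, size=n_tones - 1)
    return np.concatenate([[0.0], np.cumsum(iois)])
```

With jitter on the order of a quarter to half of the IOI, the sequence loses any perceptible beat, which is the kind of periodic-versus-nonperiodic manipulation contrasted in the study.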

  11. Comparison of Auditory Perception in Cochlear Implanted Children with and without Additional Disabilities

    Directory of Open Access Journals (Sweden)

    Seyed Basir Hashemi

    2016-05-01

    Background: The number of children with cochlear implants who have other difficulties such as attention deficiency and cerebral palsy has increased dramatically. Despite the need for information on the results of cochlear implantation in this group, the available literature is extremely limited. We, therefore, sought to compare the levels of auditory perception in children with cochlear implants with and without additional disabilities. Methods: A spondee test comprising 20 two-syllable words was performed. The data analysis was done using SPSS, version 19. Results: Thirty-one children who had received cochlear implants 2 years previously and were at an average age of 7.5 years were compared via the spondee test. Of the 31 children, 15 had one or more additional disabilities. The data analysis indicated that the mean score of auditory perception in this group was approximately 30 points below that of the children with cochlear implants who had no additional disabilities. Conclusion: Although there was an improvement in the auditory perception of all the children with cochlear implants, there was a noticeable difference in the level of auditory perception between those with and without additional disabilities. Deafness combined with additional disabilities made the children dependent on lip reading alongside auditory modes of communication. In addition, the level of auditory perception in the children with cochlear implants who had more than one additional disability was significantly lower than that of the other children with cochlear implants who had one additional disability.

  12. The Perception of Cooperativeness Without Any Visual or Auditory Communication.

    Science.gov (United States)

    Chang, Dong-Seon; Burger, Franziska; Bülthoff, Heinrich H; de la Rosa, Stephan

    2015-12-01

    Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal.

  13. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2012-01-01

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  14. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Directory of Open Access Journals (Sweden)

    Juan Huang

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms), presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  15. The relationship between auditory-visual speech perception and language-specific speech perception at the onset of reading instruction in English-speaking children.

    Science.gov (United States)

    Erdener, Doğu; Burnham, Denis

    2013-10-01

    Speech perception is auditory-visual, but relatively little is known about auditory-visual compared with auditory-only speech perception. One avenue for further understanding is via developmental studies. In a recent study, Sekiyama and Burnham (2008) found that English speakers significantly increase their use of visual speech information between 6 and 8 years of age but that this development does not appear to be universal across languages. Here, the possible bases for this language-specific increase among English speakers were investigated. Four groups of English-language children (5, 6, 7, and 8 years) and a group of adults were tested on auditory-visual, auditory-only, and visual-only speech perception; language-specific speech perception with native and non-native speech sounds; articulation; and reading. Results showed that language-specific speech perception and lip-reading ability reliably predicted auditory-visual speech perception in children but that adult auditory-visual speech perception was predicted by auditory-only speech perception. The implications are discussed in terms of both auditory-visual speech perception and language development. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Auditory cortical activity during cochlear implant-mediated perception of spoken language, melody, and rhythm.

    Science.gov (United States)

    Limb, Charles J; Molloy, Anne T; Jiradejvong, Patpong; Braun, Allen R

    2010-03-01

    Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H(2) (15)O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study.

  17. Auditory event-related potential characteristics of attention in the male sexual response

    Institute of Scientific and Technical Information of China (English)

    甄宏丽; 胡佩诚; 陶林; 何胜昔

    2011-01-01

    Objective: To study changes in attention during the male sexual response through auditory event-related potential (AERP) research. Methods: Thirty adult males recruited by advertisement were assessed with multiple indicators of subjective sexual arousal after listening to sexual auditory material. AERPs were recorded with a Brain-Product BP-ERP workstation while the participants were in relaxed and subjective sexual response states; P3 latency and amplitude at prefrontal electrode sites were analyzed, along with the accuracy of the reported number of target stimuli in an oddball paradigm. Results: The multiple indicators of subjective sexual arousal showed that the participants reached a low-to-moderate subjective sexual response state. P3 latencies at multiple electrode sites were significantly longer in the sexual response state than in the relaxed state [(447.49 ± 72.79) ms vs. (417.18 ± 53.12) ms, P < 0.05], while the amplitudes did not differ significantly (P > 0.05). Accuracy in reporting the number of oddball target stimuli was lower in the subjective sexual response state than in the relaxed state [(95.3 ± 3.1)% vs. (91.4 ± 2.6)%, P < 0.05]. Conclusion: During the subjective sexual response, males may still maintain a certain degree of deliberate attention to the outside world.

  18. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication, as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal-processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing ... aimed at experimentally characterizing the effects of cochlear damage on listeners' auditory processing, in terms of sensitivity loss and reduced temporal and spectral resolution. The results showed that listeners with comparable audiograms can have very different estimated cochlear input ...

  19. Task-dependent calibration of auditory spatial perception through environmental visual observation.

    Science.gov (United States)

    Tonelli, Alessia; Brayda, Luca; Gori, Monica

    2015-01-01

    Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve precision of the final multisensory estimate. However, the amount or the temporal extent of visual information, that is sufficient to influence auditory perception, is still unknown. It is therefore interesting to know if vision can improve auditory precision through a short-term environmental observation preceding the audio task and whether this influence is task-specific or environment-specific or both. To test these issues we investigate possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the MAA after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.

  20. The Phonotactic Influence on the Perception of a Consonant Cluster /pt/ by Native English and Native Polish Listeners: A Behavioral and Event Related Potential (ERP) Study

    Science.gov (United States)

    Wagner, Monica; Shafer, Valerie L.; Martin, Brett; Steinschneider, Mitchell

    2012-01-01

    The effect of exposure to the contextual features of the /pt/ cluster was investigated in native-English and native-Polish listeners using behavioral and event-related potential (ERP) methodology. Both groups experience the /pt/ cluster in their languages, but only the Polish group experiences the cluster in the context of word onset examined in…

  2. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

    To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors. Eighty-five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated, with statistical analysis using a significance level of 5%. None of the professors indicated absence of noise. Responses were grouped as Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were both the yard and another class, which were classified as high intensity, as well as poor acoustics and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory, and emotional complaints. There was also no association between referred noise and hearing complaints, although group A showed a higher occurrence of responses regarding noise nuisance, hearing difficulty, dizziness/vertigo, tinnitus, and earache. There was an association between referred noise and voice alterations, and group NA presented a higher percentage of cases with voice alterations than group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. Hearing complaints were more evident among professors in group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravating factor.

  3. Visual Temporal Acuity Is Related to Auditory Speech Perception Abilities in Cochlear Implant Users.

    Science.gov (United States)

    Jahn, Kelly N; Stevenson, Ryan A; Wallace, Mark T

    Despite significant improvements in speech perception abilities following cochlear implantation, many prelingually deafened cochlear implant (CI) recipients continue to rely heavily on visual information to develop speech and language. Increased reliance on visual cues for understanding spoken language could lead to the development of unique audiovisual integration and visual-only processing abilities in these individuals. Brain imaging studies have demonstrated that good CI performers, as indexed by auditory-only speech perception abilities, have different patterns of visual cortex activation in response to visual and auditory stimuli as compared with poor CI performers. However, no studies have examined whether speech perception performance is related to any type of visual processing abilities following cochlear implantation. The purpose of the present study was to provide a preliminary examination of the relationship between clinical, auditory-only speech perception tests, and visual temporal acuity in prelingually deafened adult CI users. It was hypothesized that prelingually deafened CI users, who exhibit better (i.e., more acute) visual temporal processing abilities would demonstrate better auditory-only speech perception performance than those with poorer visual temporal acuity. Ten prelingually deafened adult CI users were recruited for this study. Participants completed a visual temporal order judgment task to quantify visual temporal acuity. To assess auditory-only speech perception abilities, participants completed the consonant-nucleus-consonant word recognition test and the AzBio sentence recognition test. Results were analyzed using two-tailed partial Pearson correlations, Spearman's rho correlations, and independent samples t tests. Visual temporal acuity was significantly correlated with auditory-only word and sentence recognition abilities. In addition, proficient CI users, as assessed via auditory-only speech perception performance, demonstrated…
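The partial Pearson correlation analysis mentioned above can be sketched via residualization; the data, variable names, and the choice of a controlled covariate below are hypothetical illustrations, not the study's data:

```python
"""Sketch: partial Pearson correlation (x vs. y, controlling for z) computed
by correlating the residuals of x and y after regressing each on z.
All numbers are made up for illustration."""
from statistics import mean

def pearson(a, b):
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def residuals(y, z):
    # Residuals of y after regressing out z (simple least squares).
    mz, my = mean(z), mean(y)
    beta = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) / \
           sum((zi - mz) ** 2 for zi in z)
    return [yi - (my + beta * (zi - mz)) for zi, yi in zip(z, y)]

def partial_corr(x, y, z):
    return pearson(residuals(x, z), residuals(y, z))

# Hypothetical scores: visual temporal thresholds, word recognition, a covariate.
acuity = [12, 15, 20, 22, 30, 33, 41, 47, 52, 60]
words  = [88, 85, 80, 78, 70, 69, 60, 55, 50, 44]
covar  = [2, 3, 3, 4, 5, 5, 6, 7, 8, 9]

r = partial_corr(acuity, words, covar)
print(f"partial r = {r:.2f}")
```

With these toy numbers, higher (worse) temporal-order thresholds go with lower word scores even after the covariate is partialled out, so the partial correlation comes out negative.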

  4. Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds

    Science.gov (United States)

    Prather, Jonathan F.

    2013-01-01

    Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717

  5. The simultaneous perception of auditory-tactile stimuli in voluntary movement.

    Science.gov (United States)

    Hao, Qiao; Ogata, Taiki; Ogawa, Ken-Ichiro; Kwon, Jinhwan; Miyake, Yoshihiro

    2015-01-01

    The simultaneous perception of multimodal information in the environment during voluntary movement is very important for effective reactions to the environment. Previous studies have found that voluntary movement affects the simultaneous perception of auditory and tactile stimuli. However, the results of these experiments are not completely consistent, and the differences may be attributable to methodological differences in the previous studies. In this study, we investigated the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli using a temporal order judgment task with voluntary movement, involuntary movement, and no movement. To eliminate the potential effect of stimulus predictability and the effect of spatial information associated with large-scale movement in the previous studies, we randomized the interval between the start of movement and the first stimulus, and used small-scale movement. As a result, the point of subjective simultaneity (PSS) during voluntary movement shifted from the tactile stimulus being first during involuntary movement or no movement to the auditory stimulus being first. The just noticeable difference (JND), an indicator of temporal resolution, did not differ across the three conditions. These results indicate that voluntary movement itself affects the PSS in auditory-tactile simultaneous perception, but it does not influence the JND. In the discussion of these results, we suggest that simultaneous perception may be affected by the efference copy.
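The PSS and JND measures described above can be illustrated with a small fitting sketch; the SOAs, response proportions, and grid-search ranges are made-up illustration values, not the study's data:

```python
"""Sketch: estimating PSS and JND from temporal order judgments.
At each SOA (tactile-leading negative, auditory-leading positive, in ms) we
have a proportion of "auditory first" responses. A cumulative Gaussian is
fit by grid search; the PSS is its mean, and the JND is taken here as one
standard deviation (the 50%-to-84% distance)."""
from statistics import NormalDist

soas = [-120, -80, -40, 0, 40, 80, 120]                 # ms, hypothetical
p_auditory_first = [0.05, 0.12, 0.30, 0.55, 0.78, 0.93, 0.98]

def fit_cumulative_gaussian(x, p):
    best = None
    for mu in range(-100, 101):          # candidate PSS values, ms
        for sigma in range(5, 201):      # candidate spreads, ms
            nd = NormalDist(mu, sigma)
            err = sum((nd.cdf(xi) - pi) ** 2 for xi, pi in zip(x, p))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    return best[1], best[2]

pss, sigma = fit_cumulative_gaussian(soas, p_auditory_first)
jnd = sigma                              # one-SD criterion for the JND
print(f"PSS = {pss} ms, JND = {jnd} ms")
```

A maximum-likelihood fit would be more standard than least squares, but the grid search keeps the sketch dependency-free.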

  6. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise.

  7. The effect of music on auditory perception in cochlear-implant users and normal-hearing listeners

    NARCIS (Netherlands)

    Fuller, Christina Diechina

    2016-01-01

    Cochlear implants (CIs) are auditory prostheses for severely deaf people who do not benefit from conventional hearing aids. Speech perception is reasonably good with CIs; the perception of other signals, such as music, remains challenging. First, the perception of music and music-related perception in CI users…

  9. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis.

    Directory of Open Access Journals (Sweden)

    Aniruddh D. Patel

    2014-05-01

    Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This action simulation for auditory prediction (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and that these connections may be stronger in humans than in nonhuman primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.

  10. Categorical vowel perception enhances the effectiveness and generalization of auditory feedback in human-machine-interfaces.

    Directory of Open Access Journals (Sweden)

    Eric Larson

    Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite pragmatic benefits, auditory feedback remains underutilized for HMI control, in part due to observed limitations in effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1-3 and were tested on 3 novel targets in session 4. An "established categories with text cues" group of eight participants was trained and tested on auditory targets corresponding to standard American English vowels using auditory and text target cues. An "established categories without text cues" group of eight participants was trained and tested on the same targets using only auditory cuing of target vowel identity. A "new categories" group of eight participants was trained and tested on targets that corresponded to vowel-like sounds not part of American English. Analyses of user performance revealed significant effects of session and group (established categories groups and the new categories group), and a trend for an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native) vowel targets with an unambiguous cue.

  11. Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation

    Directory of Open Access Journals (Sweden)

    Enrique A Lopez-Poveda

    2013-07-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.
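A toy version of the stochastic-undersampling idea can make the aggregation argument concrete. This is not the authors' vocoder; the sampling rule (keep a sample with probability proportional to its instantaneous magnitude) and all parameters are assumptions for illustration:

```python
"""Sketch: each simulated afferent conveys a stochastically subsampled copy
of the waveform, with intense samples transmitted more reliably than weak
ones. Aggregating more afferents yields a more faithful representation,
so "deafferentation" (fewer copies) raises the reconstruction error."""
import math
import random

random.seed(1)
N = 512
wave = [math.sin(2 * math.pi * 5 * t / N) for t in range(N)]  # 5-cycle tone

def afferent_copy(w):
    # Keep each sample with probability proportional to |amplitude|.
    return [x if random.random() < abs(x) else 0.0 for x in w]

def aggregate(w, n_afferents):
    copies = [afferent_copy(w) for _ in range(n_afferents)]
    return [sum(c[i] for c in copies) / n_afferents for i in range(len(w))]

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

few = rms_error(wave, aggregate(wave, 2))    # heavily "deafferented"
many = rms_error(wave, aggregate(wave, 50))  # intact population
print(f"RMS error with 2 afferents: {few:.3f}; with 50 afferents: {many:.3f}")
```

Averaging many noisy copies shrinks the sampling variance, which is the core of the hypothesis that fewer surviving afferents degrade the waveform code, especially for weak features.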

  12. Development of Auditory Perception of Musical Sounds by Children in the First Six Grades.

    Science.gov (United States)

    Petzold, Robert G.

    The auditory perception of musical sounds by a sample of 600 children in the first six grades was studied. Three tests were constructed for this study. Their content was based upon an extensive analysis of tonal and rhythmic configurations found in the songs children sing. The 45-item and one of the 20-item tests were designed to collect data…

  13. General Auditory Processing, Speech Perception and Phonological Awareness Skills in Chinese-English Biliteracy

    Science.gov (United States)

    Chung, Kevin K. H.; McBride-Chang, Catherine; Cheung, Him; Wong, Simpson W. L.

    2013-01-01

    This study focused on the associations of general auditory processing, speech perception, phonological awareness and word reading in Cantonese-speaking children from Hong Kong learning to read both Chinese (first language [L1]) and English (second language [L2]). Children in Grades 2-4 (N = 133) participated and were administered…

  14. Early Experience of Sex Hormones as a Predictor of Reading, Phonology, and Auditory Perception

    Science.gov (United States)

    Beech, John R.; Beauvois, Michael W.

    2006-01-01

    Previous research has indicated possible reciprocal connections between phonology and reading, and also connections between aspects of auditory perception and reading. The present study investigates these associations further by examining the potential influence of prenatal androgens using measures of digit ratio (the ratio of the lengths of the…

  15. The perception of prosody and associated auditory cues in early-implanted children: the role of auditory working memory and musical activities.

    Science.gov (United States)

    Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti

    2014-03-01

    To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception, intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination and forward digit span in implanted children exposed to music was equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination: sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.

  16. Auditory-Visual Perception of Changing Distance by Human Infants.

    Science.gov (United States)

    Walker-Andrews, Arlene S.; Lennon, Elizabeth M.

    1985-01-01

    Examines, in two experiments, 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement. One experiment presented two films with soundtracks in either a match or mismatch condition; the second showed the two films side-by-side with a single soundtrack appropriate to one. Infants demonstrated visual…

  17. Tracing the emergence of categorical speech perception in the human auditory system.

    Science.gov (United States)

    Bidelman, Gavin M; Moreno, Sylvain; Alain, Claude

    2013-10-01

    Speech perception requires the effortless mapping from smooth, seemingly continuous changes in sound features into discrete perceptual units, a conversion exemplified in the phenomenon of categorical perception. Explaining how/when the human brain performs this acoustic-phonetic transformation remains an elusive problem in current models and theories of speech perception. In previous attempts to decipher the neural basis of speech perception, it is often unclear whether the alleged brain correlates reflect an underlying percept or merely changes in neural activity that covary with parameters of the stimulus. Here, we recorded neuroelectric activity generated at both cortical and subcortical levels of the auditory pathway elicited by a speech vowel continuum whose percept varied categorically from /u/ to /a/. This integrative approach allows us to characterize how various auditory structures code, transform, and ultimately render the perception of speech material as well as dissociate brain responses reflecting changes in stimulus acoustics from those that index true internalized percepts. We find that activity from the brainstem mirrors properties of the speech waveform with remarkable fidelity, reflecting progressive changes in speech acoustics but not the discrete phonetic classes reported behaviorally. In comparison, patterns of late cortical evoked activity contain information reflecting distinct perceptual categories and predict the abstract phonetic speech boundaries heard by listeners. Our findings demonstrate a critical transformation in neural speech representations between brainstem and early auditory cortex analogous to an acoustic-phonetic mapping necessary to generate categorical speech percepts. Analytic modeling demonstrates that a simple nonlinearity accounts for the transformation between early (subcortical) brain activity and subsequent cortical/behavioral responses to speech (>150-200 ms) thereby describing a plausible mechanism by which the…
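The "simple nonlinearity" idea can be illustrated by passing a graded code through a sigmoid readout; the function, midpoint, and slope below are illustrative assumptions, not the authors' fitted model:

```python
"""Sketch: a sigmoid nonlinearity turns a graded (linear) representation of
a stimulus continuum into a near-categorical one with a sharp boundary at
the midpoint. Parameters are arbitrary illustration values."""
import math

def sigmoid(x, midpoint=5.0, slope=2.0):
    return 1.0 / (1.0 + math.exp(-slope * (x - midpoint)))

continuum = range(1, 10)                              # /u/ ... /a/ vowel steps
graded = [step / 10 for step in continuum]            # linear "subcortical" code
categorical = [sigmoid(step) for step in continuum]   # "cortical" readout

for step, g, c in zip(continuum, graded, categorical):
    print(f"step {step}: graded {g:.2f} -> categorical {c:.2f}")
```

The graded code changes by the same amount at every step, while the sigmoid output stays near 0 or 1 except around the category boundary, mirroring the brainstem-versus-cortex dissociation described above.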

  18. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners; specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Mandarin Chinese vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody.

  19. (A)musicality in Williams syndrome: Examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Directory of Open Access Journals (Sweden)

    Miriam Lense

    2013-08-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.

  20. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.

  1. Tactile stimulation and hemispheric asymmetries modulate auditory perception and neural responses in primary auditory cortex.

    Science.gov (United States)

    Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T

    2013-10-01

    Although multisensory integration has been an important area of recent research, most studies focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior as effectively which we studied here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI-signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD-responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity.

  2. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

    Pitch is an important attribute of hearing that allows us to perceive the musical quality of sounds. Besides music perception, pitch contributes to speech communication, auditory grouping, and perceptual segregation of sound sources. In this work, several aspects of pitch perception in humans were...... investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifically, listeners without binaural pitch sensation showed signs of retrocochlear disorders. Despite adverse effects...

  3. Inconsistent Effect of Arousal on Early Auditory Perception.

    Science.gov (United States)

    Bolders, Anna C; Band, Guido P H; Stallen, Pieter Jan M

    2017-01-01

    Mood has been shown to influence cognitive performance. However, little is known about the influence of mood on sensory processing, specifically in the auditory domain. With the current study, we sought to investigate how auditory processing of neutral sounds is affected by the mood state of the listener. This was tested in two experiments by measuring masked auditory detection thresholds before and after a standard mood-induction procedure. In the first experiment (N = 76), mood was induced by imagining a mood-appropriate event combined with listening to mood-inducing music. In the second experiment (N = 80), imagining was combined with affective picture viewing to exclude any possibility of confounding the results by acoustic properties of the music. In both experiments, the thresholds were determined by means of an adaptive staircase tracking method in a two-interval forced-choice task. Masked detection thresholds were compared between participants in four different moods (calm, happy, sad, and anxious), which enabled differentiation of mood effects along the dimensions of arousal and pleasure. Results of the two experiments were analyzed both separately and in a combined analysis. The first experiment showed that, while there was no impact of pleasure level on the masked threshold, lower arousal was associated with a lower threshold (higher masked sensitivity). However, as indicated by an interaction effect between experiment and arousal, arousal had a different effect on the threshold in Experiment 2, which showed a trend of arousal in the opposite direction. These results show that the effect of arousal on masked auditory sensitivity may depend on the modality of the mood-inducing stimuli. As clear conclusions regarding the genuineness of the arousal effect on the masked threshold cannot be drawn, suggestions for further research that could clarify this issue are provided.
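An adaptive staircase of the kind described can be sketched with a two-down/one-up rule, which converges on the ~70.7%-correct point of the psychometric function; the simulated listener's threshold, slope, and the step size below are hypothetical:

```python
"""Sketch: a two-down/one-up adaptive staircase run against a simulated
two-interval forced-choice (2IFC) observer. Level decreases after two
consecutive correct responses and increases after each error; the mean of
the later reversal levels estimates the ~70.7%-correct point."""
import random
from statistics import NormalDist, mean

random.seed(7)
TRUE_THRESHOLD = 30.0    # simulated listener's threshold (arbitrary units)
SLOPE = 5.0

def p_correct(level):
    # 2IFC psychometric function: guessing floor of 0.5, rising with level.
    return 0.5 + 0.5 * NormalDist(TRUE_THRESHOLD, SLOPE).cdf(level)

def run_staircase(start=50.0, step=2.0, n_reversals=12):
    level, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = random.random() < p_correct(level)
        if correct:
            correct_run += 1
            if correct_run == 2:          # two correct in a row -> harder
                correct_run = 0
                if direction == +1:       # was moving up: a reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:
            correct_run = 0
            if direction == -1:           # was moving down: a reversal
                reversals.append(level)
            direction = +1
            level += step
    return mean(reversals[2:])            # discard early reversals

est = run_staircase()
print(f"estimated ~70.7%-correct level: {est:.1f}")
```

Real threshold procedures typically also shrink the step size after the first reversals; that refinement is omitted to keep the sketch short.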

  4. Auditory Perception in an Open Space: Detection and Recognition

    Science.gov (United States)

    2015-06-01

    line of sight provides attenuation of about 5 dB or more across most of the auditory frequency range, and higher barriers can attenuate sound by as much...distance estimation to, various sound sources spread across a large open field. This report presents results of the detection and recognition tasks. Both...acoustic (target sound and noise level) and meteorological (wind direction and strength, temperature, atmospheric pressure, humidity) data were

  5. The effect of psychological stress and expectation on auditory perception: A signal detection analysis.

    Science.gov (United States)

    Hoskin, Robert; Hunter, Mike D; Woodruff, Peter W R

    2014-11-01

    Both psychological stress and predictive signals relating to expected sensory input are believed to influence perception, an influence which, when disrupted, may contribute to the generation of auditory hallucinations. The effect of stress and semantic expectation on auditory perception was therefore examined in healthy participants using an auditory signal detection task requiring the detection of speech from within white noise. Trait anxiety was found to predict the extent to which stress influenced response bias, resulting in more anxious participants adopting a more liberal criterion, and therefore experiencing more false positives, when under stress. While semantic expectation was found to increase sensitivity, its presence also generated a shift in response bias towards reporting a signal, suggesting that the erroneous perception of speech became more likely. These findings provide a potential cognitive mechanism that may explain the impact of stress on hallucination-proneness, by suggesting that stress has the tendency to alter response bias in highly anxious individuals. These results also provide support for the idea that top-down processes such as those relating to semantic expectation may contribute to the generation of auditory hallucinations.
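The sensitivity and response-bias measures analyzed above are the standard signal-detection quantities d′ and criterion c; a minimal sketch of their computation (the trial counts and the log-linear correction are illustrative choices, not the study's data):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection sensitivity (d') and criterion (c).
    A log-linear correction keeps z-scores finite at rates of 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# A more liberal criterion (more "yes" responses, more false positives)
# shows up as a lower value of c:
d1, c1 = sdt_measures(hits=70, misses=30, false_alarms=10, correct_rejections=90)
d2, c2 = sdt_measures(hits=85, misses=15, false_alarms=40, correct_rejections=60)
```

A stress-induced shift toward reporting speech in noise, as described above, would appear as exactly this kind of drop in c with little or no change in d′.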

  6. Speech-specific categorical perception deficit in autism: An Event-Related Potential study of lexical tone processing in Mandarin-speaking children

    Science.gov (United States)

    Wang, Xiaoyue; Wang, Suiping; Fan, Yuebo; Huang, Dan; Zhang, Yang

    2017-01-01

    Recent studies reveal that tonal language speakers with autism have enhanced neural sensitivity to pitch changes in nonspeech stimuli but not to lexical tone contrasts in their native language. The present ERP study investigated whether the distinct pitch processing pattern for speech and nonspeech stimuli in autism was due to a speech-specific deficit in categorical perception of lexical tones. A passive oddball paradigm was adopted to examine two groups (16 in the autism group and 15 in the control group) of Chinese children’s Mismatch Responses (MMRs) to equivalent pitch deviations representing within-category and between-category differences in speech and nonspeech contexts. To further examine group-level differences in the MMRs to categorical perception of speech/nonspeech stimuli or lack thereof, neural oscillatory activities at the single trial level were further calculated with the inter-trial phase coherence (ITPC) measure for the theta and beta frequency bands. The MMR and ITPC data from the children with autism showed evidence for lack of categorical perception in the lexical tone condition. In view of the important role of lexical tones in acquiring a tonal language, the results point to the necessity of early intervention for the individuals with autism who show such a speech-specific categorical perception deficit. PMID:28225070
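The inter-trial phase coherence (ITPC) measure used above is the length of the mean unit-length phase vector across trials at a given time-frequency point. A minimal sketch with simulated phase angles (not real EEG; the trial counts are illustrative):

```python
import cmath
import math
import random

def itpc(trial_phases):
    """Inter-trial phase coherence: the length of the mean unit-length
    phase vector across trials, ranging from 0 (random phase) to 1
    (perfect phase locking). trial_phases holds one phase angle in
    radians per trial, for a single time-frequency point."""
    vectors = [cmath.exp(1j * phase) for phase in trial_phases]
    return abs(sum(vectors) / len(vectors))

random.seed(1)
locked = [0.7] * 200                                    # identical phase on every trial
jittered = [random.uniform(0, 2 * math.pi) for _ in range(200)]
```

With n trials of fully random phase, ITPC hovers near 1/sqrt(n) (about 0.07 here), while perfectly phase-locked trials give 1; group differences in theta- or beta-band ITPC, as in the study above, index how consistently the stimulus resets oscillatory phase.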

  7. Auditory Perception and Production of Speech Feature Contrasts by Pediatric Implant Users.

    Science.gov (United States)

    Mahshie, James; Core, Cynthia; Larsen, Michael D

    2015-01-01

    The aim of the present research is to examine the relations between auditory perception and production of specific speech contrasts by children with cochlear implants (CIs) who received their implants before 3 years of age and to examine the hierarchy of abilities for perception and production for consonant and vowel features. The following features were examined: vowel height, vowel place, consonant place of articulation (front and back), continuance, and consonant voicing. Fifteen children (mean age = 4;0 and range 3;2 to 5;11) with a minimum of 18 months of experience with their implants and no additional known disabilities served as participants. Perception of feature contrasts was assessed using a modification of the Online Imitative Speech Pattern Contrast test, which uses imitation to assess speech feature perception. Production was examined by having the children name a series of pictures containing consonant and vowel segments that reflected contrasts of each feature. For five of the six feature contrasts, production accuracy was higher than perception accuracy. There was also a significant and positive correlation between accuracy of production and auditory perception for each consonant feature. This correlation was not found for vowels, owing largely to the overall high perception and production scores attained on the vowel features. The children perceived vowel feature contrasts more accurately than consonant feature contrasts. On average, the children had lower perception scores for Back Place and Continuance feature contrasts than for Anterior Place and Voicing contrasts. For all features, the median production scores were 100%; the majority of the children were able to accurately and consistently produce the feature contrasts. The mean production scores for features reflect greater score variability for consonant feature production than for vowel features. Back Place of articulation for back consonants and Continuance contrasts appeared to be the

  8. [A comparative analysis of changes in short EEG segments during music perception, based on event-related synchronization/desynchronization and wavelet synchrony].

    Science.gov (United States)

    Oknina, L B; Kuptsova, S V; Romanov, A S; Masherov, E L; Kuznetsova, O A; Sharova, E V

    2012-01-01

    The aim of the present pilot study was to analyze changes in short EEG segments, recorded from 32 sites, during the perception of musical melodies by healthy subjects, as a function of logical (recognition) and emotional (liked/disliked) evaluations of the melodies. To this end, event-related synchronization/desynchronization and wavelet-based synchrony of EEG responses were compared in 31 healthy subjects aged 18 to 60 years. During logical evaluation, recognition of a melody was accompanied by event-related desynchronization in the left fronto-parieto-temporal area. During emotional evaluation, pleasant melodies were accompanied by event-related synchronization in the left fronto-temporal area, unpleasant melodies by desynchronization in the temporal area, and melodies that evoked no emotional response by desynchronization in the occipital area. Analysis of wavelet synchrony, which characterizes task-related changes in the interaction of cortical zones, revealed that the clearest topographical distinctions concerned the type of processing of the heard music: logical (recognized vs. not recognized) or emotional (liked vs. disliked). During emotional evaluation, changes in interhemispheric connections between associative cortical zones (central, frontal, temporal) were more pronounced, whereas during logical evaluation, inter- and intrahemispheric connections of the projection zones of the auditory analyzer (temporal area) predominated. It is suggested that the observed event-related synchronization/desynchronization most likely reflects an activational component of the evaluation of musical fragments, whereas the wavelet analysis is informative about the character of the processing of the musical stimulus.
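Event-related synchronization/desynchronization, as reported in the study above, is conventionally expressed as the percent change in band power relative to a pre-stimulus baseline (Pfurtscheller's convention); a minimal sketch with illustrative numbers:

```python
def erd_ers_percent(baseline_power, event_power):
    """Event-related (de)synchronization as percent band-power change
    relative to a pre-stimulus baseline: negative values indicate
    desynchronization (ERD), positive values synchronization (ERS)."""
    return 100.0 * (event_power - baseline_power) / baseline_power

# Alpha-band power falling from 12 to 9 (arbitrary units) is a 25% ERD:
alpha_change = erd_ers_percent(baseline_power=12.0, event_power=9.0)
```

In practice the band power in each condition is averaged over trials and electrodes within a region before this ratio is taken, so that positive and negative changes map onto the synchronization and desynchronization effects described above.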

  9. Neural correlates of face and object perception in an awake chimpanzee (Pan troglodytes) examined by scalp-surface event-related potentials.

    Directory of Open Access Journals (Sweden)

    Hirokata Fukushima

    BACKGROUND: The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking. METHODOLOGY/PRINCIPAL FINDINGS: In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post stimulus, and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, we found that faces did not elicit a peak in the latency range of 150-200 ms in either experiment. CONCLUSIONS/SIGNIFICANCE: Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species.

  10. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  11. Maintaining realism in auditory length-perception experiments

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    2005-01-01

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length-estimation experiment. A comparison of the length-estimation accuracy for the three presentation methods indicates that the choice of presentation method is important for maintaining realism and for maintaining the acoustic cues utilized by listeners in perceiving length.

  12. Modeling auditory perception of individual hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    showed that, in most cases, the reduced or absent cochlear compression, associated with outer hair-cell loss, quantitatively accounts for broadened auditory filters, while a combination of reduced compression and reduced inner hair-cell function accounts for decreased sensitivity and slower recovery from...... selectivity. Three groups of listeners were considered: (a) normal hearing listeners; (b) listeners with a mild-to-moderate sensorineural hearing loss; and (c) listeners with a severe sensorineural hearing loss. A fixed set of model parameters were derived for each hearing-impaired listener. The simulations...

  14. Perceptual constancy in auditory perception of distance to railway tracks.

    Science.gov (United States)

    De Coensel, Bert; Nilsson, Mats E; Berglund, Birgitta; Brown, A L

    2013-07-01

    Distance to a sound source can be accurately estimated solely from auditory information. With a sound source such as a train that is passing by at a relatively large distance, the most important auditory information for the listener for estimating its distance consists of the intensity of the sound, spectral changes in the sound caused by air absorption, and the motion-induced rate of change of intensity. However, these cues are relative, because prior information/experience of the sound source (its source power, its spectrum, and the typical speed at which it moves) is required for such distance estimates. This paper describes two listening experiments that allow investigation of further prior contextual information taken into account by listeners, viz., whether they are indoors or outdoors. When asked to estimate the distance to the track of a railway, listeners assessing sounds heard inside a dwelling based their distance estimates on the expected train passby sound level outdoors rather than on the passby sound level actually experienced indoors. This form of perceptual constancy may have consequences for the assessment of annoyance caused by railway noise.
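The intensity and air-absorption cues discussed above can be sketched with a simple propagation model: spherical spreading loses 6 dB per doubling of distance, and air absorption adds a roughly linear excess loss that grows with frequency. The absorption coefficient below is an illustrative mid-frequency value, not a figure from the study:

```python
import math

def level_at_distance(source_level_db, distance_m, ref_distance_m=1.0,
                      air_absorption_db_per_m=0.005):
    """Received sound level for a distant source under spherical
    spreading (-6 dB per doubling of distance) plus a linear
    air-absorption term. Because real absorption rises steeply with
    frequency, distant sources also sound duller, which is the
    spectral distance cue described above."""
    spreading = 20.0 * math.log10(distance_m / ref_distance_m)
    absorption = air_absorption_db_per_m * (distance_m - ref_distance_m)
    return source_level_db - spreading - absorption
```

Doubling the distance from 100 m to 200 m costs about 6 dB of spreading loss plus about 0.5 dB of absorption at this coefficient, which is why both cues only yield an absolute distance once the listener assumes a source power.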

  15. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

    Science.gov (United States)

    Bernstein, Lynne E.; Auer, Edward T.; Eberhardt, Silvio P.; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520

  16. Neuromodulatory Effects of Auditory Training and Hearing Aid Use on Audiovisual Speech Perception in Elderly Individuals

    Science.gov (United States)

    Yu, Luodi; Rao, Aparna; Zhang, Yang; Burton, Philip C.; Rishiq, Dania; Abrams, Harvey

    2017-01-01

    Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one received a training program named ReadMyQuipsTM (RMQ) targeting speechreading during the second half of the study period for 4 weeks. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROI), including auditory cortex and visual cortex for uni-sensory processing and superior temporal sulcus (STS) for AV integration, were identified for each person through an independent functional localizer task. The results showed experience-dependent changes involving the ROIs of auditory cortex and STS, as well as functional connectivity between uni-sensory ROIs and STS, from pretest to posttest in both cases. These data provide initial evidence for malleable, experience-driven cortical function supporting AV speech perception in elderly hearing-impaired people and call for further studies with a much larger sample and systematic controls to fill the knowledge gap in understanding brain plasticity associated with auditory rehabilitation in the aging population. PMID:28270763

  17. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert Ramkhalawansingh

    2016-04-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  18. Meaningful auditory information enhances perception of visual biological motion.

    Science.gov (United States)

    Arrighi, Roberto; Marini, Francesco; Burr, David

    2009-04-30

    Robust perception requires efficient integration of information from our various senses. Much recent electrophysiology points to neural areas responsive to multisensory stimulation, particularly audiovisual stimulation. However, psychophysical evidence for functional integration of audiovisual motion has been ambiguous. In this study we measure perception of an audiovisual form of biological motion, tap dancing. The results show that the audio tap information interacts with visual motion information, but only when in synchrony, demonstrating a functional combination of audiovisual information in a natural task. The advantage of multimodal combination was better than the optimal maximum likelihood prediction.
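The "optimal maximum likelihood prediction" against which the audiovisual advantage was compared above is the standard reliability-weighted cue-combination rule; a minimal sketch under independent-Gaussian assumptions (the numbers are illustrative, not the study's data):

```python
def mle_combination(est_a, var_a, est_v, var_v):
    """Maximum-likelihood combination of two independent Gaussian cues:
    each estimate is weighted by its reliability (inverse variance),
    and the predicted combined variance is smaller than either
    single-cue variance."""
    w_a = var_v / (var_a + var_v)
    w_v = var_a / (var_a + var_v)
    combined_est = w_a * est_a + w_v * est_v
    combined_var = (var_a * var_v) / (var_a + var_v)
    return combined_est, combined_var

# The less variable (here, visual) cue dominates the combined estimate:
est, var = mle_combination(est_a=10.0, var_a=4.0, est_v=12.0, var_v=1.0)
```

Performance that beats this predicted variance, as reported above, is evidence that the bimodal condition does more than statistically optimal pooling of two independent estimates.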

  19. Beat Gestures Modulate Auditory Integration in Speech Perception

    Science.gov (United States)

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  20. Tactile enhancement of auditory and visual speech perception in untrained perceivers

    Science.gov (United States)

    Gick, Bryan; Jóhannsdóttir, Kristín M.; Gibraiel, Diana; Mühlbauer, Jeff

    2008-01-01

    A single pool of untrained subjects was tested for interactions across two bimodal perception conditions: audio-tactile, in which subjects heard and felt speech, and visual-tactile, in which subjects saw and felt speech. Identifications of English obstruent consonants were compared in bimodal and no-tactile baseline conditions. Results indicate that tactile information enhances speech perception by about 10 percent, regardless of which other mode (auditory or visual) is active. However, within-subject analysis indicates that individual subjects who benefit more from tactile information in one cross-modal condition tend to benefit less from tactile information in the other. PMID:18396924

  1. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  2. Mechanisms Underlying Auditory Hallucinations—Understanding Perception without Stimulus

    Directory of Open Access Journals (Sweden)

    Sukhwinder S. Shergill

    2013-04-01

    Auditory verbal hallucinations (AVH) are a common phenomenon, occurring in the “healthy” population as well as in several mental illnesses, most notably schizophrenia. Current thinking supports a spectrum conceptualisation of AVH: several neurocognitive hypotheses of AVH have been proposed, including the “feed-forward” model of failure to provide appropriate information to somatosensory cortices so that stimuli appear unbidden, and an “aberrant memory model” implicating deficient memory processes. Neuroimaging and connectivity studies are in broad agreement with these with a general dysconnectivity between frontotemporal regions involved in language, memory and salience properties. Disappointingly many AVH remain resistant to standard treatments and persist for many years. There is a need to develop novel therapies to augment existing pharmacological and psychological therapies: transcranial magnetic stimulation has emerged as a potential treatment, though more recent clinical data has been less encouraging. Our understanding of AVH remains incomplete though much progress has been made in recent years. We herein provide a broad overview and review of this.

  3. Mechanisms Underlying Auditory Hallucinations—Understanding Perception without Stimulus

    Science.gov (United States)

    Tracy, Derek K.; Shergill, Sukhwinder S.

    2013-01-01

    Auditory verbal hallucinations (AVH) are a common phenomenon, occurring in the “healthy” population as well as in several mental illnesses, most notably schizophrenia. Current thinking supports a spectrum conceptualisation of AVH: several neurocognitive hypotheses of AVH have been proposed, including the “feed-forward” model of failure to provide appropriate information to somatosensory cortices so that stimuli appear unbidden, and an “aberrant memory model” implicating deficient memory processes. Neuroimaging and connectivity studies are in broad agreement with these with a general dysconnectivity between frontotemporal regions involved in language, memory and salience properties. Disappointingly many AVH remain resistant to standard treatments and persist for many years. There is a need to develop novel therapies to augment existing pharmacological and psychological therapies: transcranial magnetic stimulation has emerged as a potential treatment, though more recent clinical data has been less encouraging. Our understanding of AVH remains incomplete though much progress has been made in recent years. We herein provide a broad overview and review of this. PMID:24961419

  4. Characteristics of Auditory Evoked Event-Related Potentials in Patients of Auditory Disability with Normal Hearing

    Institute of Scientific and Technical Information of China (English)

    梁茂金; 郑亿庆; 杨海弟; 张志刚; 陈俊明

    2011-01-01

    Objective To investigate the characteristics of auditory evoked event-related potentials in patients of auditory disability with normal hearing. Methods Ten patients complaining of difficulty in conversation, especially in noisy backgrounds, were studied; 20 sex- and age-matched healthy volunteers without hearing problems served as controls. Both the patients and the volunteers had normal pure-tone thresholds, normal middle-ear status, and normal distortion product otoacoustic emissions (DPOAE) and auditory brainstem responses (ABR). All received 128-channel ERP testing with speech stimuli, in quiet and in noisy backgrounds, respectively. Results Both the patients and the controls showed P1-N1-P2 and MMN components in quiet. In the noisy background, 2 patients showed neither P1-N1-P2 nor MMN. In quiet, the MMN latencies of the patients (221.8±23.9 ms) were significantly prolonged compared with those of the controls (200.4±28.1 ms; P=0.049). In the noisy background, the MMN latencies of patients and controls were 267.1±27.8 ms and 233.4±25.8 ms, respectively, and the difference was statistically significant (P=0.003). There were no statistical differences in the latencies or amplitudes of P1-N1-P2, or in the amplitudes of MMN, between patients and controls. Conclusion Prolonged MMN latencies in patients of auditory disability with normal hearing may indicate the possible existence of central auditory processing disorders.

  5. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli, comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERP), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli may be processed or integrated earlier, even though the auditory stimuli are task-irrelevant. Furthermore, audiovisual integration in late latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  6. Auditory Processing and Speech Perception in Children with Specific Language Impairment: Relations with Oral Language and Literacy Skills

    Science.gov (United States)

    Vandewalle, Ellen; Boets, Bart; Ghesquiere, Pol; Zink, Inge

    2012-01-01

    This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of 6 years 3 months to 6 years 8 months-old children attending grade 1: (1) children with specific language impairment (SLI) and literacy delay…

  7. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information.

  8. PERCEVAL: a Computer-Driven System for Experimentation on Auditory and Visual Perception

    CERN Document Server

    André, Carine; Cavé, Christian; Teston, Bernard

    2007-01-01

    Since perception tests are highly time-consuming, there is a need to automate as many operations as possible, such as stimulus generation, procedure control, perception testing, and data analysis. The computer-driven system we are presenting here meets these objectives. To achieve great flexibility, the tests are controlled by scripts. The system's core software resembles that of a lexical-syntactic analyzer, which reads and interprets script files sent to it. The execution sequence (trial) is modified in accordance with the commands and data received. This type of operation provides a great deal of flexibility and supports a wide variety of tests, such as auditory-lexical decision, phoneme monitoring, gating, phonetic categorization, word identification, voice quality assessment, etc. To achieve good performance, we paid careful attention to timing accuracy, which is the greatest problem in computerized perception tests.
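The core described above reads and interprets script files and builds the trial sequence from the commands it encounters. As a rough illustration of that pattern only (PERCEVAL's actual script syntax is not given in the abstract; the `set` and `trial` commands below are invented), a miniature script interpreter might look like:

```python
# Each script line is a command; the interpreter accumulates configuration
# and builds the trial sequence it will later execute. The "set"/"trial"
# command names are hypothetical, not PERCEVAL's real syntax.
def parse_script(text):
    trials, config = [], {}
    for raw in text.splitlines():
        line = raw.split('#', 1)[0].strip()     # drop comments and whitespace
        if not line:
            continue
        cmd, _, arg = line.partition(' ')
        if cmd == 'set':                        # e.g. "set isi 500"
            key, _, val = arg.partition(' ')
            config[key] = int(val)
        elif cmd == 'trial':                    # e.g. "trial ba.wav"
            trials.append({'stimulus': arg.strip(), **config})
        else:
            raise ValueError(f'unknown command: {cmd}')
    return trials

script = """
set isi 500
trial ba.wav
trial da.wav   # a phoneme-monitoring item
"""
print(parse_script(script))
# → [{'stimulus': 'ba.wav', 'isi': 500}, {'stimulus': 'da.wav', 'isi': 500}]
```

Because each trial snapshots the current configuration, a later `set` line changes only the trials that follow it, which is one way such script-driven systems stay flexible.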

  9. Late auditory event-related potential (P300) in Down syndrome patients

    Directory of Open Access Journals (Sweden)

    Carla Patrícia Hernandez Alves Ribeiro César

    2010-04-01

    Down syndrome is caused by trisomy of chromosome 21 and is associated with central auditory processing deficits, learning disability and, probably, early-onset Alzheimer's disease. AIM: To evaluate the latencies and amplitudes of the late auditory event-related potential (P300) and their alterations in young adults with Down syndrome. MATERIALS AND METHODS: Prospective case study. P300 latencies and amplitudes were evaluated in 17 individuals with Down syndrome and 34 healthy individuals. RESULTS: P300 latencies (N1, P2, N2 and P3) were prolonged and the N2-P3 amplitude was reduced in individuals with Down syndrome compared with the control group. CONCLUSION: In young adults with Down syndrome, the N1, P2, N2 and P3 latencies of the late auditory event-related potential (P300) are prolonged and the N2-P3 amplitude is significantly reduced, suggesting impaired integration of the auditory association area with cortical and subcortical areas of the central nervous system.

  10. Harmony perception and regularity of spike trains in a simple auditory model

    Science.gov (United States)

    Spagnolo, B.; Ushakov, Y. V.; Dubkov, A. A.

    2013-01-01

    A probabilistic approach for investigating the phenomena of dissonance and consonance in a simple auditory sensory model, composed of two sensory neurons and one interneuron, is presented. We calculated the interneuron's firing statistics, that is, the interspike interval statistics of the spike train at the output of the interneuron, for consonant and dissonant inputs in the presence of additional "noise", representing random signals from other, nearby neurons and from the environment. We find that blurry interspike interval distributions (ISIDs) characterize dissonant chords, while quite regular ISIDs characterize consonant chords. The informational entropy of the non-Markov spike train at the output of the interneuron, and its dependence on the frequency ratio of the input sinusoidal signals, is estimated. We introduce a measure of spike-train regularity and suggest high or low regularity of the auditory system's spike trains as an indicator of a feeling of harmony or disharmony, respectively, during sound perception.
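The regularity idea above — regular ISI distributions for consonant input, blurry ones for dissonant input — can be made concrete with a toy estimator. This is a sketch only: the binned Shannon entropy, the `isi_entropy` name, and the 1 ms bin width are illustrative assumptions, not the authors' exact estimator.

```python
import math

def isi_entropy(spike_times, bin_width=0.001):
    """Shannon entropy (bits) of the binned interspike-interval distribution:
    low entropy = regular (consonant-like), high entropy = blurry (dissonant-like)."""
    isis = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]
    counts = {}
    for isi in isis:
        b = round(isi / bin_width)        # assign each ISI to a 1 ms bin
        counts[b] = counts.get(b, 0) + 1
    n = len(isis)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A perfectly periodic train vs. an irregular one (invented spike times):
regular = [0.01 * k for k in range(100)]                       # constant 10 ms ISI
irregular = [0.0, 0.013, 0.021, 0.04, 0.047, 0.06, 0.081, 0.09, 0.105, 0.13]
assert isi_entropy(regular) < isi_entropy(irregular)
```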

  11. Auditory perception and syntactic cognition: brain activity-based decoding within and across subjects.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Kalberlah, Christian; Haynes, John-Dylan; Friederici, Angela D

    2012-05-01

    The present magnetoencephalography study investigated whether the brain states of early syntactic and auditory-perceptual processes can be decoded from single-trial recordings with a multivariate pattern classification approach. In particular, it was investigated whether the early neural activation patterns in response to rule violations in basic auditory perception and in high cognitive processes (syntax) reflect a functional organization that largely generalizes across individuals or is subject-specific. To this end, subjects were auditorily presented with correct sentences, syntactically incorrect sentences, correct sentences including an interaural time difference change, and sentences containing both violations. For the analysis, brain state decoding was carried out within and across subjects with three pairwise classifications. Neural patterns elicited by each of the violation sentences were separately classified with the patterns elicited by the correct sentences. The results revealed the highest decoding accuracies over temporal cortex areas for all three classification types. Importantly, both the magnitude and the spatial distribution of decoding accuracies for the early neural patterns were very similar for within-subject and across-subject decoding. At the same time, across-subject decoding suggested a hemispheric bias, with the most consistent patterns in the left hemisphere. Thus, the present data show that not only auditory-perceptual processing brain states but also cognitive brain states of syntactic rule processing can be decoded from single-trial brain activations. Moreover, the findings indicate that the neural patterns in response to syntactic cognition and auditory perception reflect a functional organization that is highly consistent across individuals.

  12. Voluntary movement affects simultaneous perception of auditory and tactile stimuli presented to a non-moving body part.

    Science.gov (United States)

    Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro

    2016-09-13

    The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part.

  13. Voluntary movement affects simultaneous perception of auditory and tactile stimuli presented to a non-moving body part

    Science.gov (United States)

    Hao, Qiao; Ora, Hiroki; Ogawa, Ken-ichiro; Ogata, Taiki; Miyake, Yoshihiro

    2016-01-01

    The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part. PMID:27622584

  14. Auditory Processing in Specific Language Impairment (SLI): Relations With the Perception of Lexical and Phrasal Stress.

    Science.gov (United States)

    Richards, Susan; Goswami, Usha

    2015-08-01

    We investigated whether impaired acoustic processing is a factor in developmental language disorders. The amplitude envelope of the speech signal is known to be important in language processing. We examined whether impaired perception of amplitude envelope rise time is related to impaired perception of lexical and phrasal stress in children with specific language impairment (SLI). Twenty-two children aged between 8 and 12 years participated in this study. Twelve had SLI; 10 were typically developing controls. All children completed psychoacoustic tasks measuring rise time, intensity, frequency, and duration discrimination. They also completed 2 linguistic stress tasks measuring lexical and phrasal stress perception. The SLI group scored significantly below the typically developing controls on both stress perception tasks. Performance on stress tasks correlated with individual differences in auditory sensitivity. Rise time and frequency thresholds accounted for the most unique variance. Digit Span also contributed to task success for the SLI group. The SLI group had difficulties with both acoustic and stress perception tasks. Our data suggest that poor sensitivity to amplitude rise time and sound frequency significantly contributes to the stress perception skills of children with SLI. Other cognitive factors such as phonological memory are also implicated.

  15. Perception and psychological evaluation for visual and auditory environment based on the correlation mechanisms

    Science.gov (United States)

    Fujii, Kenji

    2002-06-01

    In this dissertation, the correlation mechanism is introduced in modeling processes in visual perception. It has been well described that the correlation mechanism is effective for describing subjective attributes in auditory perception. The main result is that it is possible to apply the correlation mechanism to processes in temporal vision and spatial vision, as well as in audition. (1) A psychophysical experiment was performed on subjective flicker rates for complex waveforms. A remarkable result is that the phenomenon of the missing fundamental is found in temporal vision, analogous to auditory pitch perception. This implies the existence of a correlation mechanism in the visual system. (2) For spatial vision, autocorrelation analysis provides useful measures for describing three primary perceptual properties of visual texture: contrast, coarseness, and regularity. Another experiment showed that the degree of regularity is a salient cue for texture preference judgments. (3) In addition, the autocorrelation function (ACF) and interaural cross-correlation function (IACF) were applied to the analysis of the temporal and spatial properties of environmental noise. It was confirmed that the acoustical properties of aircraft noise and traffic noise are well described. These analyses provided useful parameters, extracted from the ACF and IACF, for assessing subjective annoyance with noise. Thesis advisor: Yoichi Ando. Copies of this thesis written in English can be obtained from Junko Atagi, 6813 Mosonou, Saijo-cho, Higashi-Hiroshima 739-0024, Japan. E-mail address: atagi@urban.ne.jp.
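As a sketch of the analysis tools named above, a normalized ACF and a peak cross-correlation (IACC-style) measure might be computed as follows. The discrete-lag formulation and function names are illustrative assumptions, not the dissertation's exact definitions, which are typically evaluated over a running time window.

```python
import numpy as np

def normalized_acf(x, max_lag):
    """Normalized autocorrelation: phi(tau) = sum_n x[n]*x[n+tau] / sum_n x[n]^2."""
    x = np.asarray(x, dtype=float)
    e0 = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / e0
                     for k in range(max_lag + 1)])

def iacc(left, right, max_lag):
    """IACC-style measure: peak of the normalized cross-correlation between
    two channels within +/- max_lag samples."""
    l, r = np.asarray(left, float), np.asarray(right, float)
    norm = np.sqrt(np.dot(l, l) * np.dot(r, r))
    peaks = []
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            c = np.dot(l[:len(l) - k], r[k:])
        else:
            c = np.dot(l[-k:], r[:len(r) + k])
        peaks.append(abs(c) / norm)
    return max(peaks)

fs = 1000
x = np.sin(2 * np.pi * 100 * np.arange(fs) / fs)   # 100 Hz tone, 1 s
acf = normalized_acf(x, 20)
# The ACF shows a strong peak again at the 10-sample period of the 100 Hz tone.
```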

  16. Hierarchical organization of speech perception in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Colin Humphries

    2014-12-01

    Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region, in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity to non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.

  17. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  18. Short- and long-term habituation of auditory event-related potentials in the rat [v1; ref status: indexed, http://f1000r.es/1l3]

    Directory of Open Access Journals (Sweden)

    Kestutis Gurevicius

    2013-09-01

    An auditory oddball paradigm in humans generates a long-duration cortical negative potential, often referred to as mismatch negativity. Similar negativity has been documented in monkeys and cats, but it is controversial whether mismatch negativity also exists in awake rodents. To this end, we recorded cortical and hippocampal evoked responses in rats during alert immobility under a typical passive oddball paradigm that yields mismatch negativity in humans. The standard stimulus was a 9 kHz tone and the deviant either a 7 or an 11 kHz tone in the first condition. We found no evidence of a sustained potential shift when comparing evoked responses to standard and deviant stimuli. Instead, we found repetition-induced attenuation of the P60 component of the combined evoked response in the cortex, but not in the hippocampus. The attenuation extended over three days of recording and disappeared after 20 intervening days of rest. Reversal of the standard and deviant tones resulted in a robust enhancement of the N40 component, not only in the cortex but also in the hippocampus. Responses to standard and deviant stimuli were affected similarly. Finally, we tested the effect of scopolamine in this paradigm. Scopolamine attenuated the cortical N40 and P60 as well as the hippocampal P60 components, but had no specific effect on the deviant response. We conclude that in an oddball paradigm the rat demonstrates repetition-induced attenuation of mid-latency responses, which resembles attenuation of the N1 component of the human auditory evoked potential, but no mismatch negativity.

  19. Sensory entrainment mechanisms in auditory perception: neural synchronization and cortico-striatal activation

    Directory of Open Access Journals (Sweden)

    Catia M Sameiro-Barbosa

    2016-08-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations and, potentially, neural activation in the cortico-striatal system.

  20. Association of auditory steady state responses with perception of temporal modulations and speech in noise.

    Science.gov (United States)

    Manju, Venugopal; Gopika, Kizhakke Kodiyath; Arivudai Nambi, Pitchai Muthu

    2014-01-01

    Amplitude modulations in speech convey important acoustic information for speech perception. The auditory steady-state response (ASSR) is thought to be a physiological correlate of amplitude modulation perception. Limited research is available exploring the association between the ASSR and modulation detection ability, as well as speech perception. The correlation of modulation detection thresholds (MDTs) and speech perception in noise with the ASSR was investigated in two experiments. Thirty normal-hearing individuals and 11 normal-hearing individuals within the age range of 18-24 years participated in experiments 1 and 2, respectively. In the first experiment, MDTs were measured using the ASSR and a behavioral method at 60 Hz, 80 Hz, and 120 Hz modulation frequencies. The ASSR threshold was obtained by estimating the minimum modulation depth required to elicit an ASSR (ASSR-MDT). There was a positive correlation between behavioral MDT and ASSR-MDT at all modulation frequencies. In the second experiment, the ASSR for amplitude modulation (AM) sweeps at four different frequency ranges (30-40 Hz, 40-50 Hz, 50-60 Hz, and 60-70 Hz) was recorded. The speech recognition threshold in noise (SRTn) was estimated using a staircase procedure. There was a positive correlation between the amplitude of the ASSR for the AM sweep in the 30-40 Hz range and the SRTn. Results of the current study suggest that the ASSR provides substantial information about temporal modulation and speech perception.
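For concreteness, a sinusoidally amplitude-modulated tone of the kind used to probe modulation detection and the ASSR might be generated as below; the parameter values are illustrative, not the study's exact stimuli.

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, depth, dur_s, fs=44100):
    """Sinusoidally amplitude-modulated tone; modulation depth in [0, 1].
    depth = 0 gives a pure tone, depth = 1 gives 100% modulation."""
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# e.g. a 1 kHz carrier modulated at 80 Hz with 50% depth (an ASSR-style probe)
x = am_tone(1000.0, 80.0, 0.5, 1.0)
```

Lowering `depth` toward the smallest value that still elicits a response is, in outline, how a minimum-modulation-depth (ASSR-MDT) estimate works.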

  1. Influence of rhythmic grouping on duration perception: a novel auditory illusion.

    Directory of Open Access Journals (Sweden)

    Eveline Geiser

    This study investigated a potential auditory illusion in duration perception induced by rhythmic temporal contexts. Listeners with or without musical training performed a duration discrimination task for a silent period in a rhythmic auditory sequence. The critical temporal interval was presented either within a perceptual group or between two perceptual groups. We report the just-noticeable difference (difference limen, DL) for temporal intervals and the point of subjective equality (PSE) derived from individual psychometric functions based on performance of a two-alternative forced choice task. In musically untrained individuals, equal temporal intervals were perceived as significantly longer when presented between perceptual groups than within a perceptual group (109.25% versus 102.5% of the standard duration). Only the perceived duration of the between-group interval was significantly longer than its objective duration. Musically trained individuals did not show this effect. However, in both musically trained and untrained individuals, the relative difference limens for discriminating the comparison interval from the standard interval were larger in the between-groups condition than in the within-group condition (7.3% vs. 5.6% of the standard duration). Thus, rhythmic grouping affected sensitivity to duration changes in all listeners, with duration differences being harder to detect at boundaries of rhythm groups than within rhythm groups. Our results show for the first time that temporal Gestalt induces auditory duration illusions in typical listeners, but that musical experts are not susceptible to this effect of rhythmic grouping.

  2. Influence of rhythmic grouping on duration perception: a novel auditory illusion.

    Science.gov (United States)

    Geiser, Eveline; Gabrieli, John D E

    2013-01-01

    This study investigated a potential auditory illusion in duration perception induced by rhythmic temporal contexts. Listeners with or without musical training performed a duration discrimination task for a silent period in a rhythmic auditory sequence. The critical temporal interval was presented either within a perceptual group or between two perceptual groups. We report the just-noticeable difference (difference limen, DL) for temporal intervals and the point of subjective equality (PSE) derived from individual psychometric functions based on performance of a two-alternative forced choice task. In musically untrained individuals, equal temporal intervals were perceived as significantly longer when presented between perceptual groups than within a perceptual group (109.25% versus 102.5% of the standard duration). Only the perceived duration of the between-group interval was significantly longer than its objective duration. Musically trained individuals did not show this effect. However, in both musically trained and untrained individuals, the relative difference limens for discriminating the comparison interval from the standard interval were larger in the between-groups condition than in the within-group condition (7.3% vs. 5.6% of the standard duration). Thus, rhythmic grouping affected sensitivity to duration changes in all listeners, with duration differences being harder to detect at boundaries of rhythm groups than within rhythm groups. Our results show for the first time that temporal Gestalt induces auditory duration illusions in typical listeners, but that musical experts are not susceptible to this effect of rhythmic grouping.
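The DL and PSE reported above are derived from psychometric functions fitted to two-alternative forced choice data. A minimal sketch of that derivation, assuming a logistic function and a crude grid-search least-squares fit; the data values, grids, and the DL convention (50%-to-75% distance) below are invented for illustration:

```python
import math

def logistic(x, pse, slope):
    """Logistic psychometric function: P("comparison longer") vs. duration."""
    return 1.0 / (1.0 + math.exp(-(x - pse) / slope))

def fit_psychometric(stim, p_resp, pse_grid, slope_grid):
    """Grid-search least-squares fit; returns (PSE, DL).
    PSE is the 50% point; DL is the 50%-to-75% distance, slope * ln(3)."""
    _, pse, slope = min(
        (sum((logistic(x, p, s) - y) ** 2 for x, y in zip(stim, p_resp)), p, s)
        for p in pse_grid for s in slope_grid)
    return pse, slope * math.log(3)

# Hypothetical 2AFC data: proportion "comparison longer" per comparison duration (ms)
stim = [560, 580, 600, 620, 640, 660]
p_resp = [0.05, 0.20, 0.45, 0.70, 0.90, 0.98]
pse, dl = fit_psychometric(stim, p_resp,
                           pse_grid=range(560, 661),
                           slope_grid=[s / 2 for s in range(2, 61)])
```

A PSE above the standard duration would mean the comparison must be physically longer to feel equal, which is the direction of the between-group illusion described in the abstract.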

  3. The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons.

    Science.gov (United States)

    Tonelli, Alessia; Gori, Monica; Brayda, Luca

    2016-01-01

    We have recently shown that vision is important to improve spatial auditory cognition. In this study, we investigate whether touch is as effective as vision to create a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people - one experimental and one control group - in an auditory space bisection task. In the first group, the bisection task was performed three times: specifically, the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated for two consecutive times the space bisection task without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in the precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that no additional gain was obtained by vision cues after spatial tactile cues were internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing space auditory task as well as visual information. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.

  4. The influence of tactile cognitive maps on auditory space perception in sighted persons.

    Directory of Open Access Journals (Sweden)

    Alessia Tonelli

    2016-11-01

    We have recently shown that vision is important to improve spatial auditory cognition. In this study we investigate whether touch is as effective as vision to create a cognitive map of a soundscape. In particular we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people – one experimental and one control group – in an auditory space bisection task. In the first group the bisection task was performed three times: specifically, the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated for two consecutive times the space bisection task without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in the precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that no additional gain was obtained by vision cues after spatial tactile cues were internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing space auditory task as well as visual information. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.

  5. Auditory-visual speech perception in three- and four-year-olds and its relationship to perceptual attunement and receptive vocabulary.

    Science.gov (United States)

    Erdener, Doğu; Burnham, Denis

    2017-06-06

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception - lip-reading and visual influence in auditory-visual integration; (ii) the development of auditory speech perception and native language perceptual attunement; and (iii) the relationship between these and a language skill relevant at this age, receptive vocabulary. Visual speech perception skills improved even over this relatively short time period. However, regression analyses revealed that vocabulary was predicted by auditory-only speech perception, and native language attunement, but not by visual speech perception ability. The results suggest that, in contrast to infants and schoolchildren, in three- to four-year-olds the relationship between speech perception and language ability is based on auditory and not visual or auditory-visual speech perception ability. Adding these results to existing findings allows elaboration of a more complete account of the developmental course of auditory-visual speech perception.

  6. A bio-inspired auditory perception model for amplitude-frequency clustering (keynote Paper)

    Science.gov (United States)

    Arena, Paolo; Fortuna, Luigi; Frasca, Mattia; Ganci, Gaetana; Patane, Luca

    2005-06-01

    In this paper a model for auditory perception is introduced. The model is based on a network of integrate-and-fire and resonate-and-fire neurons and is aimed at controlling the phonotaxis behavior of a roving robot. The starting point is the model of phonotaxis in Gryllus bimaculatus: this model consists of four integrate-and-fire neurons and is able to discriminate the calling song of the male cricket and orient the robot towards the sound source. This paper extends the model to include amplitude-frequency clustering. The proposed spiking network shows different behaviors associated with different characteristics of the input signals (amplitude and frequency). The behavior implemented on the robot is similar to cricket behavior, where some frequencies are associated with the calling song of male crickets, while others indicate the presence of predators. The whole model for auditory perception is therefore devoted to controlling different responses (attractive or repulsive) depending on the input characteristics. The performance of the control system has been evaluated in several experiments carried out on a roving robot.
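
    The abstract gives no implementation details, so as a rough illustration of the kind of unit such a network is built from, a minimal leaky integrate-and-fire neuron (all parameter values hypothetical, not taken from the cited phonotaxis model) can be simulated as follows:

```python
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron and return spike times (s).

    All parameters are illustrative defaults, not values from the paper.
    """
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        # Euler step of the membrane equation dv/dt = (-(v - v_rest) + I) / tau
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:       # threshold crossing: emit a spike...
            spikes.append(i * dt)
            v = v_reset         # ...and reset the membrane potential
    return spikes

# A constant suprathreshold input produces regular firing; input amplitude
# maps onto spike rate, which is the kind of information an
# amplitude-frequency clustering network can exploit.
spikes = lif_spikes(np.full(1000, 2.0))
```

    A resonate-and-fire unit would add a second, oscillatory state variable, making the neuron selective for input frequency rather than amplitude alone.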

  7. Training Level Does Not Affect Auditory Perception of The Magnitude of Ball Spin in Table Tennis

    Directory of Open Access Journals (Sweden)

    Santos Daniel P. R.

    2017-01-01

    Full Text Available Identifying the trajectory and spin of the ball quickly and accurately is critical for good performance in table tennis. The aim of this study was to analyze the ability of table tennis players with different levels of training/experience to identify the magnitude of ball spin from the sound produced when the racket hits the ball. Four types of “forehand” contact sounds were recorded in the laboratory, defined as: Fast Spin (ball spinning forward at 140 r/s), Medium Spin (105 r/s), Slow Spin (84 r/s), and Flat Hit (less than 60 r/s). Thirty-four table tennis players of both sexes (24 men and 10 women) aged 18-40 years listened to the sounds and tried to identify the magnitude of the ball spin. The results revealed that in 50.9% of the cases the table tennis players were able to identify the ball spin, and the observed number of correct answers (10.2) was significantly higher (χ2 = 270.4, p < 0.05) than the number of correct answers that could occur by chance. On the other hand, the results did not show any relationship between the level of training/experience and auditory perception of the ball spin. This indicates that auditory information contributes to identification of the magnitude of the ball spin; however, it also reveals that, in table tennis, the level of training does not affect the auditory perception of the ball spin.
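
    The χ2 value reported above compares observed correct identifications against chance. A goodness-of-fit computation of that general kind (with hypothetical counts, since the per-listener trial structure is not given in the abstract) looks like:

```python
def chi_square_vs_chance(observed_correct, n_trials, p_chance):
    """Chi-square goodness-of-fit of correct/incorrect counts vs. chance."""
    expected_correct = n_trials * p_chance
    expected_wrong = n_trials * (1 - p_chance)
    observed_wrong = n_trials - observed_correct
    return ((observed_correct - expected_correct) ** 2 / expected_correct
            + (observed_wrong - expected_wrong) ** 2 / expected_wrong)

# Hypothetical example: 4 response alternatives (chance = 0.25),
# 20 trials, 11 correct.
chi2 = chi_square_vs_chance(11, 20, 0.25)  # approx. 9.6, well above chance
```

    The resulting statistic is compared against the critical value of the χ2 distribution for the appropriate degrees of freedom to decide whether performance exceeds guessing.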

  8. [Development of early auditory and speech perception skills within one year after cochlear implantation in prelingual deaf children].

    Science.gov (United States)

    Fu, Ying; Chen, Yuan; Xi, Xin; Hong, Mengdi; Chen, Aiting; Wang, Qian; Wong, Lena

    2015-04-01

    To investigate the development of early auditory capability and speech perception in prelingually deaf children after cochlear implantation, and to study the feasibility of currently available Chinese assessment instruments for evaluating early auditory skills and speech perception in hearing-impaired children. A total of 83 children with severe-to-profound prelingual hearing impairment participated in this study. Participants were divided into four groups according to age at surgery: A (1-2 years), B (2-3 years), C (3-4 years) and D (4-5 years). The auditory skills and speech perception ability of the CI children were evaluated by trained audiologists using the infant-toddler/meaningful auditory integration scale (IT-MAIS/MAIS) questionnaire, the Mandarin Early Speech Perception (MESP) test and the Mandarin Pediatric Speech Intelligibility (MPSI) test. The questionnaires were administered in face-to-face interviews with the parents or guardians. Each child was assessed before the operation and 3, 6 and 12 months after switch-on. After cochlear implantation, early auditory development and speech perception improved gradually. All MAIS/IT-MAIS scores showed a similar increasing trend with rehabilitation duration (F=5.743, P=0.007). Preoperative and postoperative MAIS/IT-MAIS scores of children in age group C (3-4 years) were higher than those of the other groups. Children who had longer hearing aid experience before the operation demonstrated higher MAIS/IT-MAIS scores than those with little or no hearing aid experience (F=4.947, P=0.000). The MESP test showed that children were not able to perceive speech as well as they could detect speech signals; however, as the duration of CI use increased, speech perception ability also improved substantially. Only about 40% of the subjects could be evaluated using the most difficult subtest of the MPSI in quiet at 12 months after switch-on. 
As MCR decreased, the proportion of children who could be tested

  9. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  10. Vocal development and auditory perception in CBA/CaJ mice

    Science.gov (United States)

    Radziwon, Kelly E.

    Mice are useful laboratory subjects because of their small size, their modest cost, and the fact that researchers have created many different strains to study a variety of disorders. In particular, researchers have found nearly 100 naturally occurring mouse mutations with hearing impairments. For these reasons, mice have become an important model for studies of human deafness. Although much is known about the genetic makeup and physiology of the laboratory mouse, far less is known about mouse auditory behavior. To fully understand the effects of genetic mutations on hearing, it is necessary to determine the hearing abilities of these mice. Two experiments here examined various aspects of mouse auditory perception using CBA/CaJ mice, a commonly used mouse strain. The frequency difference limens experiment tested the mouse's ability to discriminate one tone from another based solely on the frequency of the tone. The mice had thresholds similar to those of wild mice and gerbils but needed a larger change in frequency than humans and cats. The second psychoacoustic experiment sought to determine which cue, frequency or duration, was more salient when the mice had to identify various tones. In this identification task, the mice overwhelmingly classified the tones based on frequency instead of duration, suggesting that mice use frequency when differentiating one mouse vocalization from another. The other two experiments were more naturalistic and involved both auditory perception and mouse vocal production. Interest in mouse vocalizations is growing because of the potential for mice to become a model of human speech disorders. These experiments traced mouse vocal development from infant to adult, and they tested the mouse's preference for various vocalizations. This was the first known study to analyze the vocalizations of individual mice across development. Results showed large variation in calling rates among the three cages of adult mice but results were highly

  11. Auditory Sensitivity, Speech Perception, L1 Chinese, and L2 English Reading Abilities in Hong Kong Chinese Children

    Science.gov (United States)

    Zhang, Juan; McBride-Chang, Catherine

    2014-01-01

    A 4-stage developmental model, in which auditory sensitivity is fully mediated by speech perception at both the segmental and suprasegmental levels, which are further related to word reading through their associations with phonological awareness, rapid automatized naming, verbal short-term memory and morphological awareness, was tested with…

  12. Open your eyes and listen carefully. Auditory and audiovisual speech perception and the McGurk effect in aphasia

    NARCIS (Netherlands)

    Klitsch, Julia Ulrike

    2008-01-01

    This dissertation investigates speech perception in three different groups of native adult speakers of Dutch: an aphasic group and two age-varying control groups. By means of two different experiments, it examines whether the availability of visual articulatory information is beneficial to auditory speech

  13. Auditory Verbal Working Memory as a Predictor of Speech Perception in Modulated Maskers in Listeners with Normal Hearing

    Science.gov (United States)

    Millman, Rebecca E.; Mattys, Sven L.

    2017-01-01

    Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the…

  15. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    Studies of regional cerebral blood flow (CBF), measured after intracarotid Xenon-133 injection, are reviewed with emphasis on tests involving auditory perception and speech, an approach allowing visualization of Wernicke's and Broca's areas and their contralateral homologues in vivo. The completely atraumatic tomographic CBF

  16. The Development and Validation of an Auditory Perception Test in Spanish for Hispanic Children Receiving Reading Instruction in Spanish.

    Science.gov (United States)

    Morrison, James A.; Michael, William B.

    1982-01-01

    A Spanish auditory perception test, La Prueba de Analisis Auditivo, was developed and administered to 158 Spanish-speaking Latino children, kindergarten through grade 3. Psychometric data for the test are presented, including its relationship to SOBER, a criterion-referenced Spanish reading measure. (Author/BW)

  17. The effects of blurred vision on auditory-visual speech perception in younger and older adults.

    Science.gov (United States)

    Legault, Isabelle; Gagné, Jean-Pierre; Rhoualem, Wafaa; Anderson-Gosselin, Penny

    2010-12-01

    Speech understanding is improved when the observer can both see and hear the talker. This study compared the effects of reduced visual acuity on auditory-visual (AV) speech recognition in noise among younger and older adults. Two groups of participants performed a closed-set sentence-recognition task in one auditory-alone (A-alone) condition and under three AV conditions: normal visual acuity (6/6) and blurred vision simulating 6/30 and 6/60 visual impairments. The results showed that (1) the addition of visual speech cues improved speech perception relative to the A-alone condition, (2) under the AV conditions, performance declined as the amount of blurring increased, (3) even under the AV condition that simulated a visual acuity of 6/60, the speech recognition scores were significantly higher than those obtained under the A-alone condition, and (4) generally, younger adults obtained higher scores than older adults under all conditions. Our results demonstrate the benefits of visual cues for enhancing speech understanding even when visual acuity is not optimal.

  18. Hybrid fNIRS-EEG based classification of auditory and visual perception processes

    Directory of Open Access Journals (Sweden)

    Felix ePutze

    2014-11-01

    Full Text Available For multimodal Human-Computer Interaction (HCI), it is very useful to identify the modalities on which the user is currently processing information. This would enable a system to select complementary output modalities to reduce the user's workload. In this paper, we develop a hybrid Brain-Computer Interface (BCI) which uses Electroencephalography (EEG) and functional Near Infrared Spectroscopy (fNIRS) to discriminate and detect visual and auditory stimulus processing. We describe the experimental setup we used for collection of our data corpus with 12 subjects. We present cross-validation evaluation results for different classification conditions. We show that our subject-dependent systems achieved a classification accuracy of 97.8% for discriminating visual and auditory perception processes from each other and a classification accuracy of up to 94.8% for detecting modality-specific processes independently of other cognitive activity. The same classification conditions could also be discriminated in a subject-independent fashion with accuracies of up to 94.6% and 86.7%, respectively. We also look at the contributions of the two signal types and show that the fusion of classifiers using different features significantly increases accuracy.
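
    The fusion step described above (combining classifiers trained on different feature sets) can be sketched with synthetic data. Everything below is a hypothetical stand-in: a trivial nearest-centroid classifier per modality with a simple hold-out split, not the actual EEG/fNIRS features or classifiers of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_scores(train_x, train_y, test_x):
    """Score test samples by negative distance to each class centroid."""
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    return -dists  # higher score = closer to that class

# Synthetic stand-ins for EEG and fNIRS feature vectors, two classes
# (say, auditory vs. visual processing), 40 trials, 8 features each.
y = np.repeat([0, 1], 20)
eeg = rng.normal(0.0, 1.0, (40, 8)) + y[:, None] * 1.5
fnirs = rng.normal(0.0, 1.0, (40, 8)) + y[:, None] * 1.0

# Hold-out split for brevity (the study used proper cross-validation).
train = np.r_[0:15, 20:35]
test = np.r_[15:20, 35:40]

# Late fusion: sum the modality-specific score matrices, then take the
# argmax over classes as the fused prediction.
fused = (centroid_scores(eeg[train], y[train], eeg[test])
         + centroid_scores(fnirs[train], y[train], fnirs[test]))
pred = fused.argmax(axis=1)
accuracy = (pred == y[test]).mean()
```

    Summing (or averaging) per-modality decision scores before the argmax is the simplest form of classifier fusion; weighted combinations are a common refinement.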

  19. Auditory enhancement of visual perception at threshold depends on visual abilities.

    Science.gov (United States)

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient.
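
    A standard criterion-free procedure of the sort mentioned above is a forced-choice adaptive staircase. The sketch below simulates a 2-down/1-up staircase, which converges near the 70.7%-correct point; the psychometric function and all parameters are hypothetical, not the study's actual procedure.

```python
import random

def two_down_one_up(p_correct_at, start, step, n_trials, rng):
    """Simulate a 2-down/1-up adaptive staircase over stimulus levels."""
    level, streak, history = start, 0, []
    for _ in range(n_trials):
        history.append(level)
        if rng.random() < p_correct_at(level):
            streak += 1
            if streak == 2:        # two correct in a row -> make it harder
                level -= step
                streak = 0
        else:                      # one error -> make it easier
            level += step
            streak = 0
    return history

rng = random.Random(1)
# Hypothetical psychometric function: performance rises with stimulus level.
p = lambda lvl: min(0.99, max(0.5, 0.5 + 0.05 * lvl))
levels = two_down_one_up(p, start=10.0, step=1.0, n_trials=200, rng=rng)
# Average the levels visited after a burn-in as the threshold estimate.
threshold_estimate = sum(levels[100:]) / len(levels[100:])
```

    Because the rule depends only on right/wrong responses in a forced choice, the estimate is unaffected by a listener's response bias, which is what makes the measure criterion-free.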

  20. (A)musicality in Williams syndrome: examining relationships among auditory perception, musical skill, and emotional responsiveness to music.

    Science.gov (United States)

    Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M

    2013-01-01

    Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.

  1. The Effects of Attention and Gender on the P3 Component of the Auditory Event-related Potential

    Institute of Scientific and Technical Information of China (English)

    胡旭君; 李芳芳; 刘婷

    2015-01-01

    Objective: To observe the effects of gender and attention on the latencies and amplitudes of P3 and to assess the potential clinical value of P3. Methods: Forty normal-hearing college students were selected as subjects, including 20 males and 20 females (40 ears). P3 was recorded under a standard (auditory) condition, a reading (auditory plus visual) condition and a tasting (auditory plus gustatory) condition. Results: Gender had no significant effect on the latencies and amplitudes of P3 (P>0.05), and there was no significant difference in P3 latencies and amplitudes among the three conditions (P>0.05). Conclusion: The P3 component of the auditory event-related potential is a stable and objective electrophysiological parameter, unaffected by gender or attention state. This suggests that, for pediatric subjects, appropriate engaging stimuli may be given during P3 testing to help the test proceed smoothly.
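
    P3 latency and amplitude are typically read off as the largest positive deflection in a fixed post-stimulus window. A minimal sketch on a synthetic waveform (the window bounds and all signal parameters are illustrative, not the study's measurement settings):

```python
import numpy as np

def p3_peak(erp, times, window=(0.25, 0.50)):
    """Return (latency in s, amplitude) of the maximum in a P3 window."""
    idx = np.flatnonzero((times >= window[0]) & (times <= window[1]))
    peak = idx[np.argmax(erp[idx])]
    return times[peak], erp[peak]

# Synthetic ERP: a Gaussian "P3" peaking at 350 ms with 10 uV amplitude.
times = np.arange(-0.1, 0.8, 0.001)
erp = 10.0 * np.exp(-((times - 0.35) ** 2) / (2 * 0.03 ** 2))
latency, amplitude = p3_peak(erp, times)
```

    In practice the same peak-picking is applied to the average of many artifact-free trials rather than to a single clean waveform.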

  2. Impact of hearing aid use on auditory perception and verbal short-term memory in children with bimodal stimulation

    Directory of Open Access Journals (Sweden)

    Ostojić Sanja

    2015-01-01

    Full Text Available Introduction: The combination of electric stimulation from a cochlear implant (CI) with acoustic stimulation from a hearing aid (HA), otherwise known as bimodal hearing, may provide several binaural benefits, including binaural summation, binaural squelch, reduction of the head shadow effect, and improved localization. Purpose: This study investigated the influence of preoperative rehabilitation with bilateral HA use, bimodal stimulation post-implantation (CI on one ear and HA on the non-implanted ear) and hearing thresholds on verbal short-term memory. Method: The immediate verbal memory test for the Serbian language, consisting of four subtests, was used for auditory perception testing in 21 prelingually deaf children. Results: Duration of bimodal hearing proved to be significant in terms of auditory perception and verbal short-term memory. Mid- and high-frequency aided thresholds on the non-implanted ear were correlated with poorer perception and reproduction of monosyllables and nonsense words. Conclusion: Duration of bimodal hearing proved to be significant in terms of auditory perception, speech reproduction and semantic ability. Patients with a unilateral cochlear implant who have measurable residual hearing in the non-implanted ear should be individually fitted with a hearing aid in that ear to improve speech perception and maximize binaural sensitivity.

  3. Auditory/Verbal hallucinations, speech perception neurocircuitry, and the social deafferentation hypothesis.

    Science.gov (United States)

    Hoffman, Ralph E

    2008-04-01

    Auditory/verbal hallucinations (AVHs) consist of spoken conversational speech that seems to arise from specific, nonself speakers. One-hertz repetitive transcranial magnetic stimulation (rTMS) reduces excitability in the stimulated brain region. Studies delivering 1-Hz rTMS to the left temporoparietal cortex, a brain area critical to speech perception, have demonstrated statistically significant improvements in AVHs relative to sham stimulation. A novel mechanism of AVHs is proposed whereby dramatic pre-psychotic social withdrawal prompts neuroplastic reorganization by the "social brain", which produces spurious social meaning in the form of hallucinations of conversational speech. Preliminary evidence supporting this hypothesis includes a very high rate of social withdrawal emerging before the onset of frank psychosis in patients who develop schizophrenia and AVHs. Moreover, the reduction of AVHs elicited by temporoparietal 1-Hz rTMS is likely to reflect enhanced long-term depression. Some evidence suggests a loss of long-term depression following experimentally induced deafferentation. Finally, abnormal cortico-cortical coupling is associated with AVHs and is also a common outcome of deafferentation. AVHs of spoken speech or "voices" are reported by 60-80% of persons with schizophrenia at various times during the course of illness. AVHs are associated with high levels of distress and functional disability, and can lead to violent acts. Among patients with AVHs, these symptoms remain poorly or incompletely responsive to currently available treatments in approximately 25% of cases. For patients with AVHs who do respond to antipsychotic drugs, there is a very high likelihood that these experiences will recur in subsequent episodes. A more precise characterization of the underlying pathophysiology may lead to more efficacious treatments.

  4. The effect of auditory perception training on reading performance of the 8-9-year old female students with dyslexia: A preliminary study

    Directory of Open Access Journals (Sweden)

    Nafiseh Vatandoost

    2014-01-01

    Full Text Available Background and Aim: Dyslexia is the most common learning disability. One of the main factors playing a role in this disability is impaired auditory perception, which causes many problems in education. We aimed to study the effect of auditory perception training on the reading performance of female students with dyslexia in the third grade of elementary school. Methods: Thirty-eight female students in the third grade of elementary schools in Khomeinishahr City, Iran, were selected by multistage cluster random sampling. Of them, 20 students who were diagnosed as dyslexic by a reading test and the Wechsler test were randomly divided into two equal experimental and control groups. The experimental group received auditory perception training over ten 45-minute sessions, whereas no intervention was given to the control group. All participants were re-assessed with the reading test after the intervention (pre- and post-test method). Data were analyzed by analysis of covariance. Results: The effect of auditory perception training on reading performance (81%) was significant (p<0.0001) for all subtests except the separate compound word test. Conclusion: Our findings confirm the hypothesis that auditory perception training affects students' reading performance, so auditory perception training seems to be valuable for students with dyslexia.

  5. Differential Allocation of Attention During Speech Perception in Monolingual and Bilingual Listeners

    OpenAIRE

    Astheimer, Lori B.; Berkes, Matthias; Bialystok, Ellen

    2015-01-01

    Attention is required during speech perception to focus processing resources on critical information. Previous research has shown that bilingualism modifies attentional processing in nonverbal domains. The current study used event-related potentials (ERPs) to determine whether bilingualism also modifies auditory attention during speech perception. We measured attention to word onsets in spoken English for monolinguals and Chinese-English bilinguals. Auditory probes were inserted at four times...

  6. Auditory, Visual, and Auditory-Visual Perception of Emotions by Individuals with Cochlear Implants, Hearing Aids, and Normal Hearing

    Science.gov (United States)

    Most, Tova; Aviner, Chen

    2009-01-01

    This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust.…

  7. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    Science.gov (United States)

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  8. Inverse Effectiveness and Multisensory Interactions in Visual Event-Related Potentials with Audiovisual Speech

    Science.gov (United States)

    Bushmakin, Maxim; Kim, Sunah; Wallace, Mark T.; Puce, Aina; James, Thomas W.

    2013-01-01

    In recent years, it has become evident that neural responses previously considered to be unisensory can be modulated by sensory input from other modalities. In this regard, visual neural activity elicited by viewing a face is strongly influenced by concurrent incoming auditory information, particularly speech. Here, we applied an additive-factors paradigm aimed at quantifying the impact that auditory speech has on visual event-related potentials (ERPs) elicited by visual speech. These multisensory interactions were measured across parametrically varied stimulus salience, quantified in terms of signal to noise, to provide novel insights into the neural mechanisms of audiovisual speech perception. First, we measured a monotonic increase of the amplitude of the visual P1-N1-P2 ERP complex during a spoken-word recognition task with increases in stimulus salience. ERP component amplitudes varied directly with stimulus salience for visual, audiovisual, and summed unisensory recordings. Second, we measured changes in multisensory gain across salience levels. During audiovisual speech, the P1 and P1-N1 components exhibited less multisensory gain relative to the summed unisensory components with reduced salience, while N1-P2 amplitude exhibited greater multisensory gain as salience was reduced, consistent with the principle of inverse effectiveness. The amplitude interactions were correlated with behavioral measures of multisensory gain across salience levels as measured by response times, suggesting that the change in multisensory gain associated with unisensory salience modulations reflects an increased efficiency of visual speech processing. PMID:22367585
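
    The additive-factors logic described above compares the AV response with the sum of the unisensory responses. With hypothetical component amplitudes (the numbers below are invented for illustration, not the paper's data), the gain computation reduces to:

```python
def multisensory_gain(av_amp, a_amp, v_amp):
    """AV amplitude minus the summed unisensory amplitudes.

    Positive values indicate superadditivity (AV exceeds A + V).
    """
    return av_amp - (a_amp + v_amp)

# Hypothetical component amplitudes (uV) at high vs. low stimulus salience.
gain_high = multisensory_gain(av_amp=9.0, a_amp=5.0, v_amp=5.0)  # -1.0
gain_low = multisensory_gain(av_amp=4.5, a_amp=2.0, v_amp=2.0)   # 0.5

# Inverse effectiveness: gain relative to the summed unisensory response
# is larger when salience (and thus response amplitude) is lower.
relative_high = gain_high / (5.0 + 5.0)
relative_low = gain_low / (2.0 + 2.0)
```

    Normalizing the gain by the summed unisensory amplitude, as in the last two lines, is one common way to express the relative gain that inverse effectiveness predicts should grow as salience falls.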

  9. Effects of auditory vection speed and directional congruence on perceptions of visual vection

    Science.gov (United States)

    Gagliano, Isabella Alexis

    Spatial disorientation is a major contributor to aircraft mishaps. One potential contributing factor is vection, an illusion of self-motion. Although vection is commonly thought of as a visual illusion, it can also be produced through audition. The purpose of the current experiment was to explore interactions between conflicting visual and auditory vection cues, specifically with regard to the speed and direction of rotation. The ultimate goal was to explore the extent to which aural vection could diminish or enhance the perception of visual vection. The study used a 3 x 2 within-groups factorial design. Participants were exposed to three levels of aural rotation velocity (slower, matched, and faster, relative to visual rotation speed) and two levels of aural rotational congruence (congruent or incongruent rotation), plus two control conditions (visual-only and aural-only). Dependent measures included vection onset time, vection direction judgements, subjective vection strength ratings, vection speed ratings, and horizontal nystagmus frequency. Subjective responses to motion were assessed pre- and post-treatment, and oculomotor responses were assessed before, during, and following exposure to circular vection. The results revealed a significant effect of stimulus condition on vection strength. Specifically, directionally-congruent aural-visual vection resulted in significantly stronger vection than visual or aural vection alone. Perceptions of directionally-congruent aural-visual vection were slightly stronger than those of directionally-incongruent aural-visual vection, but not significantly so. No significant effects of aural rotation velocity on vection strength were observed. The results suggest directionally-incongruent aural vection could be used as a countermeasure for visual vection and directionally-congruent aural vection could be used to improve vection in virtual environments, provided further research is done.

  10. GRM7 variants associated with age-related hearing loss based on auditory perception.

    Science.gov (United States)

    Newman, Dina L; Fisher, Laurel M; Ohmen, Jeffrey; Parody, Robert; Fong, Chin-To; Frisina, Susan T; Mapes, Frances; Eddins, David A; Robert Frisina, D; Frisina, Robert D; Friedman, Rick A

    2012-12-01

    Age-related hearing impairment (ARHI), or presbycusis, is a common condition of the elderly that results in significant communication difficulties in daily life. Clinically, it has been defined as a progressive loss of sensitivity to sound, starting at the high frequencies; an inability to understand speech; a lengthening of the minimum discernible temporal gap in sounds; and a decrease in the ability to filter out background noise. The causes of presbycusis are likely a combination of environmental and genetic factors. Previous research into the genetics of presbycusis has focused solely on hearing as measured by pure-tone thresholds. A few loci have been identified, based on a best-ear pure-tone average phenotype, as likely playing a role in susceptibility to this type of hearing loss, and GRM7 is the only gene that has achieved genome-wide significance. We examined the association of GRM7 variants identified in the previous study, which used a European cohort with Z-scores based on pure-tone thresholds, in a European-American population from Rochester, NY (N = 687), and used novel phenotypes of presbycusis. In the present study, mixed modeling analyses were used to explore the relationship of GRM7 haplotype and SNP genotypes with various measures of auditory perception. Here we show that GRM7 alleles are associated primarily with peripheral measures of hearing loss, and particularly with speech detection in older adults.

  11. The effects of noise exposure and musical training on suprathreshold auditory processing and speech perception in noise.

    Science.gov (United States)

    Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey

    2017-09-01

    Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing, by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception, and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high frequency hearing thresholds and medial olivocochlear suppression strength are important factors that are related to the ability to process speech in noise.

  12. The effect of jogging on P300 event related potentials.

    Science.gov (United States)

    Nakamura, Y; Nishimoto, K; Akamatu, M; Takahashi, M; Maruyama, A

    1999-03-01

    Physical exercise has beneficial effects not only on the cardiovascular system and fat metabolism, but may also directly affect cognitive processes. We studied the effect of physical exercise on cognitive processes by measuring the P300 event-related potential (ERP) after jogging. Seven well-trained joggers were enrolled in this study, and P300 potentials were recorded using an auditory oddball paradigm. ERPs were measured before and after 30 minutes of jogging. The amplitude of the P300 significantly increased after jogging compared to values recorded before jogging. These findings suggest that jogging has the effect of facilitating the cognitive processes involved in generation of the P300.
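
    The auditory oddball paradigm used here presents rare target tones among frequent standards, and the P300 is measured to the targets. A minimal illustrative sketch of how such a trial sequence might be generated (the function name, the 20% target rate, and the no-consecutive-targets constraint are assumptions for illustration, not the authors' protocol):

```python
import random

def oddball_sequence(n_trials=200, p_target=0.2, seed=1):
    """Generate a two-tone oddball trial list: 'S' = frequent standard,
    'T' = rare target. Common constraints in P300 paradigms: start with
    standards and never present two targets in a row."""
    rng = random.Random(seed)
    seq = []
    for i in range(n_trials):
        if i < 2 or seq[-1] == "T":
            seq.append("S")          # force a standard after each target
        elif rng.random() < p_target:
            seq.append("T")
        else:
            seq.append("S")
    return seq

seq = oddball_sequence()
print(seq.count("T"), "targets out of", len(seq), "trials")
```

    P300 amplitude would then be estimated from EEG epochs time-locked to the 'T' trials; keeping target probability low is what elicits the P300 in the first place.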

  13. Effect of 24 Hours of Sleep Deprivation on Auditory and Linguistic Perception: A Comparison among Young Controls, Sleep-Deprived Participants, Dyslexic Readers, and Aging Adults

    Science.gov (United States)

    Fostick, Leah; Babkoff, Harvey; Zukerman, Gil

    2014-01-01

    Purpose: To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Method: Fifty-five sleep-deprived young adults were compared with 29…

  15. Comparative study of pure tone stimuli and Chinese speech stimuli on auditory event-related potentials

    Institute of Scientific and Technical Information of China (English)

    梁勇; 欧阳天斌

    2008-01-01

    Objective: To compare the characteristics of auditory event-related potentials (AERP) evoked by pure tone stimuli and Chinese speech stimuli, and to explore the feasibility of using Chinese speech stimuli to evoke AERP in Chinese speakers. Methods: AERP were tested with both Chinese speech and pure tones as stimuli in normal young participants (83 ears in 44 young postgraduate students), and each AERP waveform was scored. The latencies, amplitudes, and scores of AERP evoked by speech stimuli were compared with those evoked by pure tone stimuli. Results: Typical AERP waves were recorded and identified more easily with speech stimuli than with pure tone stimuli, and the difference was statistically significant (χ2 = 4.0, P = 0.039). For the 72 ears in which both stimuli evoked responses, the latency and amplitude of P3 evoked by Chinese speech stimuli and by pure tone stimuli showed no significant difference (P > 0.05), but the mean waveform scores of AERP evoked by speech stimuli were significantly higher than those evoked by pure tone stimuli (t = 6.57, P = 0.000). N2 and P3 latencies in the left ear evoked by speech stimuli were significantly shorter than those evoked by pure tone stimuli (P = 0.002, P = 0.003), whereas there were no significant differences in the right ear (P > 0.05). Conclusions: Chinese speech stimuli better meet the requirements of the AERP test and the conventions of spoken Chinese, and are thus more suitable than pure tone stimuli for AERP testing in Chinese speakers.

  16. Musical experience, auditory perception and reading-related skills in children.

    Directory of Open Access Journals (Sweden)

    Karen Banai

    Full Text Available BACKGROUND: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically, we ask whether the pattern of correlations between auditory and reading-related skills differs between children with different amounts of musical experience. METHODOLOGY/PRINCIPAL FINDINGS: Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading-related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading-related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading-related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. CONCLUSIONS/SIGNIFICANCE: Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading-related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading-related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and…

  17. Musical experience, auditory perception and reading-related skills in children.

    Science.gov (United States)

    Banai, Karen; Ahissar, Merav

    2013-01-01

    The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically, we ask whether the pattern of correlations between auditory and reading-related skills differs between children with different amounts of musical experience. Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading-related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading-related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading-related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading-related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading-related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and, if so, why this is the case.

  18. Auditory, Visual, and Auditory-Visual Speech Perception by Individuals with Cochlear Implants versus Individuals with Hearing Aids

    Science.gov (United States)

    Most, Tova; Rothem, Hilla; Luntz, Michal

    2009-01-01

    The researchers evaluated the contribution of cochlear implants (CIs) to speech perception by a sample of prelingually deaf individuals implanted after age 8 years. This group was compared with a group with profound hearing impairment (HA-P), and with a group with severe hearing impairment (HA-S), both of which used hearing aids. Words and…

  19. Auditory, Visual, and Auditory-Visual Perceptions of Emotions by Young Children with Hearing Loss versus Children with Normal Hearing

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-01-01

    Purpose: This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. Method: A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify…

  1. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    Science.gov (United States)

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  2. Beta/gamma oscillations and event-related potentials indicate aberrant multisensory processing in schizophrenia

    Directory of Open Access Journals (Sweden)

    Johanna Balz

    2016-12-01

    Full Text Available Recent behavioral and neuroimaging studies have suggested multisensory processing deficits in patients with schizophrenia (SCZ). Thus far, the neural mechanisms underlying these deficits are not well understood. Previous studies with unisensory stimulation have shown altered neural oscillations in SCZ. As such, aberrant oscillations could contribute to aberrant multisensory processing in this patient group. To test this assumption, we conducted an electroencephalography (EEG) study in 15 SCZ and 15 control participants in whom we examined neural oscillations and event-related potentials (ERPs) in the sound-induced flash illusion (SIFI). In the SIFI, multiple auditory stimuli presented alongside a single visual stimulus can induce the illusory percept of multiple visual stimuli. In SCZ and control participants we compared ERPs and neural oscillations between trials that induced an illusion and trials that did not. On the behavioral level, SCZ (55.7%) and control participants (55.4%) did not significantly differ in illusion rates. The analysis of ERPs revealed diminished amplitudes and altered multisensory processing in SCZ compared to controls around 135 ms after stimulus onset. Moreover, the analysis of neural oscillations revealed altered 25-35 Hz power at 100 to 150 ms over occipital scalp in SCZ compared to controls. Our findings extend previous observations of aberrant neural oscillations in unisensory perception paradigms. They suggest that altered ERPs and altered occipital beta/gamma band power reflect aberrant multisensory processing in SCZ.
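
    The 25-35 Hz (beta/gamma) effect reported above comes from a time-frequency analysis of the EEG. As a generic sketch of the band-power computation only (a plain FFT estimate on a synthetic signal; the authors' actual time-frequency pipeline and channel selection are not reproduced here):

```python
import numpy as np

def band_power(x, fs, fmin=25.0, fmax=35.0):
    """Mean power of signal x within [fmin, fmax] Hz (one-sided FFT spectrum)."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= fmin) & (freqs <= fmax)
    return power[mask].mean()

# Synthetic check: a 30 Hz oscillation in noise carries more 25-35 Hz
# power than noise alone.
fs = 500                                  # sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)           # 2 s epoch
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 30 * t) + 0.5 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)
print(band_power(sig, fs) > band_power(noise, fs))  # True
```

    In a real analysis this quantity would be computed per trial and per channel (e.g., occipital electrodes) and compared between illusion and no-illusion trials.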

  3. Proprioceptive event related potentials: gating and task effects

    DEFF Research Database (Denmark)

    Arnfred, Sidse M

    2005-01-01

    The integration of proprioception with vision, touch or audition is considered basic to the developmental formation of perceptions, conceptual objects and the creation of cognitive schemes. Thus, mapping of proprioceptive information processing is important in cognitive research. A stimulus consisting of a brisk change of weight on a hand-held load elicits a proprioceptive evoked potential (PEP). Here this is used to examine early and late information processing related to weight discrimination by event-related potentials (ERP).

  4. From perception to metacognition: Auditory and olfactory functions in early blind, late blind, and sighted individuals

    Directory of Open Access Journals (Sweden)

    Stina Cornell Kärnekull

    2016-09-01

    Full Text Available Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests of absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity.

  5. An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex.

    Science.gov (United States)

    Okada, Kayoko; Venezia, Jonathan H; Matchin, William; Saberi, Kourosh; Hickok, Gregory

    2013-01-01

    Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.

  6. From Perception to Metacognition: Auditory and Olfactory Functions in Early Blind, Late Blind, and Sighted Individuals

    Science.gov (United States)

    Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria

    2016-01-01

    Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests of absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884

  7. Event-Related Potentials to an English/Spanish Syllabic Contrast in Mexican 10–13-Month-Old Infants

    Science.gov (United States)

    Rivera-Gaxiola, Maritza; Garcia-Sierra, Adrian; Lara-Ayala, Lourdes; Cadena, Cesar; Jackson-Maldonado, Donna; Kuhl, Patricia K.

    2012-01-01

    We report brain electrophysiological responses from 10- to 13-month-old Mexican infants while listening to native and foreign CV-syllable contrasts differing in Voice Onset Time (VOT). All infants showed normal auditory event-related potential (ERP) components. Our analyses showed ERP evidence that Mexican infants are capable of discriminating their native sounds as well as the acoustically salient (aspiration) foreign contrast. The study showed that experience with native language influences VOT perception in Spanish learning infants. The acoustic salience of aspiration is perceived by both Spanish and English learning infants, but exposure provides additional phonetic status to this native-language feature for English learning infants. The effects of early experience and neural commitment as well as the impact of acoustic salience are further discussed. PMID:22577579

  8. A comparison of event-related potential of humans and rats elicited by a serial feature-positive discrimination task

    NARCIS (Netherlands)

    Sambeth, A.; Maes, J.H.R.

    2006-01-01

    The purpose of this experiment was to compare components of the human and rat auditory event-related potential (ERP) in a serial feature-positive discrimination task. Subjects learned to respond to an auditory target stimulus when it followed a visual feature (X→A+), but not to respond when it was…

  9. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

    Full Text Available Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g., cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single-event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type…

  10. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  11. A new method for detecting interactions between the senses in event-related potentials

    DEFF Research Database (Denmark)

    Gondan, Matthias; Röder, B.

    2006-01-01

    Event-related potentials (ERPs) can be used in multisensory research to determine the point in time when different senses start to interact, for example, the auditory and the visual system. For this purpose, the ERP to bimodal stimuli (AV) is often compared to the sum of the ERPs to auditory (A) and visual (V) stimuli presented alone. In the new method, (AV + C) is compared to (A + V), where C is the ERP time-locked to a no-stimulus event; common activity is eliminated because two ERPs are subtracted from two others. With this new comparison technique, the first auditory-visual interaction starts around 80 ms after stimulus onset for the present experimental setting. It is possible to apply the new comparison method…
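
    Assuming the new method compares the sum (AV + C), where C denotes the ERP time-locked to a "no-stimulus" event, against the classic sum (A + V), a toy numerical example shows why activity common to all epochs cancels in the new contrast but contaminates the classic AV - (A + V) difference (all waveforms below are invented for illustration):

```python
import numpy as np

t = np.arange(300)                               # epoch samples (arbitrary units)
common = 0.5 * np.sin(2 * np.pi * t / 300)       # activity common to ALL epochs
a = np.exp(-((t - 100) ** 2) / 200.0)            # auditory-specific response
v = np.exp(-((t - 120) ** 2) / 200.0)            # visual-specific response

erp_a = common + a           # ERP to auditory-only stimuli
erp_v = common + v           # ERP to visual-only stimuli
erp_av = common + a + v      # bimodal ERP, purely additive (no true interaction)
erp_c = common               # ERP to a "no-stimulus" event

classic = erp_av - (erp_a + erp_v)        # = -common: a spurious "interaction"
new = (erp_av + erp_c) - (erp_a + erp_v)  # common activity cancels

print(round(np.abs(classic).max(), 2))    # 0.5 (the size of the common wave)
print(np.abs(new).max() < 1e-9)           # True (flat: no false positive)
```

    Because two ERPs appear on each side of the new comparison, slow anticipatory potentials and other activity shared by every trial type drop out instead of masquerading as an early multisensory interaction.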

  12. Changes of auditory event-related potential P300 in patients with reactive depression and depressive neurosis

    Institute of Scientific and Technical Information of China (English)

    瞿玮; 黄希庭; 吴宗耀; 侯岷

    2005-01-01

    BACKGROUND: Reactive depression and depressive neurosis are both psychogenic emotional disorders, clinically manifested by memory deterioration, impaired concentration, slow reaction, and bradyphrenia, among other cognitive function impairments. These subjective experiences still need to be backed up by objective laboratory evidence, which has scarcely been available to date. OBJECTIVE: To investigate the changes of auditory event-related potential (AERP) P300 in patients with reactive depression and depressive neurosis, and their correlation with cognitive function impairments. DESIGN: Case-controlled experiment. SETTING: Clinical Center of Psychological Counseling, Department of Neurology, Third Military Medical University of Chinese PLA. PARTICIPANTS: Forty normal control subjects were recruited from the staff of Southwest Hospital, Third Military Medical University between September 1997 and March 1998, including 21 male and 19 female subjects aged 20-50 years. Eighty patients were selected from the age-matched outpatients of the same hospital seeking psychological counseling, including 40 with reactive depression (29 male, 11 female) and 40 with depressive neurosis (24 male, 16 female). From these patients, 10 with abnormal P300 potentials were randomly selected for follow-up study, including 4 with reactive depression and 4 with depressive neurosis. METHODS: All patients and normal controls underwent conventional examination of AERP P300 in the Laboratory of Cerebral Electrophysiology, and psychological evaluation was carried out by specialists with the assistance of the Zung Self-Rating Depression Scale. MAIN OUTCOME MEASURES: Results of AERP P300 examination and scores on the Zung Self-Rating Depression Scale in the 80 patients upon admission and 4 and 8 weeks after treatment, and results of AERP P300 examination in the control group.

  13. P3 Event-Related Potentials and Childhood Maltreatment in Successful and Unsuccessful Psychopaths

    Science.gov (United States)

    Gao, Yu; Raine, Adrian; Schug, Robert A.

    2011-01-01

    Although P3 event-related potential abnormalities have been found in psychopathic individuals, it is unknown whether successful (uncaught) psychopaths and unsuccessful (caught) psychopaths show similar deficits. In this study, P3 amplitude and latency were assessed from a community sample of 121 male adults using an auditory three-stimulus oddball…

  15. Event Related Potentials in the Understanding of Autism Spectrum Disorders: An Analytical Review

    Science.gov (United States)

    Jeste, Shafali S.; Nelson, Charles A., III

    2009-01-01

    In this paper we critically review the literature on the use of event related potentials (ERPs) to elucidate the neural sources of the core deficits in autism. We review auditory and visual ERP studies, and then review the use of ERPs in the investigation of executive function. We conclude that, in autism, impairments likely exist in both low and…

  16. Visual recalibration and selective adaptation in auditory-visual speech perception: Contrasting build-up courses.

    Science.gov (United States)

    Vroomen, Jean; van Linden, Sabine; de Gelder, Béatrice; Bertelson, Paul

    2007-02-01

    Exposure to incongruent auditory and visual speech produces both visual recalibration and selective adaptation of auditory speech identification. In an earlier study, exposure to an ambiguous auditory utterance (intermediate between /aba/ and /ada/) dubbed onto the video of a face articulating either /aba/ or /ada/, recalibrated the perceived identity of auditory targets in the direction of the visual component, while exposure to congruent non-ambiguous /aba/ or /ada/ pairs created selective adaptation, i.e. a shift of perceived identity in the opposite direction [Bertelson, P., Vroomen, J., & de Gelder, B. (2003). Visual recalibration of auditory speech identification: a McGurk aftereffect. Psychological Science, 14, 592-597]. Here, we examined the build-up course of the after-effects produced by the same two types of bimodal adapters, over a 1-256 range of presentations. The (negative) after-effects of non-ambiguous congruent adapters increased monotonically across that range, while those of ambiguous incongruent adapters followed a curvilinear course, going up and then down with increasing exposure. This pattern is discussed in terms of an asynchronous interaction between recalibration and selective adaptation processes.

  17. Event related potentials using visual stimulation.

    Science.gov (United States)

    Varner, J L; Rohrbaugh, J W

    1993-01-01

    Visual patterns are used to elicit event-related potentials. Equipment is available for generating geometric visual patterns such as checkerboards. Slides may be used for more complex patterns, but their preparation is costly and time-consuming. A variety of PC programs exist for making very elaborate color pictures, and in most cases these programs are easy to use, making them ideal for generating visual patterns for event-related potential experiments. A necessary requirement in event-related potential experiments is the ability to control and/or determine precisely when the stimulus is presented to the subject. We have observed that timing is a problem with PC-generated stimuli, as a result of the raster scan and, in many cases, the use of high-level system calls in the software. This paper describes a technique that allows precise control of the time of stimulus presentation using the video control signals to the monitor.

  18. Newborn brain event-related potentials revealing atypical processing of sound frequency and the subsequent association with later literacy skills in children with familial dyslexia.

    Science.gov (United States)

    Leppänen, Paavo H T; Hämäläinen, Jarmo A; Salminen, Hanne K; Eklund, Kenneth M; Guttorm, Tomi K; Lohvansuu, Kaisa; Puolakanaho, Anne; Lyytinen, Heikki

    2010-01-01

    The role played by an auditory-processing deficit in dyslexia has been debated for several decades. In a longitudinal study using brain event-related potentials (ERPs) we investigated 1) whether dyslexic children with familial risk background would show atypical pitch processing from birth and 2) how these newborn ERPs later relate to these same children's pre-reading cognitive skills and literacy outcomes. Auditory ERPs were measured at birth for tones varying in pitch and presented in an oddball paradigm (1100 Hz, 12%, and 1000 Hz, 88%). The brain responses of the typically reading control group children (TRC group, N=25) showed clear differentiation between the frequencies, while those of the group of reading disability with familial risk (RDFR, 8 children) and the group of typical readers with familial risk (TRFR, 14 children) did not differentiate between the tones. The ERPs of the latter two groups differed from those of the TRC group. However, the two risk groups also showed a differential hemispheric ERP pattern. Furthermore, newborn ERPs reflecting passive change detection were associated with phonological skills and letter knowledge prior to school age and with phoneme duration perception, reading speed (RS) and spelling accuracy in the 2nd grade of school. The early obligatory response was associated with more general pre-school language skills, as well as with RS and reading accuracy (RA). Results suggest that a proportion of dyslexic readers with familial risk background are affected by atypical auditory processing. This is already present at birth and also relates to pre-reading phonological processing and speech perception. These early differences in auditory processing could later affect phonological representations and reading development. However, atypical auditory processing is unlikely to suffice as a sole explanation for dyslexia but rather as one risk factor, dependent on the genetic profile of the child.
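    The oddball design used in this study (88% standards at 1000 Hz, 12% deviants at 1100 Hz) can be sketched as a stimulus-sequence generator. This is a minimal illustration, not the study's actual stimulus code; the `min_gap` refractory constraint and the function name are assumptions, though such a constraint is common in ERP oddball designs so that each deviant is preceded by re-established regularity.

```python
import random

def oddball_sequence(n_trials, p_deviant=0.12, standard=1000, deviant=1100,
                     min_gap=2, seed=0):
    """Generate a pitch-oddball stimulus sequence (tone frequencies in Hz).

    Deviants occur with probability p_deviant, but never within `min_gap`
    trials of the previous deviant (an illustrative constraint).
    """
    rng = random.Random(seed)
    seq, last_deviant = [], -min_gap - 1
    for i in range(n_trials):
        if i - last_deviant > min_gap and rng.random() < p_deviant:
            seq.append(deviant)
            last_deviant = i
        else:
            seq.append(standard)
    return seq

seq = oddball_sequence(500)
```

    The refractory constraint pushes the realized deviant rate slightly below the nominal 12%, which is why real designs often fix the exact deviant count instead of drawing per trial.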

  19. Olfactory short-term memory encoding and maintenance - an event-related potential study.

    Science.gov (United States)

    Lenk, Steffen; Bluschke, Annet; Beste, Christian; Iannilli, Emilia; Rößner, Veit; Hummel, Thomas; Bender, Stephan

    2014-09-01

    This study examined whether the memory encoding and short-term maintenance of olfactory stimuli are associated with neurophysiological activation patterns that parallel those described for sensory modalities such as vision and audition. We examined olfactory event-related potentials in an olfactory change detection task in twenty-four healthy adults and compared the measured activation to that found during passive olfactory stimulation. During the early olfactory post-processing phase, we found a sustained negativity over bilateral frontotemporal areas in the passive perception condition which was enhanced in the active memory task. There was no significant lateralization in either experimental condition. During the maintenance interval at the end of the delay period, we still found sustained activation over bilateral frontotemporal areas which was more negative in trials with correct - as compared to incorrect - behavioural responses. This was complemented by a generally stronger frontocentral activation. Summarizing, we were able to show that olfactory short-term memory involves a sequence of activation paralleling that found in other sensory modalities. In addition to olfactory-specific frontotemporal activations in the memory encoding phase, we found slow cortical potentials over frontocentral areas during the memory maintenance phase, indicating the activation of a supramodal memory maintenance system. These findings could represent the neurophysiological underpinning of the 'olfactory flacon', the olfactory counterpart to the visual sketchpad and phonological loop embedded in Baddeley's working memory model.

  20. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  1. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Full Text Available Abstract Background Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue fifteen healthy participants partook in a "silent" sparse temporal event-related fMRI study. In the first (visual control habituation) phase they were presented with briefly flashing red visual stimuli. In the second (auditory control habituation) phase they heard brief telephone ringing. In the third (conditioning) phase we coincidently presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion These results demonstrate involvement of multisensory and auditory association areas in perception of unimodal visual stimulation which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we are able

  2. Bird brains and songs : Neural mechanisms of auditory memory and perception in zebra finches

    NARCIS (Netherlands)

    Gobes, S.M.H.

    2009-01-01

    Songbirds, such as zebra finches, learn their songs from a ‘tutor’ (usually the father), early in life. There are strong parallels between the behavioural, cognitive and neural processes that underlie vocal learning in humans and songbirds. In both cases there is a sensitive period for auditory lear

  3. Auditory-Acoustic Basis of Consonant Perception. Attachments A thru I

    Science.gov (United States)

    1991-01-22

    … University of Illinois, Urbana, IL. Pisoni, D. B. (1973). "Auditory and phonetic memory codes in the discrimination of consonants and vowels," Percep… The vowels were embedded in the following carrier sentences, respectively: Greek: [θa po - ksana], "I will say - again."; German: Ich sage - noch einmal, "I say - one more time."

  4. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is the most common form of perceiving a wrong stimulus or, put better, perception in the absence of a stimulus. Here we discuss several definitions of hallucination: 1. perceiving a stimulus without the presence of any subject; 2. hallucination proper, i.e., wrong perceptions that are not falsifications of a real perception, although they manifest as a new subject and occur along with, and synchronously with, a real perception; 3. hallucination as an out-of-body perception which has no accordance with a real subject. In a stricter sense, hallucinations are defined as perceptions in a conscious and awake state, in the absence of external stimuli, which have the qualities of real perception, in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.

  5. Principal component analysis of event-related potentials during changes in selective visual perception

    Institute of Scientific and Technical Information of China (English)

    贾渭泉; 兰晓阳; 崔芳; 杨飞; 于生元

    2013-01-01

    Objective: To explore the mechanism of figure-ground perception by decomposing 64-channel ERP recordings into their constituent components, providing direct visualization of the corresponding waveforms and potential distributions for clinical research. Methods: Thirty-two healthy participants (mean age 30.34 ± 4.02 years) took part in the study. High-density (64-channel) ERPs were recorded while participants viewed two ambiguous figures, Rubin's face/vase and the young lady/old woman ("carline") illusion, and the ERP data were analysed by principal component analysis (PCA). Results: Six principal components were elicited by the two readings of each figure. The change in the perceived figure was associated with a phase reversal of the principal components between 100 and 450 ms. Conclusion: The early stages of figure-ground perception are reflected in waveform changes as early as the 100-ms component. The phase reversal accompanies the whole potential period (100-450 ms) and is likely to have important physiological meaning for clinical research.
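    The decomposition step described above can be sketched in stdlib-only Python: power iteration recovers the first principal component (the dominant loading pattern across channels) from centred data. This is an illustrative toy with 3 "channels" rather than a full PCA of 64-channel ERP data; the function name and the synthetic data are assumptions.

```python
import math
import random

def first_principal_component(data, n_iter=200, seed=0):
    """Power-iteration estimate of the first principal component.

    `data` is a list of observations (rows, e.g. time points), each a list
    of channel values. Returns a unit-length loading vector over channels.
    """
    n_obs, n_ch = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n_obs for j in range(n_ch)]
    centred = [[row[j] - means[j] for j in range(n_ch)] for row in data]
    # channel-by-channel covariance matrix
    cov = [[sum(r[i] * r[j] for r in centred) / (n_obs - 1)
            for j in range(n_ch)] for i in range(n_ch)]
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n_ch)]
    for _ in range(n_iter):                      # repeatedly apply cov, renormalize
        w = [sum(cov[i][j] * v[j] for j in range(n_ch)) for i in range(n_ch)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Toy 3-channel data in which one spatial pattern (1, 0.1, 0) carries all variance
data = [[t, 0.1 * t, 0.0] for t in range(-5, 6)]
loading = first_principal_component(data)
```

    Subsequent components would be extracted the same way after projecting out the components already found (deflation), which is how a full PCA of multichannel ERP data proceeds.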

  6. Analysis of event-related potentials (ERP) by damped sinusoids.

    Science.gov (United States)

    Demiralp, T; Ademoglu, A; Istefanopulos, Y; Gülçür, H O

    1998-06-01

    Several researchers propose that event-related potentials (ERPs) can be explained by a superposition of transient oscillations at certain frequency bands in response to external or internal events. The transient nature of the ERP makes it well suited to modelling as a sum of damped sinusoids. These damped sinusoids can be completely characterized by four sets of parameters, namely the amplitude, the damping coefficient, the phase and the frequency. The Prony method is used to estimate these parameters. In this study, the long-latency auditory-evoked potentials (AEPs) and the auditory oddball responses (P300) of 10 healthy subjects are analysed by this method. It is shown that the original waveforms can be reconstructed by summing a small number of damped sinusoids. This allows for a parsimonious representation of the ERPs. Furthermore, the method shows that the oddball target responses contain higher-amplitude, slower delta and slower damped theta components than those of the AEPs. With this technique, we show that the differentiation of sensory and cognitive potentials is not inherent in their overall frequency content but in their frequency components at certain bands.
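    The four-parameter model underlying this analysis can be written down directly: each component is A·e^(−d·t)·cos(2πf·t + φ), and the ERP is their superposition. The sketch below only illustrates the model and its synthesis; it does not implement Prony parameter estimation itself, and the two-component parameter values are illustrative, not fitted values from the study.

```python
import math

def damped_sinusoid(t, amplitude, damping, frequency, phase):
    """One component: A * exp(-d*t) * cos(2*pi*f*t + phi)."""
    return amplitude * math.exp(-damping * t) * math.cos(
        2 * math.pi * frequency * t + phase)

def erp_model(t, components):
    """An ERP modelled as a superposition of damped sinusoids."""
    return sum(damped_sinusoid(t, *c) for c in components)

# Hypothetical two-component model: a slow delta-band wave plus a faster,
# more strongly damped theta-band wave (amplitude, damping, frequency, phase).
components = [
    (5.0, 2.0, 2.0, 0.0),   # delta: 2 Hz
    (2.0, 4.0, 6.0, 1.0),   # theta: 6 Hz
]
waveform = [erp_model(i / 250, components) for i in range(250)]  # 1 s at 250 Hz
```

    Prony's method would run in the opposite direction, estimating the (A, d, f, φ) quadruples from a recorded waveform via linear prediction; the point here is only that a handful of such quadruples suffices to reconstruct a transient ERP-like waveform.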

  7. Cross-modal speech perception in adults and infants using nonspeech auditory stimuli.

    Science.gov (United States)

    Kuhl, P K; Williams, K A; Meltzoff, A N

    1991-08-01

    Adults and infants were tested for the capacity to detect correspondences between nonspeech sounds and real vowels. The /i/ and /a/ vowels were presented in 3 different ways: auditory speech, silent visual faces articulating the vowels, or mentally imagined vowels. The nonspeech sounds were either pure tones or 3-tone complexes that isolated a single feature of the vowel without allowing the vowel to be identified. Adults perceived an orderly relation between the nonspeech sounds and vowels. They matched high-pitched nonspeech sounds to /i/ vowels and low-pitched nonspeech sounds to /a/ vowels. In contrast, infants could not match nonspeech sounds to the visually presented vowels. Infants' detection of correspondence between auditory and visual speech appears to require the whole speech signal; with development, an isolated feature of the vowel is sufficient for detection of the cross-modal correspondence.

  8. An investigation of the auditory perception of western lowland gorillas in an enrichment study.

    Science.gov (United States)

    Brooker, Jake S

    2016-09-01

    Previous research has highlighted the varied effects of auditory enrichment on different captive animals. This study investigated how manipulating musical components can influence the behavior of a group of captive western lowland gorillas (Gorilla gorilla gorilla) at Bristol Zoo. The gorillas were observed during exposure to classical music, rock-and-roll music, and rainforest sounds. The two music conditions were modified to create five further conditions: unmanipulated, decreased pitch, increased pitch, decreased tempo, and increased tempo. We compared the prevalence of activity, anxiety, and social behaviors between the standard conditions. We also compared the prevalence of each of these behaviors across the manipulated conditions of each type of music independently and collectively. Control observations with no sound exposure were regularly scheduled between the observations of the 12 auditory conditions. The results suggest that naturalistic rainforest sounds had no influence on the anxiety of captive gorillas, contrary to past research. The tempo of music appears to be significantly associated with activity levels among this group, and social behavior may be affected by pitch. Low tempo music also may be effective at reducing anxiety behavior in captive gorillas. Regulated auditory enrichment may provide effective means of calming gorillas, or for facilitating active behavior. Zoo Biol. 35:398-408, 2016.

  9. Impaired Pitch Perception and Memory in Congenital Amusia: The Deficit Starts in the Auditory Cortex

    Science.gov (United States)

    Albouy, Philippe; Mattout, Jeremie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaetan; Aguera, Pierre-Emmanuel; Daligault, Sebastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara

    2013-01-01

    Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a…

  11. Visually guided auditory attention in a dynamic "cocktail-party" speech perception task: ERP evidence for age-related differences.

    Science.gov (United States)

    Getzmann, Stephan; Wascher, Edmund

    2017-02-01

    Speech understanding in the presence of concurrent sound is a major challenge, especially for older persons. In particular, conversational turn-takings usually result in switch costs, as indicated by poorer speech perception after a change in the relevant target talker. Here, we investigated whether visual cues indicating the future position of a target talker may reduce the costs of switching in younger and older adults. We employed a speech perception task, in which sequences of short words were simultaneously presented by three talkers, and analysed behavioural measures and event-related potentials (ERPs). Informative cues resulted in increased performance after a spatial change in target talker compared to uninformative cues, which did not indicate the future target position. The older participants especially benefited from knowing the future target position in advance, as indicated by reduced response times after informative cues. The ERP analysis revealed an overall reduced N2, and a reduced P3b to changes in the target talker location in older participants, suggesting reduced inhibitory control and context updating. On the other hand, a pronounced frontal late positive complex (f-LPC) to the informative cues indicated increased allocation of attentional resources to changes in target talker in the older group, in line with the decline-compensation hypothesis. Thus, knowing where to listen has the potential to compensate for age-related decline in attentional switching in a highly variable cocktail-party environment.

  12. The Role of Sensory Perception, Emotionality and Lifeworld in Auditory Word Processing: Evidence from Congenital Blindness and Synesthesia.

    Science.gov (United States)

    Papadopoulos, Judith; Domahs, Frank; Kauschke, Christina

    2017-06-22

    Although it has been established that human beings process concrete and abstract words differently, it is still a matter of debate what factors contribute to this difference. Since concrete concepts are closely tied to sensory perception, perceptual experience seems to play an important role in their processing. The present study investigated the processing of nouns during an auditory lexical decision task. Participants came from three populations differing in their visual-perceptual experience: congenitally blind persons, word-color synesthetes, and sighted non-synesthetes. Specifically, three features with potential relevance to concreteness were manipulated: sensory perception, emotionality, and Husserlian lifeworld, a concept related to the inner versus the outer world of the self. In addition to a classical concreteness effect, our results revealed a significant effect of lifeworld: words that are closely linked to the internal states of humans were processed faster than words referring to the outside world. When lifeworld was introduced as predictor, there was no effect of emotionality. Concerning participants' perceptual experience, an interaction between participant group and item characteristics was found: the effects of both concreteness and lifeworld were more pronounced for blind compared to sighted participants. We will discuss the results in the context of embodied semantics, and we will propose an approach to concreteness based on the individual's bodily experience and the relatedness of a given concept to the self.

  13. Context-dependent changes in functional connectivity of auditory cortices during the perception of object words

    NARCIS (Netherlands)

    Dam, W.O. van; Dongen, E.V. van; Bekkering, H.; Rueschemeyer, S.A.

    2012-01-01

    Embodied theories hold that cognitive concepts are grounded in our sensorimotor systems. Specifically, a number of behavioral and neuroimaging studies have buttressed the idea that language concepts are represented in areas involved in perception and action [Pulvermueller, F. Brain mechanisms

  14. A dynamic auditory-cognitive system supports speech-in-noise perception in older adults.

    Science.gov (United States)

    Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina

    2013-06-01

    Understanding speech in noise is one of the most complex activities encountered in everyday life, relying on peripheral hearing, central auditory processing, and cognition. These abilities decline with age, and so older adults are often frustrated by a reduced ability to communicate effectively in noisy environments. Many studies have examined these factors independently; in the last decade, however, the idea of an auditory-cognitive system has emerged, recognizing the need to consider the processing of complex sounds in the context of dynamic neural circuits. Here, we used structural equation modeling to evaluate the interacting contributions of peripheral hearing, central processing, cognitive ability, and life experiences to understanding speech in noise. We recruited 120 older adults (ages 55-79) and evaluated their peripheral hearing status, cognitive skills, and central processing. We also collected demographic measures of life experiences, such as physical activity, intellectual engagement, and musical training. In our model, central processing and cognitive function predicted a significant proportion of variance in the ability to understand speech in noise. To a lesser extent, life experience predicted hearing-in-noise ability through modulation of brainstem function. Peripheral hearing levels did not significantly contribute to the model. Previous musical experience modulated the relative contributions of cognitive ability and lifestyle factors to hearing in noise. Our models demonstrate the complex interactions required to hear in noise and the importance of targeting cognitive function, lifestyle, and central auditory processing in the management of individuals who are having difficulty hearing in noise.

  15. The use of auditory and visual context in speech perception by listeners with normal hearing and listeners with cochlear implants.

    Science.gov (United States)

    Winn, Matthew B; Rhone, Ariane E; Chatterjee, Monita; Idsardi, William J

    2013-01-01

    There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing (NH) accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants (CIs) can do the same. In this study, listeners with NH and listeners with CIs were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /∫/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but showed considerable success. Listeners with CIs not only showed sensitivity to auditory cues to gender, they were also able to use visual cues to gender (i.e., faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with NH. Spectrally-degraded stimuli heard by listeners with NH generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with CIs are able to accommodate various sources of acoustic variability either by attending to appropriate acoustic cues or by inferring them via the visual signal.
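    The boundary-shift measure used here, the 50% point of a logistic psychometric function over an /s/-/ʃ/ continuum, can be sketched in a simplified form. The study fitted mixed-effects logistic regression; the version below is a deliberately reduced fixed-effects stand-in fitted by gradient descent, and the function name, learning rate, and toy response data are all illustrative assumptions.

```python
import math

def psychometric_boundary(x, y, lr=0.5, n_iter=10000):
    """Fit P("sh" response | stimulus x) = 1 / (1 + exp(-(a + b*x))) by
    gradient descent; return the 50% category boundary and the slope.

    A simplified fixed-effects stand-in for mixed-effects logistic regression.
    """
    mx = sum(x) / len(x)
    xc = [xi - mx for xi in x]           # centre the predictor for stable fitting
    a, b, n = 0.0, 0.0, len(x)
    for _ in range(n_iter):
        ga = gb = 0.0
        for xi, yi in zip(xc, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += (p - yi) / n           # gradient of mean log-loss w.r.t. intercept
            gb += (p - yi) * xi / n      # ... and w.r.t. slope
        a -= lr * ga
        b -= lr * gb
    return mx - a / b, b                 # boundary on the original scale, slope

# Hypothetical listener data: a 7-step /s/-/ʃ/ continuum, two trials per step,
# with "sh" responses becoming more likely toward step 7.
steps = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7]
sh_resp = [0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1]
boundary, slope = psychometric_boundary(steps, sh_resp)
```

    A context effect of the kind measured in the study would appear as a shift in `boundary` between conditions (e.g. male vs. female talker), with the random-effects structure of the real model capturing per-listener variation in both boundary and slope.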

  16. The use of auditory and visual context in speech perception by listeners with normal hearing and listeners with cochlear implants

    Directory of Open Access Journals (Sweden)

    Matthew eWinn

    2013-11-01

    Full Text Available There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants can do the same. In this study, listeners with normal hearing (NH) and listeners with cochlear implants (CIs) were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /ʃ/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but showed considerable success. Listeners with cochlear implants not only showed sensitivity to auditory cues to gender, they were also able to use visual cues to gender (i.e. faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with normal hearing. Spectrally-degraded stimuli heard by listeners with normal hearing generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with cochlear implants are able to accommodate various sources of acoustic variability either by attending to appropriate acoustic cues or by inferring them via the visual signal.

  17. Auditory imagery: empirical findings.

    Science.gov (United States)

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear).

  18. The Music of Your Emotions: Neural Substrates Involved in Detection of Emotional Correspondence between Auditory and Visual Music Actions

    OpenAIRE

    Petrini, K.; Crabbe, F.; Sheridan, C; Pollick, Frank

    2011-01-01

    In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (music...

  19. Comparative study of three techniques of palatoplasty in patients with cleft of lip and palate via instrumental and auditory-perceptive evaluations

    OpenAIRE

    Paniagua, Lauren Medeiros; Collares, Marcus Vinícius Martins; Costa, Sady Selaimen da

    2010-01-01

    Introduction: Palatoplasty is a surgical procedure that aims at the reconstruction of the soft and/or hard palate. Currently, different techniques are available that seek greater lengthening of the soft palate toward the nasopharyngeal wall, to contribute to the proper operation of the velopharyngeal sphincter; failure of its closure gives rise to speech dysfunctions. Objective: To compare the auditory-perceptive evaluations and instrumental findings in patients with cleft lip and pala...

  20. Reduced object related negativity response indicates impaired auditory scene analysis in adults with autistic spectrum disorder

    Directory of Open Access Journals (Sweden)

    Veema Lodhia

    2014-02-01

    Full Text Available Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an Object Related Negativity (ORN in the auditory event related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG was used to measure brain responses from a group of 16 autistic adults, and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400. These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.
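    The dichotic pitch stimulus described above can be sketched by spectral synthesis: a noise built from random-phase partials is sent to both ears, and only the partials in a narrow band receive an interaural time difference, so the pitch percept arises purely from binaural comparison. All parameter values (band edges, ITD, sample rate) and the function name are illustrative assumptions, not the stimulus parameters of the study.

```python
import math
import random

def dichotic_pitch(duration=0.5, fs=8000, f_lo=550, f_hi=650, itd=0.0005, seed=1):
    """Left/right waveforms for a lateralized dichotic pitch stimulus.

    The noise is synthesised as a sum of random-phase partials. Partials
    inside [f_lo, f_hi] are delayed by `itd` seconds in the right ear only;
    all other partials are identical in both ears, so any pitch percept must
    come from binaural cues rather than a monaural spectral cue.
    """
    rng = random.Random(seed)
    n = int(duration * fs)
    # 50 Hz spaced partials up to the Nyquist frequency, each with a fixed phase
    phases = {f: rng.uniform(0, 2 * math.pi) for f in range(50, fs // 2, 50)}
    left, right = [], []
    for i in range(n):
        t = i / fs
        l = r = 0.0
        for f, ph in phases.items():
            l += math.sin(2 * math.pi * f * t + ph)
            delay = itd if f_lo <= f <= f_hi else 0.0   # delay target band only
            r += math.sin(2 * math.pi * f * (t - delay) + ph)
        left.append(l)
        right.append(r)
    return left, right

left, right = dichotic_pitch(duration=0.05)
```

    With `itd=0` the two channels are identical and no dichotic pitch is heard; lateralization of the percept follows which ear receives the delayed band.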

  1. Compensation for Coarticulation: Disentangling Auditory and Gestural Theories of Perception of Coarticulatory Effects in Speech

    Science.gov (United States)

    Viswanathan, Navin; Magnuson, James S.; Fowler, Carol A.

    2010-01-01

    According to one approach to speech perception, listeners perceive speech by applying general pattern matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different…

  2. AUDITORY PERCEPTION OF MUSICAL SOUNDS BY CHILDREN IN THE FIRST SIX GRADES.

    Science.gov (United States)

    PETZOLD, ROBERT G.

    The nature and development of certain fundamental musical skills were studied. This study focused on aural perception as an integral factor in the child's musical development. Two major aspects of this 5-year study included (1) a longitudinal study of three groups of children and (2) a series of 1-year pilot studies dealing with rhythm, timbre, and…

  3. Context-dependent changes in functional connectivity of auditory cortices during the perception of object words

    NARCIS (Netherlands)

    Dam, W.O. van; Dongen, E.V. van; Bekkering, H.; Rüschemeyer, S.A.

    2012-01-01

    Embodied theories hold that cognitive concepts are grounded in our sensorimotor systems. Specifically, a number of behavioral and neuroimaging studies have buttressed the idea that language concepts are represented in areas involved in perception and action [Pulvermueller, F. Brain mechanisms linkin

  4. Parents' perception of the auditory attention skills of their children with cleft lip and palate: retrospective study

    Directory of Open Access Journals (Sweden)

    Mondelli, Maria Fernanda Capoani Garcia

    2012-01-01

    Full Text Available Introduction: Cognitive and neurophysiological mechanisms are needed to process and decode acoustic stimulation. Auditory stimulation is influenced by higher-level cognitive factors such as memory, attention, and learning. The sensory deprivation caused by conductive hearing loss, frequent in the population with cleft lip and palate, can affect many cognitive functions, among them attention, besides harming school, linguistic, and interpersonal performance. Objective: To verify the perception of parents of children with cleft lip and palate regarding their children's auditory attention. Method: Retrospective study of infants with any type of cleft lip and palate and no associated genetic syndrome, whose parents answered a questionnaire about auditory attention skills. Results: 44 children were male and 26 female; 35.71% of the answers were affirmative for hearing loss and 71.43% for otologic infections. Conclusion: Most of the interviewed parents reported at least one of the attention-related behaviors in the questionnaire, indicating that the presence of cleft lip and palate may be related to difficulties in auditory attention.

  5. Crossmodal effects of Guqin and piano music on selective attention: an event-related potential study.

    Science.gov (United States)

    Zhu, Weina; Zhang, Junjun; Ding, Xiaojun; Zhou, Changle; Ma, Yuanye; Xu, Dan

    2009-11-27

    To compare the effects of music from different cultural environments (Guqin: Chinese music; piano: Western music) on crossmodal selective attention, behavioral and event-related potential (ERP) data in a standard two-stimulus visual oddball task were recorded from Chinese subjects in three conditions: silence, Guqin music, or piano music background. Visual task data were then compared with auditory task data collected previously. In contrast with the results of the auditory task, the early (N1) and late (P300) stages exhibited no differences between Guqin and piano backgrounds during the visual task. Taking our previous study and this study together, we conclude that although culturally familiar music influenced selective attention at both the early and late stages, these effects appeared only within a sensory modality (auditory) and not across sensory modalities (visual). Thus, the musical cultural factor is more evident in intramodal than in crossmodal selective attention.

  6. The consistency of crossmodal synchrony perception across the visual, auditory, and tactile senses.

    Science.gov (United States)

    Machulla, Tonja-Katrin; Di Luca, Massimiliano; Ernst, Marc O

    2016-07-01

    Crossmodal judgments of relative timing commonly yield a nonzero point of subjective simultaneity (PSS). Here, we test whether subjective simultaneity is coherent across all pairwise combinations of the visual, auditory, and tactile modalities. To this end, we examine PSS estimates for transitivity: If Stimulus A has to be presented x ms before Stimulus B to result in subjective simultaneity, and B y ms before C, then A and C should appear simultaneous when A precedes C by z ms, where z = x + y. We obtained PSS estimates via 2 different timing judgment tasks-temporal order judgments (TOJs) and synchrony judgments (SJs)-thus allowing us to examine the relationship between TOJ and SJ. We find that (a) SJ estimates do not violate transitivity, and that (b) TOJ and SJ data are linearly related. Together, these findings suggest that both TOJ and SJ access the same perceptual representation of simultaneity and that this representation is globally coherent across the tested modalities. Furthermore, we find that (c) TOJ estimates are intransitive. This is consistent with the proposal that while the perceptual representation of simultaneity is coherent, relative timing judgments that access this representation can at times be incoherent with each other because of postperceptual response biases.
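
    The transitivity test used in this study (z = x + y) amounts to a one-line check. In the sketch below, the modality labels, example values, and tolerance are illustrative assumptions, not the study's data:

```python
def check_transitivity(pss, a='audio', b='visual', c='tactile', tol=20.0):
    """PSS transitivity: the A-C point of subjective simultaneity should equal
    the sum of the A-B and B-C values (z = x + y), within a tolerance in ms.
    pss maps (leading, trailing) modality pairs to the lead time in ms that
    yields subjective simultaneity."""
    predicted = pss[(a, b)] + pss[(b, c)]
    return abs(pss[(a, c)] - predicted) <= tol
```

    For example, PSS estimates of +30 ms (audio before visual), -10 ms (visual before tactile), and +25 ms (audio before tactile) would pass, since 30 + (-10) = 20 is within 20 ms of 25.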

  7. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech.

    Science.gov (United States)

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2015-12-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders its parameter estimates uninterpretable. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal.

  8. Study on auditory processing and perception in school-age children of 2nd and 3rd grade

    OpenAIRE

    Wohlleben, Bärbel

    2010-01-01

    Disorders in auditory processing have gained importance over the past 15 years in international research as causal factors in problems of speech-language development and of dyslexia. At the present stage of research, auditory processing is investigated mostly by means of subjective audiometric and psychometric tests, and no operational, sharply defined criteria for "auditory processing disorder" (APD) yet exist. Therefore the present study was undertaken to develo...

  9. Auditory stream segregation in children with Asperger syndrome.

    Science.gov (United States)

    Lepistö, T; Kuitunen, A; Sussman, E; Saalasti, S; Jansson-Verkasalo, E; Nieminen-von Wendt, T; Kujala, T

    2009-12-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception.

  10. The effect of amblyopia on visual-auditory speech perception: why mothers may say "Look at me when I'm talking to you".

    Science.gov (United States)

    Burgmeier, Robert; Desai, Rajen U; Farner, Katherine C; Tiano, Benjamin; Lacey, Ryan; Volpe, Nicholas J; Mets, Marilyn B

    2015-01-01

    Children with a history of amblyopia, even if resolved, exhibit impaired visual-auditory integration and perceive speech differently. To determine whether a history of amblyopia is associated with abnormal visual-auditory speech integration. Retrospective observational study at an academic pediatric ophthalmologic clinic with an average of 4 years of follow-up. Participants were at least 3 years of age and without any history of neurologic or hearing disorders. Of 39 children originally in our study, 6 refused to participate. The remaining 33 participants completed the study. Twenty-four participants (mean [SD] age, 7.0 [1.5] years) had a history of amblyopia in 1 eye, with a visual acuity of at least 20/20 in the nonamblyopic eye. Nine controls (mean [SD] age, 8.0 [3.4] years) were recruited from referrals for visually insignificant etiologies or through preschool-screening eye examinations; all had 20/20 in both eyes. Participants were presented with a video demonstrating the McGurk effect (ie, a stimulus presenting an audio track playing the sound /pa/ and a separate video track of a person articulating /ka/). Normal visual-auditory integration produces the perception of hearing a fusion sound /ta/. Participants were asked to report which sound was perceived, /ka/, /pa/, or /ta/. Prevalence of perception of the fusion /ta/ sound. Prior to the study, amblyopic children were hypothesized to perceive /ta/ less frequently. The McGurk effect was perceived by 11 of the 24 participants with amblyopia (45.8%) and all 9 controls (100%) (adjusted odds ratio, 22.3 [95% CI, 1.2-426.0]; P = .005). The McGurk effect was perceived by 100% of participants with amblyopia that was resolved by 5 years of age and by 100% of participants whose amblyopia developed at or after 5 years of age. However, only 18.8% of participants with amblyopia that was unresolved by 5 years of age (n = 16) perceived the McGurk effect (adjusted odds ratio, 27.0 [95% CI, 1.1-654.0]; P = .02
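
    The odds ratios reported above are covariate-adjusted, but a crude ratio can be sketched from the cell counts given (9/9 controls vs. 11/24 amblyopic participants perceiving the fusion). Because one cell is zero, the sketch below applies the Haldane-Anscombe correction of adding 0.5 to every cell; this is an illustration of the arithmetic, not the paper's adjusted analysis:

```python
def odds_ratio_haldane(a, b, c, d):
    """Odds ratio (a*d)/(b*c) for a 2x2 table, with 0.5 added to every cell
    (Haldane-Anscombe correction) so the estimate stays finite when a cell is 0."""
    return ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))

# Controls: 9 perceived the fusion, 0 did not; amblyopia: 11 did, 13 did not.
ratio = odds_ratio_haldane(9, 0, 11, 13)  # about 22.3
```

    With these counts the corrected crude estimate happens to land near the adjusted value of 22.3 reported in the abstract.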

  11. Processing of audiovisually congruent and incongruent speech in school-age children with a history of specific language impairment: a behavioral and event-related potentials study.

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Macias, Danielle; Gustafson, Dana

    2015-09-01

    Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we examined perception of both audiovisually congruent and audiovisually incongruent speech in school-age children with a history of SLI (H-SLI), their typically developing (TD) peers, and adults. In the first experiment, all participants watched videos of a talker articulating syllables 'ba', 'da', and 'ga' under three conditions - audiovisual (AV), auditory only (A), and visual only (V). The amplitude of the N1 (but not of the P2) event-related component elicited in the AV condition was significantly reduced compared to the N1 amplitude measured from the sum of the A and V conditions in all groups of participants. Because N1 attenuation to AV speech is thought to index the degree to which facial movements predict the onset of the auditory signal, our findings suggest that this aspect of audiovisual speech perception is mature by mid-childhood and is normal in the H-SLI children. In the second experiment, participants watched videos of audiovisually incongruent syllables created to elicit the so-called McGurk illusion (with an auditory 'pa' dubbed onto a visual articulation of 'ka', the expected percept being 'ta' if audiovisual integration took place). As a group, H-SLI children were significantly more likely than either TD children or adults to hear the McGurk syllable as 'pa' (in agreement with its auditory component) than as 'ka' (in agreement with its visual component), suggesting that susceptibility to the McGurk illusion is reduced in at least some children with a

  12. Functionally integrated neural processing of linguistic and talker information: An event-related fMRI and ERP study.

    Science.gov (United States)

    Zhang, Caicai; Pugh, Kenneth R; Mencl, W Einar; Molfese, Peter J; Frost, Stephen J; Magnuson, James S; Peng, Gang; Wang, William S-Y

    2016-01-01

    Speech signals contain information of both linguistic content and a talker's voice. Conventionally, linguistic and talker processing are thought to be mediated by distinct neural systems in the left and right hemispheres respectively, but there is growing evidence that linguistic and talker processing interact in many ways. Previous studies suggest that talker-related vocal tract changes are processed integrally with phonetic changes in the bilateral posterior superior temporal gyrus/superior temporal sulcus (STG/STS), because the vocal tract parameter influences the perception of phonetic information. It is yet unclear whether the bilateral STG is also activated by the integral processing of another parameter - pitch, which influences the perception of lexical tone information and is related to talker differences in tone languages. In this study, we conducted separate functional magnetic resonance imaging (fMRI) and event-related potential (ERP) experiments to examine the spatial and temporal loci of interactions of lexical tone and talker-related pitch processing in Cantonese. We found that the STG was activated bilaterally during the processing of talker changes when listeners attended to lexical tone changes in the stimuli and during the processing of lexical tone changes when listeners attended to talker changes, suggesting that lexical tone and talker processing are functionally integrated in the bilateral STG. It extends the previous study, providing evidence for a general neural mechanism of integral phonetic and talker processing in the bilateral STG. The ERP results show interactions of lexical tone and talker processing 500-800ms after auditory word onset (a simultaneous posterior P3b and a frontal negativity). Moreover, there is some asymmetry in the interaction, such that unattended talker changes affect linguistic processing more than vice versa, which may be related to the ambiguity that talker changes cause in speech perception and/or attention bias

  13. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès Alsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  14. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  15. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training, and then performed an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  16. Cognitive event-related potentials and brain magnetic resonance imaging in HTLV-1 associated myelopathy (HAM).

    Science.gov (United States)

    Fukushima, T; Ikeda, T; Uyama, E; Uchino, M; Okabe, H; Ando, M

    1994-10-01

    Auditory and visual cognitive event-related potentials (ERPs) were investigated in 14 patients with HTLV-1 associated myelopathy (HAM) and in 36 normal controls. In the HAM patients, the latencies of P300 and N200 elicited by the auditory tone method were significantly delayed, as was the latency of N100 elicited by the auditory click method. No abnormal ERP components were observed with visual methods. Abnormal white-matter lesions were evident on magnetic resonance imaging (MRI) in 6 (75%) of 8 patients. There was no correlation between MRI lesions and the ERP abnormalities, but there was a significant correlation between the bifrontal index on MRI and P300 amplitudes at the Cz and Pz sites with the auditory tone method. In one patient, atrophy of the bilateral parietal lobes was seen on MRI, and P300 latencies were delayed with all methods. Therefore, the possibility that electrophysiological cognitive impairment in patients with HAM is related to brain atrophy rather than to white matter lesions requires attention.

  17. Effects of musical training and absolute pitch ability on event-related activity in response to sine tones.

    Science.gov (United States)

    Wayman, J W; Frisina, R D; Walton, J P; Hantz, E C; Crummer, G C

    1992-06-01

    The neural correlates of music perception have received relatively little scientific attention. The neural activity of listeners without musical training (N = 11), highly trained musicians (N = 14), and musicians possessing "absolute pitch" (AP) ability (N = 10) was measured. Major differences were observed in the P3, an endogenous event-related potential (ERP), which is thought to be a neurophysiological manifestation of working memory processing. The P3 was elicited using the classical "oddball" paradigm with a sine-tone series. Subjects' musical backgrounds were evaluated with a survey questionnaire. AP ability was verified with an objective pitch identification test. The P3 amplitude, latency and wave shape were evaluated along with each subject's performance score and musical background. The AP subjects showed a significantly smaller P3 amplitude than either the musicians or nonmusicians, which were nearly identical. The P3 latency was shortest for the AP subjects, and was longer for the nonmusicians. Performance scores were uniformly high in all three groups. It is concluded that AP subjects do indeed exhibit P3 ERPs, albeit with smaller amplitudes and shorter latencies. The differences in neural activity between the musicians and AP subjects were not due to musical training, as the AP subjects had similar musical backgrounds to the musician group. It is also concluded that persons with the AP ability may have superior auditory sensitivity at cortical levels and/or use unique neuropsychological strategies when processing tones.
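
    The classical "oddball" paradigm mentioned above is straightforward to generate in software. The sketch below (all parameter values are hypothetical, not taken from this study) produces a standard/deviant trial sequence while enforcing a minimum number of standards between deviants, a common constraint in P3 designs:

```python
import random

def oddball_sequence(n_trials=400, p_deviant=0.15, min_gap=2, seed=7):
    """Standard/deviant trial labels for an oddball paradigm, with at least
    `min_gap` standards between consecutive deviants."""
    rng = random.Random(seed)
    seq, since_dev = [], min_gap  # allow a deviant from the first trial
    for _ in range(n_trials):
        if since_dev >= min_gap and rng.random() < p_deviant:
            seq.append('deviant')
            since_dev = 0
        else:
            seq.append('standard')
            since_dev += 1
    return seq
```

    The spacing constraint keeps deviants rare and unpredictable, which is what drives the P3 response.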

  18. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study.

    Science.gov (United States)

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning

    2016-08-26

    The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200ms over a wide central area, the second at 280-320ms over the fronto-central area, and a third at 380-440ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing.

  19. Processing expectancy violations during music performance and perception: an ERP study.

    Science.gov (United States)

    Maidhof, Clemens; Vavatzanidis, Niki; Prinz, Wolfgang; Rieger, Martina; Koelsch, Stefan

    2010-10-01

    Musicians are highly trained motor experts with pronounced associations between musical actions and the corresponding auditory effects. However, the importance of auditory feedback for music performance is controversial, and it is unknown how feedback during music performance is processed. The present study investigated the neural mechanisms underlying the processing of auditory feedback manipulations in pianists. To disentangle effects of action-based and perception-based expectations, we compared feedback manipulations during performance to the mere perception of the same stimulus material. In two experiments, pianists bimanually performed sequences on a piano while, at random positions, the auditory feedback of single notes was manipulated, thereby creating a mismatch between an expected and actually perceived action effect (action condition). In addition, pianists listened to tone sequences containing the same manipulations (perception condition). The manipulations in the perception condition were either task-relevant (Experiment 1) or task-irrelevant (Experiment 2). In action and perception conditions, event-related potentials elicited by manipulated tones showed an early fronto-central negativity around 200 msec, presumably reflecting a feedback ERN/N200, followed by a positive deflection (P3a). The early negativity was more pronounced during the action condition than during the perception condition. This shows that during performance, the intention to produce specific auditory effects leads to stronger expectancies than the expectancies built up during music perception.

  20. Reduced event-related current density in the anterior cingulate cortex in schizophrenia.

    Science.gov (United States)

    Mulert, C; Gallinat, J; Pascual-Marqui, R; Dorn, H; Frick, K; Schlattmann, P; Mientus, S; Herrmann, W M; Winterer, G

    2001-04-01

    There is good evidence from neuroanatomic postmortem and functional imaging studies that dysfunction of the anterior cingulate cortex plays a prominent role in the pathophysiology of schizophrenia. So far, no electrophysiological localization study has been performed to investigate this deficit. We investigated 18 drug-free schizophrenic patients and 25 normal subjects with an auditory choice reaction task and measured event-related activity with 19 electrodes. Estimation of the current source density distribution in Talairach space was performed with low-resolution electromagnetic tomography (LORETA). In normals, we could differentiate between an early event-related potential peak of the N1 (90-100 ms) and a later N1 peak (120-130 ms). Subsequent current-density LORETA analysis in Talairach space showed increased activity in the auditory cortex area during the first N1 peak and increased activity in the anterior cingulate gyrus during the second N1 peak. No activation difference was observed in the auditory cortex between normals and patients with schizophrenia. However, schizophrenics showed significantly less anterior cingulate gyrus activation and slowed reaction times. Our results confirm previous findings of an electrical source in the anterior cingulate and an anterior cingulate dysfunction in schizophrenics. Our data also suggest that anterior cingulate function in schizophrenics is disturbed at a relatively early time point in the information-processing stream (100-140 ms poststimulus).

  1. Effects of white noise on event-related potentials in somatosensory Go/No-go paradigms.

    Science.gov (United States)

    Ohbayashi, Wakana; Kakigi, Ryusuke; Nakata, Hiroki

    2017-09-06

    Exposure to auditory white noise has been shown to facilitate human cognitive function. This phenomenon is termed stochastic resonance, and a moderate amount of auditory noise has been suggested to benefit individuals in hypodopaminergic states. The present study investigated the effects of white noise on the N140 and P300 components of event-related potentials in somatosensory Go/No-go paradigms. A Go or No-go stimulus was presented to the second or fifth digit of the left hand, respectively, with equal probability. Participants performed somatosensory Go/No-go paradigms while hearing white noise at three levels (45, 55, and 65 dB). The peak amplitudes of Go-P300 and No-go-P300 in the ERP waveforms were significantly larger under the 55 dB condition than under the 45 and 65 dB conditions. White noise did not affect the peak latency of N140 or P300, or the peak amplitude of N140. Behavioral data (reaction time, SD of reaction time, and error rates) showed no effect of white noise. This is the first event-related potential study to show that exposure to auditory white noise at 55 dB enhanced the amplitude of P300 during Go/No-go paradigms, reflecting changes in the neural activation of response execution and inhibition processing.
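
    Stochastic resonance, as invoked above, can be demonstrated with a toy threshold detector: a sine wave too weak to cross the threshold becomes detectable when a moderate amount of Gaussian noise is added, while too little or too much noise degrades detection. All values below are illustrative assumptions, unrelated to the study's dB levels:

```python
import math
import random

def detection_correlation(noise_sd, n=20000, amp=0.8, threshold=1.0, seed=1):
    """Correlation between a subthreshold sine and the binary output of a
    threshold detector, given Gaussian noise of standard deviation noise_sd."""
    rng = random.Random(seed)
    xs, ys = [], []
    for i in range(n):
        s = amp * math.sin(2 * math.pi * i / 100.0)  # peak amp < threshold
        y = 1.0 if s + rng.gauss(0.0, noise_sd) > threshold else 0.0
        xs.append(s)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0
```

    Sweeping the noise level shows the resonance: with these settings, detection_correlation(0.5) exceeds both detection_correlation(0.05) and detection_correlation(3.0), mirroring the inverted-U benefit of moderate noise.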

  2. Event-related evoked potentials in chronic respiratory encephalopathy

    Directory of Open Access Journals (Sweden)

    A R Al Tahan

    2010-02-01

    Full Text Available A R Al Tahan1, R Zaidan1, S Jones2, A Husain3, A Mobeireek1, A Bahammam1. 1Department of Medicine, 3Department of Physiology, College of Medicine, King Saud University, Riyadh, Saudi Arabia; 2Department of Neurophysiology, Institute of Neurology, London, UK. Background: Cognitive event-related potential (P300) latency is an index of cognitive processing time. It was found to be prolonged in dementia and in renal and hepatic encephalopathies, but has not been extensively assessed in respiratory failure. Objective: To evaluate P300 changes in patients with respiratory failure, especially those with mild or subclinical hypoxic–hypercapnic encephalopathy. Methods: Auditory event-related evoked potential P300 latency was measured using an oddball paradigm in patients with respiratory failure of any cause (partial pressure of oxygen in arterial blood (PO2) of 75 mm Hg or less). Apart from blood gas measurement, patients underwent the Mini-Mental State Examination (MMSE). Patient performance was compared with that of matched normal controls. Patients were admitted into the study from outpatient clinics and wards at King Khalid University Hospital and Sahara Hospital. Results: Thirty-four patients (12 women, 22 men) were admitted to the study. Ages ranged from 19–67 years with a mean of 46.1 years. Respiratory failure was severe or very severe in 11 patients (33%), and mild or moderate in the rest (66%). Mean values for PO2 and partial pressure of carbon dioxide in arterial blood (PCO2) were 63.7 and 45.2 mm Hg, respectively. Mean pH was 7.4 and O2 saturation was 90.7%. P300 latency ranged from 218 to 393 milliseconds, with a mean of 338.4 milliseconds. In comparison with controls (309.9 milliseconds), there was a significant difference (P = 0.007). P300 amplitude differences were not significant. No significant difference in MMSE was noted between mild and severe respiratory failure. Results of detailed neuropsychological assessment were clearly abnormal but were

  3. Modulations of the auditory M100 in an imitation task

    NARCIS (Netherlands)

    Franken, M.K.M.; Hagoort, P.; Acheson, D.J.

    2015-01-01

    Models of speech production explain event-related suppression of the auditory cortical response as reflecting a comparison between auditory predictions and feedback. The present MEG study was designed to test two predictions from this framework: (1) whether the reduced auditory response varies as a

  4. Functional MRI/event-related potential study of sensory consonance and dissonance in musicians and nonmusicians.

    Science.gov (United States)

    Minati, Ludovico; Rosazza, Cristina; D'Incerti, Ludovico; Pietrocini, Emanuela; Valentini, Laura; Scaioli, Vidmer; Loveday, Catherine; Bruzzone, Maria Grazia

    2009-01-07

    Pleasurability of individual chords, known as sensory consonance, is widely regarded as physiologically determined and has been shown to be associated with differential activity in the auditory cortex and in several other regions. Here, we present results obtained contrasting isolated four-note chords classified as consonant or dissonant in tonal music. Using event-related functional MRI, consonant chords were found to elicit a larger haemodynamic response in the inferior and middle frontal gyri, premotor cortex and inferior parietal lobule. The effect was right lateralized for nonmusicians and less asymmetric for musicians. Using event-related potentials, the degree of sensory consonance was found to modulate the amplitude of the P1 in both groups and of the N2 in musicians only.

  5. Emoticons in mind: an event-related potential study.

    Science.gov (United States)

    Churches, Owen; Nicholls, Mike; Thiessen, Myra; Kohler, Mark; Keage, Hannah

    2014-01-01

    It is now common practice, in digital communication, to use the character combination ":-)", known as an emoticon, to indicate a smiling face. Although emoticons are readily interpreted as smiling faces, it is unclear whether emoticons trigger face-specific mechanisms or whether separate systems are utilized. A hallmark of face perception is the utilization of regions in the occipitotemporal cortex, which are sensitive to configural processing. We recorded the N170 event-related potential to investigate the way in which emoticons are perceived. Inverting faces produces a larger and later N170 while inverting objects which are perceived featurally rather than configurally reduces the amplitude of the N170. We presented 20 participants with images of upright and inverted faces, emoticons and meaningless strings of characters. Emoticons showed a large amplitude N170 when upright and a decrease in amplitude when inverted, the opposite pattern to that shown by faces. This indicates that when upright, emoticons are processed in occipitotemporal sites similarly to faces due to their familiar configuration. However, the characters which indicate the physiognomic features of emoticons are not recognized by the more laterally placed facial feature detection systems used in processing inverted faces.

  6. Segmental processing in the human auditory dorsal stream.

    Science.gov (United States)

    Zaehle, Tino; Geiser, Eveline; Alter, Kai; Jancke, Lutz; Meyer, Martin

    2008-07-18

    In the present study we investigated the functional organization of sublexical auditory perception with specific respect to auditory spectro-temporal processing in speech and non-speech sounds. Participants discriminated verbal and nonverbal auditory stimuli according to either spectral or temporal acoustic features in the context of a sparse event-related functional magnetic resonance imaging (fMRI) study. Based on recent models of speech processing, we hypothesized that auditory segmental processing, as is required in the discrimination of speech and non-speech sound according to its temporal features, will lead to a specific involvement of a left-hemispheric dorsal processing network comprising the posterior portion of the inferior frontal cortex and the inferior parietal lobe. In agreement with our hypothesis results revealed significant responses in the posterior part of the inferior frontal gyrus and the parietal operculum of the left hemisphere when participants had to discriminate speech and non-speech stimuli based on subtle temporal acoustic features. In contrast, when participants had to discriminate speech and non-speech stimuli on the basis of changes in the frequency content, we observed bilateral activations along the middle temporal gyrus and superior temporal sulcus. The results of the present study demonstrate an involvement of the dorsal pathway in the segmental sublexical analysis of speech sounds as well as in the segmental acoustic analysis of non-speech sounds with analogous spectro-temporal characteristics.

  7. P300 component of event-related potentials in persons with asperger disorder.

    Science.gov (United States)

    Iwanami, Akira; Okajima, Yuka; Ota, Haruhisa; Tani, Masayuki; Yamada, Takashi; Yamagata, Bun; Hashimoto, Ryuichiro; Kanai, Chieko; Takashio, Osamu; Inamoto, Atsuko; Ono, Taisei; Takayama, Yukiko; Kato, Nobumasa

    2014-10-01

    In the present study, we investigated auditory event-related potentials in adults with Asperger disorder and normal controls using an auditory oddball task and a novelty oddball task. Task performance and the latencies of P300 evoked by both target and novel stimuli in the two tasks did not differ between the two groups. Analysis of variance revealed that there was a significant interaction effect between group and electrode site on the mean amplitude of the P300 evoked by novel stimuli, which indicated that there was an altered distribution of the P300 in persons with Asperger disorder. In contrast, there was no significant interaction effect on the mean P300 amplitude elicited by target stimuli. Considering that P300 comprises two main subcomponents, frontal-central-dominant P3a and parietal-dominant P3b, our results suggested that persons with Asperger disorder have enhanced amplitude of P3a, which indicated activated prefrontal function in this task.

  8. Distinguishing shyness and sociability in adults: An event-related electrocortical-neuroendocrine study.

    Science.gov (United States)

    Tang, Alva; Santesso, Diane L; Segalowitz, Sidney J; Schulkin, Jay; Schmidt, Louis A

    2016-09-01

    Shyness and sociability are orthogonal personality dimensions, but little is known about how the two traits are instantiated in the brain and body. Using a 3-stimulus auditory oddball task, we examined whether shyness and sociability were distinguishable on P300 event-related potentials (ERPs) in processing task-relevant, novel, and standard auditory tones in 48 young adults. ERP amplitudes were measured at four midline scalp sites (Fz, FCz, Cz, Pz). We found that shyness, but not sociability, was related to reduced frontal novelty P300 amplitudes and to high emotionality. We also found that low baseline salivary cortisol levels mediated the relation between: (a) high shyness and reduced frontal P300 amplitudes to novel tones, and (b) high shyness and high scores of emotionality. We speculate that low baseline cortisol may serve as a putative mechanism influencing central attentional states of avoidance to threat and novelty and emotional arousal in adults who are shy.
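    The mediation result above (baseline cortisol mediating the relation between shyness and frontal P300 amplitude) is commonly estimated with a product-of-coefficients approach. A minimal sketch on simulated data, assuming simple OLS paths and omitting covariates and significance testing; it does not reproduce the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 48
shyness = rng.standard_normal(n)
# Simulated mechanism: higher shyness -> lower cortisol -> smaller P300.
cortisol = -0.6 * shyness + 0.5 * rng.standard_normal(n)
p300 = 0.7 * cortisol + 0.3 * rng.standard_normal(n)

def slope(x, y):
    """OLS slope of y on x (single predictor, with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(shyness, cortisol)   # path a: predictor -> mediator
b = slope(cortisol, p300)      # path b: mediator -> outcome (simplified: X not partialled out)
indirect = a * b               # product-of-coefficients estimate of the indirect effect
print(indirect < 0)            # True: shyness lowers P300 via cortisol in this simulation
```

A full mediation analysis would also estimate the direct path with the mediator controlled and bootstrap a confidence interval for the indirect effect.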

  9. Successful syllable detection in aphasia despite processing impairments as revealed by event-related potentials

    Directory of Open Access Journals (Sweden)

    Becker Frank

    2007-01-01

    Full Text Available Abstract Background The role of impaired sound and speech sound processing in auditory language comprehension deficits in aphasia is unclear. No electrophysiological studies of attended speech sound processing in aphasia have been performed with stimuli that are discriminable even for patients with severe auditory comprehension deficits. Methods Event-related brain potentials (ERPs) were used to study speech sound processing in a syllable detection task in aphasia. In an oddball paradigm, participants had to detect the infrequent target syllable /ta:/ among the frequent standard syllable /ba:/. Ten subjects with moderate and 10 subjects with severe auditory comprehension impairment were compared to 11 healthy controls. Results N1 amplitude was reduced, indicating impaired primary stimulus analysis; N1 reduction was a predictor of auditory comprehension impairment. N2 attenuation suggests reduced attended stimulus classification and discrimination. However, all aphasic patients were able to discriminate the stimuli almost without errors, and processes related to target identification (P3) were not significantly reduced. The aphasic subjects might have discriminated the stimuli by purely auditory differences, while the ERP results reveal a reduction of language-related processing that nonetheless did not prevent performing the task. Topographic differences between aphasic subgroups and controls indicate compensatory changes in activation. Conclusion Stimulus processing in early time windows (N1, N2) is altered in aphasics, with adverse consequences for auditory comprehension of complex language material, while still allowing performance of simpler tasks (syllable detection). Compensatory patterns of speech sound processing may be activated in syllable detection but may not be functional in more complex tasks. The degree to which compensatory processes can be activated probably varies depending on factors such as lesion site, time after injury, and
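    The component measures reported in studies like this rest on averaging stimulus-locked epochs with baseline correction. A minimal single-channel sketch, with synthetic "EEG", invented event timing, and an embedded negative deflection standing in for the N1:

```python
import numpy as np

def average_erp(eeg, events, fs, tmin=-0.1, tmax=0.5):
    """Average stimulus-locked epochs, baseline-corrected to the pre-stimulus mean."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in events:
        seg = eeg[onset - pre: onset + post]
        epochs.append(seg - seg[:pre].mean())   # subtract pre-stimulus baseline
    return np.mean(epochs, axis=0)

fs = 250
rng = np.random.default_rng(2)
eeg = rng.standard_normal(fs * 60) * 5.0        # 60 s of noisy single-channel "EEG"
events = np.arange(fs, fs * 55, fs)             # one "syllable" onset per second
# Embed a small negative deflection ("N1") 100 ms after each event.
for onset in events:
    eeg[onset + int(0.1 * fs)] -= 20.0

erp = average_erp(eeg, events, fs)
n1_idx = int(0.2 * fs)                          # 100 ms post-stimulus (epoch starts at -100 ms)
print(erp[n1_idx] < 0)                          # True: the averaged N1 is negative
```

Averaging over the 54 events suppresses the background noise by roughly the square root of the trial count, which is why the deflection becomes clearly visible in the mean.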

  10. Neurophysiological Effects of Meditation Based on Evoked and Event Related Potential Recordings

    Science.gov (United States)

    Singh, Nilkamal; Telles, Shirley

    2015-01-01

    Evoked potentials (EPs) are a relatively noninvasive method to assess the integrity of sensory pathways. As the neural generators for most of the components are relatively well worked out, EPs have been used to understand the changes occurring during meditation. Event-related potentials (ERPs) yield useful information about the response to tasks, usually assessing attention. A brief review of the literature yielded eleven studies on EPs and seventeen on ERPs from 1978 to 2014. The EP studies covered short-, mid-, and long-latency EPs, using both auditory and visual modalities. ERP studies reported the effects of meditation on tasks such as the auditory oddball paradigm, the attentional blink task, mismatch negativity, and affective picture viewing, among others. Both EPs and ERPs were recorded in the several forms of meditation detailed in the review. Maximum changes occurred in mid-latency (auditory) EPs, suggesting that maximum changes occur in the corresponding neural generators in the thalamus, thalamic radiations, and primary auditory cortical areas. ERP studies showed that meditation can increase attention and enhance the efficiency of brain resource allocation, with greater emotional control. PMID:26137479

  11. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi eToida

    2016-01-01

    Full Text Available Temporal contiguity between an action and the corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of delay length. Additionally, the N300 amplitude was significantly correlated with conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.
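    The 50% detection point mentioned above can be read off a psychometric function by interpolation between tested delay conditions. A sketch with hypothetical delay conditions and detection rates; the numbers are illustrative, not the study's data.

```python
def detection_threshold(delays_ms, p_detect, criterion=0.5):
    """Linearly interpolate the delay at which detection probability crosses `criterion`."""
    for i in range(len(delays_ms) - 1):
        if p_detect[i] < criterion <= p_detect[i + 1]:
            frac = (criterion - p_detect[i]) / (p_detect[i + 1] - p_detect[i])
            return delays_ms[i] + frac * (delays_ms[i + 1] - delays_ms[i])
    return None  # criterion never crossed within the tested range

delays = [0, 100, 200, 300, 400]       # feedback delay conditions (ms)
p = [0.02, 0.20, 0.55, 0.90, 0.98]     # hypothetical detection rates per condition
print(detection_threshold(delays, p))  # falls between 100 and 200 ms
```

Real psychophysics would usually fit a sigmoid (logistic or cumulative Gaussian) rather than interpolate, but the linear version shows the idea in a few lines.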

  12. Synthetic event-related potentials: a computational bridge between neurolinguistic models and experiments.

    Science.gov (United States)

    Barrès, Victor; Simons, Arthur; Arbib, Michael

    2013-01-01

    Our previous work developed Synthetic Brain Imaging to link neural and schema network models of cognition and behavior to PET and fMRI studies of brain function. We here extend this approach to Synthetic Event-Related Potentials (Synthetic ERP). Although the method is of general applicability, we focus on ERP correlates of language processing in the human brain. The method has two components: Phase 1: To generate cortical electro-magnetic source activity from neural or schema network models; and Phase 2: To generate known neurolinguistic ERP data (ERP scalp voltage topographies and waveforms) from putative cortical source distributions and activities within a realistic anatomical model of the human brain and head. To illustrate the challenges of Phase 2 of the methodology, spatiotemporal information from Friederici's 2002 model of auditory language comprehension was used to define cortical regions and time courses of activation for implementation within a forward model of ERP data. The cortical regions from the 2002 model were modeled using atlas-based masks overlaid on the MNI high definition single subject cortical mesh. The electromagnetic contribution of each region was modeled using current dipoles whose position and orientation were constrained by the cortical geometry. In linking neural network computation via EEG forward modeling to empirical results in neurolinguistics, we emphasize the need for neural network models to link their architecture to geometrically sound models of the cortical surface, and the need for conceptual models to refine and adopt brain-atlas based approaches to allow precise brain anchoring of their modules. The detailed analysis of Phase 2 sets the stage for a brief introduction to Phase 1 of the program, including the case for a schema-theoretic approach to language production and perception presented in detail elsewhere. Unlike Dynamic Causal Modeling (DCM) and Bojak's mean field model, Synthetic ERP builds on models of networks
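    Phase 2 of the program above maps cortical dipole sources to scalp potentials through a forward model. As a deliberately crude stand-in for the realistic BEM/FEM head models such work requires, the closed-form potential of a current dipole in an infinite homogeneous conductor can be sketched as follows; the conductivity value and geometry are illustrative only.

```python
import numpy as np

def dipole_potential(r_obs, r_dip, p, sigma=0.33):
    """Potential of a current dipole in an infinite homogeneous conductor.

    V = p · (r - r0) / (4 * pi * sigma * |r - r0|^3), with conductivity sigma
    in S/m. A toy substitute for realistic multi-layer head models.
    """
    d = np.asarray(r_obs, float) - np.asarray(r_dip, float)
    return float(np.dot(p, d) / (4 * np.pi * sigma * np.linalg.norm(d) ** 3))

# One radially oriented dipole 7 cm from the head centre; "electrode" at 10 cm.
v = dipole_potential(r_obs=[0, 0, 0.10], r_dip=[0, 0, 0.07], p=[0, 0, 1e-8])
print(v > 0)   # True: a dipole pointing toward the electrode yields a positive potential
```

Realistic forward models replace this formula with boundary- or finite-element solutions over skull, scalp, and CSF compartments, which is exactly the anatomical anchoring the abstract argues neural network models need.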

  13. Neural dynamics of phonological processing in the dorsal auditory stream.

    Science.gov (United States)

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.

  14. [Auditory fatigue].

    Science.gov (United States)

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses due to sound overloads and the short list of references of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. No fatigue was observed in terms of sensory receptors. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  15. Research on characteristics and correlation of sleep architecture and auditory event-related potentials in adult patients with partial epilepsy

    Institute of Scientific and Technical Information of China (English)

    林李

    2016-01-01

    Objective: To investigate the characteristics and correlation of sleep architecture and auditory event-related potentials (AERPs) in adult patients with partial epilepsy. Methods: A total of 54 adult patients with partial epilepsy treated in our hospital from May 2012 to August 2014 were selected as the research group, and 54 contemporaneous individuals undergoing healthy physical examination served as the control group. Both groups underwent AERP and sleep EEG monitoring; the sleep parameters and AERP parameters of the two groups were analyzed and their correlation investigated. Results: Total sleep time, sleep efficiency, and the percentage of NREM3+4 sleep in total sleep time were lower in the research group than in the control group, while the percentages of NREM1, NREM2, and wakefulness in total sleep time were higher; the differences were statistically significant (P<0.05). P300 latency was longer in the research group than in the control group, and the difference was statistically significant (P<0.05). In the research group, P300 latency was negatively correlated with the percentage of NREM3+4 sleep in total sleep time (r=-0.452, P<0.05) and with sleep efficiency (r=-0.413, P<0.05), with no obvious correlation with the remaining sleep parameters. Conclusion: Both sleep architecture and AERPs are abnormal in adult patients with partial epilepsy, and the changes in sleep parameters are negatively correlated with P300 latency.
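    The negative correlations reported above (e.g. r = -0.413 between P300 latency and sleep efficiency) are Pearson coefficients. A minimal sketch on simulated data with an invented effect size, not the study's measurements:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Simulated cohort of 54: lower sleep efficiency goes with longer P300 latency.
rng = np.random.default_rng(3)
sleep_eff = rng.uniform(0.6, 0.95, 54)                   # proportion of time asleep
p300_lat = 420 - 100 * sleep_eff + 10 * rng.standard_normal(54)   # latency in ms

r = pearson_r(sleep_eff, p300_lat)
print(r < 0)   # True: negative, matching the sign of the reported correlation
```

In practice one would also report a p-value for r, which for n = 54 follows from a t-statistic with n - 2 degrees of freedom.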

  16. Communication, Listening, Cognitive and Speech Perception Skills in Children with Auditory Processing Disorder (APD) or Specific Language Impairment (SLI)

    Science.gov (United States)

    Ferguson, Melanie A.; Hall, Rebecca L.; Riley, Alison; Moore, David R.

    2011-01-01

    Purpose: Parental reports of communication, listening, and behavior in children receiving a clinical diagnosis of specific language impairment (SLI) or auditory processing disorder (APD) were compared with direct tests of intelligence, memory, language, phonology, literacy, and speech intelligibility. The primary aim was to identify whether there…

  17. Development of Attentional Control of Verbal Auditory Perception from Middle to Late Childhood: Comparisons to Healthy Aging

    Science.gov (United States)

    Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen

    2013-01-01

    Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…

  18. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  19. [Differential effects of attention deficit/hyperactivity disorder subtypes in event-related potentials].

    Science.gov (United States)

    Tamayo-Orrego, Lukas; Osorio Forero, Alejandro; Quintero Giraldo, Lina Paola; Parra Sánchez, José Hernán; Varela, Vilma; Restrepo, Francia

    2015-01-01

    To better understand the neurophysiological substrates of attention deficit/hyperactivity disorder (ADHD), a study of event-related potentials (ERPs) was performed in Colombian patients with inattentive and combined ADHD. A case-control, cross-sectional study was designed. The sample was composed of 180 subjects between 5 and 15 years of age (mean, 9.25±2.6), from local schools in Manizales. The sample was divided equally into ADHD and control groups, and the subjects were paired by age and gender. The diagnosis was made using the DSM-IV-TR criteria, the Conners and WISC-III tests, a psychiatric interview (MINIKID), and a medical evaluation. ERPs were recorded in a visual and auditory passive oddball paradigm. Latency and amplitude of the N100, N200, and P300 components for common and rare stimuli were used for statistical comparisons. ADHD subjects showed differences in N200 amplitude and P300 latency in the auditory task. The N200 amplitude was reduced in response to visual stimuli. ADHD subjects with combined symptoms showed a delayed P300 in response to auditory stimuli, whereas inattentive subjects exhibited differences in the amplitude of N100 and N200. Combined ADHD patients showed longer N100 latency and smaller N200-P300 amplitude compared to inattentive ADHD subjects. The results show differences in event-related potentials between combined and inattentive ADHD subjects. Copyright © 2014 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.

  20. [Anesthesia with flunitrazepam/fentanyl and isoflurane/fentanyl. Unconscious perception and mid-latency auditory evoked potentials].

    Science.gov (United States)

    Schwender, D; Kaiser, A; Klasing, S; Faber-Züllig, E; Golling, W; Pöppel, E; Peter, K

    1994-05-01

    There is a high incidence of intraoperative awareness during cardiac surgery. Mid-latency auditory evoked potentials (MLAEP) reflect the primary cortical processing of auditory stimuli. In the present study, we investigated MLAEP and explicit and implicit memory for information presented during cardiac anaesthesia. PATIENTS AND METHODS. Institutional approval and informed consent were obtained in 30 patients scheduled for elective cardiac surgery. Anaesthesia was induced in group I (n = 10) with flunitrazepam/fentanyl (0.01 mg/kg) and maintained with flunitrazepam/fentanyl (1.2 mg/h). The patients in group II (n = 10) received etomidate (0.25 mg/kg) and fentanyl (0.005 mg/kg) for induction and isoflurane (0.6-1.2 vol%)/fentanyl (1.2 mg/h) for maintenance of general anaesthesia. Group III (n = 10) served as a control, and its patients were anaesthetized as in group I or II. After sternotomy, an audiotape that included an implicit memory task was presented to the patients in groups I and II. The story of Robinson Crusoe was told, and it was suggested to the patients that they remember Robinson Crusoe when asked 3-5 days postoperatively what they associated with the word Friday. Auditory evoked potentials were recorded awake and during general anaesthesia, before and after the audiotape presentation, at the vertex (positive) and the mastoids on both sides (negative). Auditory clicks were presented binaurally at 70 dBnHL at a rate of 9.3 Hz. Using the electrodiagnostic system Pathfinder I (Nicolet), 1000 successive stimulus responses were averaged over a 100-ms poststimulus interval and analyzed off-line. Latencies of peaks V, Na, and Pa were measured. Peak V belongs to the brainstem-generated potentials and demonstrates that auditory stimuli were correctly transduced. Na and Pa are generated in the primary auditory cortex of the temporal lobe and are the electrophysiological correlate of the primary cortical processing of auditory stimuli. RESULTS.
None of the patients had an explicit memory

  1. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    Science.gov (United States)

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept.
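    Beta-band (13-30 Hz) power time courses like those analyzed above are commonly obtained by band-pass filtering and taking the squared envelope of the analytic signal. A minimal FFT-based sketch on a synthetic signal; it makes no claim to match the study's EEG pipeline.

```python
import numpy as np

def band_power_envelope(x, fs, band=(13, 30)):
    """Instantaneous band power: FFT band-pass, then squared analytic-signal envelope."""
    n = len(x)
    freqs = np.fft.fftfreq(n, 1 / fs)
    X = np.fft.fft(x)
    X[(np.abs(freqs) < band[0]) | (np.abs(freqs) > band[1])] = 0   # crude band-pass
    xb = np.real(np.fft.ifft(X))
    # Analytic signal via the FFT construction of the Hilbert transform.
    H = np.fft.fft(xb)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2
    h[n // 2] = 1            # n is even here
    analytic = np.fft.ifft(H * h)
    return np.abs(analytic) ** 2

fs = 256
t = np.arange(0, 2, 1 / fs)
# A 20 Hz (beta) burst confined to the second half of the epoch.
x = np.where(t >= 1, np.sin(2 * np.pi * 20 * t), 0.0)
power = band_power_envelope(x, fs)
print(power[:fs].mean() < power[fs:].mean())   # True: beta power is higher where the burst is
```

Suppression effects like those in the abstract are then quantified by comparing such envelopes across conditions relative to a pre-stimulus baseline, usually after FIR filtering and trial averaging rather than this single-shot FFT mask.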

  2. Gender modulates the development of theta event related oscillations in adolescents and young adults.

    Science.gov (United States)

    Chorlian, David B; Rangaswamy, Madhavi; Manz, Niklas; Kamarajan, Chella; Pandey, Ashwini K; Edenberg, Howard; Kuperman, Samuel; Porjesz, Bernice

    2015-10-01

    The developmental trajectories of theta band (4-7 Hz) event-related oscillations (EROs), a key neurophysiological constituent of the P3 response, were assessed in 2170 adolescents and young adults ages 12 to 25. The theta EROs occurring in the P3 response, important indicators of neurocognitive function, were elicited during the evaluation of task-relevant target stimuli in visual and auditory oddball tasks. These tasks call upon attentional and working memory resources. Large differences in developmental rates between males and females were found; scalp location and task modality (visual or auditory) differences within males and females were small compared to gender differences. Trajectories of interregional and intermodal correlations between ERO power values exhibited increases with age in both genders, but showed a divergence in development between auditory and visual systems during ages 16 to 21. These results are consistent with previous electrophysiological and imaging studies and provide additional temporal detail about the development of neurophysiological indices of cognitive activity. Since measures of the P3 response have been found to be useful endophenotypes for the study of a number of clinical and behavioral disorders, studies of their development in adolescents and young adults may illuminate neurophysiological factors contributing to the onset of these conditions.

  3. Silent speechreading in the absence of scanner noise: an event-related fMRI study.

    Science.gov (United States)

    MacSweeney, M; Amaro, E; Calvert, G A; Campbell, R; David, A S; McGuire, P; Williams, S C; Woll, B; Brammer, M J

    2000-06-05

    In a previous study we used functional magnetic resonance imaging (fMRI) to demonstrate activation in auditory cortex during silent speechreading. Since image acquisition during fMRI generates acoustic noise, this pattern of activation could have reflected an interaction between background scanner noise and the visual lip-read stimuli. In this study we employed an event-related fMRI design which allowed us to measure activation during speechreading in the absence of acoustic scanner noise. In the experimental condition, hearing subjects were required to speechread random numbers from a silent speaker. In the control condition subjects watched a static image of the same speaker with mouth closed and were required to subvocally count an intermittent visual cue. A single volume of images was collected to coincide with the estimated peak of the blood oxygen level dependent (BOLD) response to these stimuli across multiple baseline and experimental trials. Silent speechreading led to greater activation in lateral temporal cortex relative to the control condition. This indicates that activation of auditory areas during silent speechreading is not a function of acoustic scanner noise and confirms that silent speechreading engages similar regions of auditory cortex as listening to speech.

  4. Comparative study of three techniques of palatoplasty in patients with cleft of lip and palate via instrumental and auditory-perceptive evaluations

    Directory of Open Access Journals (Sweden)

    Paniagua, Lauren Medeiros

    2010-03-01

    Introduction: Palatoplasty is a surgical procedure aimed at reconstructing the soft and/or hard palate. Several techniques are currently available that seek greater lengthening of the soft palate toward the nasopharyngeal wall, contributing to proper function of the velopharyngeal sphincter; failure of its closure gives rise to speech disorders. Objective: To compare auditory-perceptual and instrumental evaluations in patients with cleft lip and palate operated on with three distinct palatoplasty techniques. Method: A prospective cross-sectional study of a group of patients with complete unilateral cleft lip and palate. All took part in a randomized clinical trial of distinct palatoplasty techniques performed by a single surgeon over approximately 8 years. At the time of surgery, the patients were divided into three distinct groups of 10 participants each. The present study evaluated 10 patients operated with the Furlow technique, 7 with the Veau-Wardill-Kilner+Braithwaite technique, and 9 with the Veau-Wardill-Kilner+Braithwaite+Zetaplasty technique, for a total sample of 26 individuals. All patients underwent auditory-perceptual evaluation through speech recordings; an instrumental evaluation was also performed through a video endoscopy exam. Results: Findings were satisfactory for all three techniques: the majority of individuals presented no hypernasality, compensatory articulatory disturbance, or audible nasal air emission. In addition, in the instrumental evaluation, the majority of individuals in all three palatoplasty groups presented appropriate velopharyngeal function. Conclusion: No statistically significant difference was found between the palatoplasty techniques in either evaluation.

  5. Musical training orchestrates coordinated neuroplasticity in auditory brainstem and cortex to counteract age-related declines in categorical vowel perception.

    Science.gov (United States)

    Bidelman, Gavin M; Alain, Claude

    2015-01-21

    Musicianship in early life is associated with pervasive changes in brain function and enhanced speech-language skills. Whether these neuroplastic benefits extend to older individuals more susceptible to cognitive decline, and for whom plasticity is weaker, has yet to be established. Here, we show that musical training offsets declines in auditory brain processing that accompany normal aging in humans, preserving robust speech recognition late into life. We recorded both brainstem and cortical neuroelectric responses in older adults with and without modest musical training as they classified speech sounds along an acoustic-phonetic continuum. Results reveal higher temporal precision in speech-evoked responses at multiple levels of the auditory system in older musicians who were also better at differentiating phonetic categories. Older musicians also showed a closer correspondence between neural activity and perceptual performance. This suggests that musicianship strengthens brain-behavior coupling in the aging auditory system. Last, "neurometric" functions derived from unsupervised classification of neural activity established that early cortical responses could accurately predict listeners' psychometric speech identification and, more critically, that neurometric profiles were organized more categorically in older musicians. We propose that musicianship offsets age-related declines in speech listening by refining the hierarchical interplay between subcortical/cortical auditory brain representations, allowing more behaviorally relevant information to be carried within the neural code, and supplying more faithful templates to the brain mechanisms subserving phonetic computations. Our findings imply that robust neuroplasticity conferred by musical training is not restricted by age and may serve as an effective means to bolster speech listening skills that decline across the lifespan. Copyright © 2015 the authors 0270-6474/15/351240-10$15.00/0.

  6. Using neuroimaging to understand the cortical mechanisms of auditory selective attention

    Science.gov (United States)

    Lee, Adrian KC; Larson, Eric; Maddox, Ross K; Shinn-Cunningham, Barbara G

    2013-01-01

    Over the last four decades, a range of different neuroimaging tools have been used to study human auditory attention, spanning from classic event-related potential studies using electroencephalography to modern multimodal imaging approaches (e.g., combining anatomical information based on magnetic resonance imaging with magneto- and electroencephalography). This review begins by exploring the different strengths and limitations inherent to different neuroimaging methods, and then outlines some common behavioral paradigms that have been adopted to study auditory attention. We argue that in order to design a neuroimaging experiment that produces interpretable, unambiguous results, the experimenter must not only have a deep appreciation of the imaging technique employed, but also a sophisticated understanding of perception and behavior. Only with the proper caveats in mind can one begin to infer how the cortex supports a human in solving the “cocktail party” problem. PMID:23850664

  7. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  8. Effects of Transdermal Scopolamine on Auditory-Monitoring Performance and Event-Related Potentials.

    Science.gov (United States)

    1992-12-21

    Frumin, M.J., Herekar, V.R., & Jarvik, M.E. (1976). Amnesic properties and actions of diazepam and scopolamine in man. Anesthesiology, 45, 406-412...patients with senile dementia of the Alzheimer type and in normal elderly subjects. Journal of Clinical and Experimental Neuropsychology, 13, 691-702. Meador

  9. Theta oscillations accompanying concurrent auditory stream segregation.

    Science.gov (United States)

    Tóth, Brigitta; Kocsis, Zsuzsanna; Urbán, Gábor; Winkler, István

    2016-08-01

    The ability to isolate a single sound source among concurrent sources is crucial for veridical auditory perception. The present study investigated the event-related oscillations evoked by complex tones that could be perceived as a single sound, and by tonal complexes with cues promoting the perception of two concurrent sounds through inharmonicity, onset asynchrony, and/or perceived source-location differences of the component tones. In separate task conditions, participants performed a visual change detection task (visual control), watched a silent movie (passive listening), or reported for each tone whether they perceived one or two concurrent sounds (active listening). The amplitude of theta oscillation was modulated by the presence vs. absence of the cues in two time windows: 60-350 ms / 6-8 Hz (early) and 350-450 ms / 4-8 Hz (late). The early response appeared in both the passive and the active listening conditions; it did not closely match the task performance; and it had a fronto-central scalp distribution. The late response was only elicited in the active listening condition; it closely matched the task performance; and it had a centro-parietal scalp distribution. The neural processes reflected by these responses are probably involved in the processing of concurrent sound segregation cues, in sound categorization, and in response preparation and monitoring. The current results are compatible with the notion that theta oscillations mediate some of the processes involved in concurrent sound segregation.
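
    Measurements like the ones in this abstract, oscillation amplitude within a fixed time/frequency window of an averaged response, can be approximated by band-pass filtering and taking the analytic envelope. The following is not the authors' analysis pipeline but a minimal numpy/scipy sketch; the function name and filter settings are our own assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_amplitude(erp, fs, band, window):
    """Mean analytic-envelope amplitude of `erp` within a frequency
    `band` (Hz) and a post-stimulus time `window` (s); stimulus at t=0."""
    # Zero-phase band-pass filter (second-order sections for stability)
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfiltfilt(sos, erp)))
    t = np.arange(len(erp)) / fs
    mask = (t >= window[0]) & (t < window[1])
    return envelope[mask].mean()
```

    For example, the early theta response above would correspond to `band_amplitude(erp, fs, (6, 8), (0.06, 0.35))` on a trial-averaged single-channel ERP.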

  10. ERP evidence that auditory-visual speech facilitates working memory in younger and older adults.

    Science.gov (United States)

    Frtusova, Jana B; Winneke, Axel H; Phillips, Natalie A

    2013-06-01

    Auditory-visual (AV) speech enhances speech perception and facilitates auditory processing, as measured by event-related brain potentials (ERPs). Considering a perspective of shared resources between perceptual and cognitive processes, facilitated speech perception may render more resources available for higher-order functions. This study examined whether AV speech facilitation leads to better working memory (WM) performance in 23 younger and 20 older adults. Participants completed an n-back task (0- to 3-back) under visual-only (V-only), auditory-only (A-only), and AV conditions. The results showed faster responses across all memory loads and improved accuracy in the most demanding conditions (2- and 3-back) during AV compared with unisensory conditions. Older adults benefited from the AV presentation to the same extent as younger adults. WM performance of older adults during the AV presentation did not differ from that of younger adults in the A-only condition, suggesting that an AV presentation can help to counteract some of the age-related WM decline. The ERPs showed a decrease in the auditory N1 amplitude during the AV compared with A-only presentation in older adults, suggesting that the facilitation of perceptual processing becomes especially beneficial with aging. Additionally, the N1 occurred earlier in the AV than in the A-only condition for both age groups. These AV-induced modulations of auditory processing correlated with improvement in certain behavioral and ERP measures of WM. These results support an integrated model between perception and cognition, and suggest that processing speech under AV conditions enhances WM performance of both younger and older adults. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  11. Mental workload measurement: Event-related potentials and ratings of workload and fatigue

    Science.gov (United States)

    Biferno, M. A.

    1985-01-01

    Event-related potentials were elicited when a digitized word representing a pilot's call-sign was presented. This auditory probe was presented during 27 workload conditions in a 3x3x3 design where the following variables were manipulated: short-term load, tracking task difficulty, and time-on-task. Ratings of workload and fatigue were obtained between each trial of a 2.5-hour test. The data of each subject were analyzed individually to determine whether significant correlations existed between subjective ratings and ERP component measures. Results indicated that a significant number of subjects had positive correlations between: (1) ratings of workload and P300 amplitude, (2) ratings of workload and N400 amplitude, and (3) ratings of fatigue and P300 amplitude. These data are the first to show correlations between ratings of workload or fatigue and ERP components thereby reinforcing their validity as measures of mental workload and fatigue.
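
    The per-subject analysis described above, correlating subjective ratings with ERP component measures across trials, can be illustrated with a short sketch. This is not the study's code; the function name and significance threshold are our own assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def rating_erp_correlation(ratings, erp_amplitudes, alpha=0.05):
    """Pearson correlation between one subject's subjective ratings and an
    ERP component amplitude (e.g., P300) measured on the same trials.
    Returns (r, p, significant-at-alpha)."""
    r, p = pearsonr(np.asarray(ratings), np.asarray(erp_amplitudes))
    return r, p, p < alpha
```

    Run once per subject, one can then count how many subjects show a significant positive correlation, mirroring the individual-subject analysis reported above.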

  12. Functional asymmetry and effective connectivity of the auditory system during speech perception is modulated by the place of articulation of the consonant- A 7T fMRI study

    Directory of Open Access Journals (Sweden)

    Karsten eSpecht

    2014-06-01

    Full Text Available To differentiate between stop-consonants, the auditory system has to detect subtle place of articulation (PoA and voice onset time (VOT differences between stop-consonants. How this differential processing is represented on the cortical level remains unclear. The present functional magnetic resonance (fMRI study takes advantage of the superior spatial resolution and high sensitivity of ultra high field 7T MRI. Subjects were attentively listening to consonant-vowel syllables with an alveolar or bilabial stop-consonant and either a short or long voice-onset time. The results showed an overall bilateral activation pattern in the posterior temporal lobe during the processing of the consonant-vowel syllables. This was however modulated strongest by place of articulation such that syllables with an alveolar stop-consonant showed stronger left lateralized activation. In addition, analysis of underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale onto the right auditory cortex during the processing of alveolar consonant-vowel syllables. Further, the connectivity result indicated also a directed information flow from the right to the left auditory cortex, and further to the left planum temporale for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right auditory cortex, with the left planum temporale as modulator. Furthermore, the degree of functional asymmetry is determined by the acoustic properties of the consonant-vowel syllables.

  13. Validation of the Emotiv EPOC® EEG gaming system for measuring research quality auditory ERPs

    OpenAIRE

    Badcock, Nicholas A.; Petroula Mousikou; Yatin Mahajan; Peter de Lissa; Johnson Thie; Genevieve McArthur

    2013-01-01

    Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants – particularly children. Recently, a commercial gaming electroencephalography (EEG) system ha...

  14. The effect of mastication on human cognitive processing: a study using event-related potentials.

    Science.gov (United States)

    Sakamoto, Kiwako; Nakata, Hiroki; Kakigi, Ryusuke

    2009-01-01

    The purpose of the present study was to clarify the effect of mastication on cognitive processing using reaction time (RT) and event-related potentials (ERPs). Each experiment comprised two conditions: Mastication (chewing gum) and Control (relaxing without chewing gum) in Experiment 1, and Jaw Movement (opening and closing the jaw) and Finger Tapping (tapping the right index finger) in Experiment 2. The subjects performed four sessions of an auditory oddball paradigm. RT and ERPs were recorded in these four sessions: Pre (before chewing) and Post 1, Post 2, and Post 3 (after chewing). In the Mastication condition, RT and the peak latencies of the P300 and N100 were significantly longer in Pre than in Post 2 or Post 3. By contrast, in Control, Jaw Movement, and Finger Tapping, they were almost identical among sessions or significantly shorter in Pre than in Post 2 or Post 3. Mastication influences cognitive processing time as reflected by RT and the latency of ERP waveforms. This is the first study investigating the effect of mastication on the central nervous system using event-related potentials.

  15. Lithium excessively enhances event related beta oscillations in patients with bipolar disorder.

    Science.gov (United States)

    Atagün, Murat İlhan; Güntekin, Bahar; Tan, Devran; Tülay, Emine Elif; Başar, Erol

    2015-01-01

    Previous resting-state electroencephalography studies have consistently shown that lithium enhances delta and theta oscillations in default mode networks. Cognitive task based networks differ from resting-state networks, and this is the first study to investigate effects of lithium on evoked and event-related beta oscillatory responses of patients with bipolar disorder. The study included 16 euthymic patients with bipolar disorder on lithium monotherapy, 22 euthymic medication-free patients with bipolar disorder, and 21 healthy participants. The maximum peak-to-peak amplitudes were measured for each subject's averaged beta responses (14-28 Hz) in the 0-300 ms time window. Simple auditory and oddball paradigms were presented to obtain evoked and event-related beta oscillatory responses. There were significant differences in beta oscillatory responses between groups (p=0.010). Repeated measures ANOVA revealed effects of location (p=0.007), laterality × group (p=0.043), and stimulus × location (p=0.013). Serum lithium levels were correlated with beta responses. The lithium group had a higher number of previous episodes, suggesting that patients in the lithium group were more severe cases than those in the medication-free group. Lithium stimulates neuroplastic cascades, and beta oscillations become prominent during neuroplastic changes. Excessively enhanced beta oscillatory responses in the lithium-treated patients may be indicative of excessive activation of the neuron groups of certain cognitive networks and dysfunctional GABAergic modulation during cognitive activity. Copyright © 2014 Elsevier B.V. All rights reserved.
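
    The amplitude measure used in this study, the maximum peak-to-peak amplitude of the band-filtered averaged response within a fixed post-stimulus window, can be sketched as follows. This is our own minimal illustration, not the study's analysis code; the function name and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def beta_peak_to_peak(avg_erp, fs, band=(14.0, 28.0), window=(0.0, 0.3)):
    """Maximum peak-to-peak amplitude of the band-filtered averaged ERP
    within `window` seconds post-stimulus (stimulus onset at t=0)."""
    # Zero-phase beta-band filter on the trial-averaged response
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, avg_erp)
    t = np.arange(len(avg_erp)) / fs
    seg = filtered[(t >= window[0]) & (t < window[1])]
    return seg.max() - seg.min()
```

    The same measurement is repeated per subject, electrode site, and stimulus type before entering the group-level ANOVA.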

  16. Visuo-tactile interactions in the congenitally deaf: A behavioral and event-related potential study

    Directory of Open Access Journals (Sweden)

    Nadine eHauthal

    2015-01-01

    Auditory deprivation is known to be accompanied by alterations in visual processing. Yet not much is known about tactile processing and the interplay of the intact sensory modalities in the deaf. We presented visual, tactile, and visuo-tactile stimuli to congenitally deaf and hearing individuals in a speeded detection task. Analyses of multisensory responses showed a redundant signals effect that was attributable to a coactivation mechanism in both groups, although the redundancy gain was smaller in the deaf. In hearing but not deaf participants, N200 latencies of somatosensory event-related potentials were modulated by simultaneous visual stimulation. In deaf but not hearing participants, however, there was a modulation of N200 latencies of visual event-related potentials due to simultaneous tactile stimulation. A comparison of unisensory responses between groups revealed larger N200 amplitudes for visual and shorter N200 latencies for tactile stimuli in the deaf. P300 amplitudes in response to both stimuli were larger in deaf participants. The differences in visual and tactile processing between deaf and hearing participants, however, were not reflected in behavior. The electroencephalography (EEG) results suggest an asymmetry in visuo-tactile interactions between deaf and hearing individuals. Visuo-tactile enhancements could neither be fully explained by perceptual deficiency nor by inverse effectiveness. Instead, we suggest that the results might be explained by a shift in the relative importance of touch and vision in deaf individuals.

  17. Event-related potentials to expectancy violation in musical context

    DEFF Research Database (Denmark)

    Tervaniemi, M; Huotilainen, M; Brattico, E;

    2003-01-01

    The present study addressed neuronal processing of musical tones that violate expectancies primed by auditorily and visually presented musical material. Event-related brain potentials (ERPs) were recorded while the musically trained subjects were presented with short melodies composed for the exp...

  18. Study of Auditory Perception Tasks Based on EEG Driving

    Institute of Scientific and Technical Information of China (English)

    梁静坤; 徐桂芝; 李明钊; 于洪利

    2012-01-01

    The aim of this work was to explore EEG features of motor imagery used for auxiliary vehicle control under auditory traffic-information cues. In a simulated whistle-and-brake environment, subjects controlled the forward motion and stopping of a vehicle through left- and right-hand motor imagery. Features of the recorded EEG were extracted and classified using common spatial patterns (CSP) and a linear regression algorithm, reaching a recognition rate of 96.67%. Results: on electrodes C3 and C4, associated with left- and right-hand motor imagery, right-hand imagery produced higher EEG voltage amplitudes than left-hand imagery, with a maximum amplitude difference of 120 μV. Conclusion: compared with traditional visual-perception tasks, motor imagery under auditory perception tasks offers two advantages: (1) the larger amplitude difference between the two imagery tasks facilitates signal extraction, and (2) EEG acquisition can replace the traditional two-electrode montage with a single electrode (C3 or C4).
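
    The feature-extraction step named in this abstract, common spatial patterns (CSP), is commonly implemented as a generalized eigendecomposition of the two class covariance matrices. The sketch below is a generic textbook CSP, not the authors' code; all names, shapes, and defaults are our own assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP spatial filters: maximize variance for one class while
    minimizing it for the other. trials_*: (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        # Trace-normalized spatial covariance, averaged over trials
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized symmetric eigenproblem ca w = lambda (ca + cb) w;
    # eigenvalues are returned in ascending order.
    _, evecs = eigh(ca, ca + cb)
    picks = list(range(n_pairs)) + list(range(-n_pairs, 0))
    return evecs[:, picks].T  # (2 * n_pairs, n_channels)

def csp_features(trials, filters):
    """Log of normalized variance of each spatially filtered trial."""
    proj = np.einsum("fc,ncs->nfs", filters, trials)
    var = proj.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))
```

    The resulting log-variance features would then be fed to a linear classifier (linear regression in the abstract) to discriminate left- from right-hand imagery.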

  19. Context-dependent encoding in the auditory brainstem subserves enhanced speech-in-noise perception in musicians.

    Science.gov (United States)

    Parbery-Clark, A; Strait, D L; Kraus, N

    2011-10-01

    Musical training strengthens speech perception in the presence of background noise. Given that the ability to make use of speech sound regularities, such as pitch, underlies perceptual acuity in challenging listening environments, we asked whether musicians' enhanced speech-in-noise perception is facilitated by increased neural sensitivity to acoustic regularities. To this aim we examined subcortical encoding of the same speech syllable presented in predictable and variable conditions and speech-in-noise perception in 31 musicians and nonmusicians. We anticipated that musicians would demonstrate greater neural enhancement of speech presented in the predictable compared to the variable condition than nonmusicians. Accordingly, musicians demonstrated more robust neural encoding of the fundamental frequency (i.e., pitch) of speech presented in the predictable relative to the variable condition than nonmusicians. The degree of neural enhancement observed to predictable speech correlated with subjects' musical practice histories as well as with their speech-in-noise perceptual abilities. Taken together, our findings suggest that subcortical sensitivity to speech regularities is shaped by musical training and may contribute to musicians' enhanced speech-in-noise perception. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Event-related potentials and cognitive performance in multiple sclerosis patients with fatigue.

    Science.gov (United States)

    Pokryszko-Dragan, Anna; Zagrajek, Mieszko; Slotwinski, Krzysztof; Bilinska, Malgorzata; Gruszka, Ewa; Podemski, Ryszard

    2016-09-01

    The aim of this study was to evaluate event-related potentials (ERP) and cognition in multiple sclerosis (MS) patients with regard to fatigue and disease-related variables. The study comprised 86 MS patients and 40 controls. Fatigue was assessed using the Fatigue Severity Scale (FSS/FSS-5) and the Modified Fatigue Impact Scale (MFIS/MFISmod). N200 and P300 components of auditory ERP were analyzed. Cognition was evaluated by means of the Brief Repeatable Battery of Neuropsychological Tests (BRBNT). The results of ERP and BRBNT were compared between non-fatigued, moderately and severely fatigued MS patients and controls. P300 latency was significantly longer in the whole MS group and in the fatigued patients than in the controls. A positive correlation was found between P300 latency and MFIS/MFISmod results, independently of age and MS-related variables. The fatigued patients scored lower than non-fatigued ones in tests evaluating memory, visuomotor abilities and attention. Results of these tests correlated significantly with fatigue measures, independently of MS-related variables. Fatigue in MS patients showed significant relationships with impairment within the memory and attention domains. Parameters of auditory ERP, as electrophysiological biomarkers of cognitive performance, were not independently linked to fatigue.
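
    P300 latency, the measure this study relates to fatigue, is commonly operationalized as the time of the largest positive deflection of the averaged ERP within a post-stimulus search window. The sketch below illustrates that convention; it is not the study's code, and the function name and default window are our own assumptions.

```python
import numpy as np

def p300_latency(avg_erp, fs, window=(0.25, 0.5)):
    """Latency (s) of the largest positive peak of the trial-averaged ERP
    within `window` seconds post-stimulus (stimulus onset at t=0)."""
    t = np.arange(len(avg_erp)) / fs
    mask = (t >= window[0]) & (t < window[1])
    seg = avg_erp[mask]
    return t[mask][np.argmax(seg)]
```

    The same windowed-peak approach, with an earlier window and a negative peak, yields N200 latency.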

  1. An Event-Related Potential Study of Cross-modal Morphological and Phonological Priming.

    Science.gov (United States)

    Justus, Timothy; Yang, Jennifer; Larsen, Jary; de Mornay Davies, Paul; Swick, Diane

    2009-11-01

    The current work investigated whether differences in phonological overlap between the past- and present-tense forms of regular and irregular verbs can account for the graded neurophysiological effects of verb regularity observed in past-tense priming designs. Event-related potentials were recorded from sixteen healthy participants who performed a lexical-decision task in which past-tense primes immediately preceded present-tense targets. To minimize intra-modal phonological priming effects, cross-modal presentation between auditory primes and visual targets was employed, and results were compared to a companion intra-modal auditory study (Justus, Larsen, de Mornay Davies, & Swick, 2008). For both regular and irregular verbs, faster response times and reduced N400 components were observed for present-tense forms when primed by the corresponding past-tense forms. Although behavioral facilitation was observed with a pseudopast phonological control condition, neither this condition nor an orthographic-phonological control produced significant N400 priming effects. Instead, these two types of priming were associated with a post-lexical anterior negativity (PLAN). Results are discussed with regard to dual- and single-system theories of inflectional morphology, as well as intra- and cross-modal prelexical priming.

  2. Investigation of brain electrophysiological properties among heroin addicts: Quantitative EEG and event-related potentials.

    Science.gov (United States)

    Motlagh, Farid; Ibrahim, Fatimah; Rashid, Rusdi; Seghatoleslam, Tahereh; Habil, Hussain

    2017-08-01

    This study introduces a new comprehensive paradigm to evaluate brain electrophysiological properties among addicts. Electroencephalographic spectral power as well as amplitudes and latencies of mismatch negativity (MMN), P300, and P600 components were evaluated among 19 male heroin addicts and 19 healthy nonsmoker subjects using a paradigm consisting of three subparadigms, namely (1) digit span Wechsler test, (2) auditory oddball, and (3) visual cue-reactivity oddball paradigms. Task 1 provided auditory P300 and P600 in association with working memory. Task 2 provided auditory P300 as well as small and large deviant MMN event-related potentials (ERPs). Finally, task 3 provided visual cue-reactivity P300. Results show that beta power was higher among heroin addicts while delta, theta, and alpha powers were decreased compared with healthy subjects. ERP analysis confirmed the decline of brain-evoked potential amplitudes when compared with healthy subjects, thus indicating a broad neurobiological vulnerability of preattentive and attentional processing including attentional deficits and compromise of discrimination abilities. The prolonged latency of ERPs reflects poor cognitive capacity in the engagement of attention and memory resources. On the other hand, an increase of attention towards the heroin-related stimuli could be concluded from the increase of P300 in the cue-reactivity condition among heroin addicts. Findings suggest that applying this paradigm in addiction studies enables comprehensive evaluation of neuroelectrophysiological activity among addicts, which can promote a better understanding of drugs' effects on the brain as well as define new neuroelectrophysiological characteristics of addiction properties. © 2016 Wiley Periodicals, Inc.

  3. Cognitive deficits following exposure to pneumococcal meningitis: an event-related potential study

    Directory of Open Access Journals (Sweden)

    Kihara Michael

    2012-03-01

    Background: Pneumococcal meningitis (PM) is a severe and life-threatening disease that is associated with cognitive impairment including learning difficulties, cognitive slowness, short-term memory deficits and poor academic performance. There are limited data on cognitive outcomes following exposure to PM from Africa, mainly due to a lack of culturally appropriate tools. We report cognitive processes of exposed children as measured by auditory and visual event-related potentials. Methods: Sixty-five children (32 male; mean age 8.4 years, SD 3.0) aged 4-15 years with a history of PM and an age-matched control group of 93 children (46 male; mean age 8.4 years, SD 2.7) were recruited from a well-demarcated study area in Kilifi. In the present study, both baseline-to-peak and peak-to-peak amplitude differences are reported. Results: Children with a history of pneumococcal meningitis had significantly longer auditory P1 and P3a latencies and smaller P1 amplitudes compared to unexposed children. In the visual paradigm, children with PM seemingly lacked a novelty P3a component around 350 ms, where control children had a maximum, and showed a lack of stimulus differentiation at Nc. Further, children exposed to PM had smaller peak-to-peak amplitudes (N2-P1) compared to unexposed children. Conclusion: The results suggest that children with a history of PM process novelty differently than do unexposed children, with slower latencies and reduced or absent components. This pattern suggests poorer auditory attention and/or cognitive slowness and poorer visual attention orienting, possibly due to disruption of the functions of the lateral prefrontal and superior temporal cortices. ERPs may be useful for assessment of the development of perceptual-cognitive functions post brain-injury in African children by providing an alternate way of assessing cognitive development in patient groups for whom more typical standardized neuropsychological

  4. P300 Event-Related Potentials in Children with Dyslexia

    Science.gov (United States)

    Papagiannopoulou, Eleni A.; Lagopoulos, Jim

    2017-01-01

    To elucidate the timing and the nature of neural disturbances in dyslexia and to further understand the topographical distribution of these, we examined entire brain regions employing the non-invasive auditory oddball P300 paradigm in children with dyslexia and neurotypical controls. Our findings revealed abnormalities for the dyslexia group in…

  5. Event-related potential signatures of perceived and imagined emotional and food real-life photos.

    Science.gov (United States)

    Marmolejo-Ramos, Fernando; Hellemans, Kim; Comeau, Amy; Heenan, Adam; Faulkner, Andrew; Abizaid, Alfonso; D'Angiulli, Amedeo

    2015-06-01

    Although food and affective pictures share similar emotional and motivational characteristics, the relationship between the neuronal responses to these stimuli is unclear. In particular, it is not known whether perceiving and imagining food and affective stimuli elicit similar event-related potential (ERP) patterns. In this study, two ERP correlates, the early posterior negativity (EPN) and the late positive potential (LPP), were investigated for perceived and imagined emotional and food photographs. Thirteen healthy volunteers were exposed to a set of food photos, as well as unpleasant, pleasant, and neutral photos from the International Affective Picture System. In each trial, participants were first asked to view a photo (perception condition), and then to create a visual mental image of it and to rate its vividness (imagery condition). The results showed that during perception, brain regions corresponding to sensorimotor and parietal motivational (defensive and appetitive) systems were activated to different extents, producing a graded pattern of EPN and LPP responses specific to the photo content - more prominent for unpleasant than pleasant and food content. Also, an EPN signature occurred in both conditions for unpleasant content, suggesting that, compared to food or pleasant content, unpleasant content may be attended to more intensely during perception and may be represented more distinctly during imagery. Finally, compared to LPP activation during perception, as well as imagery and perception of all other content, LPP activation was significantly reduced during imagery of unpleasant photos, suggesting inhibition of unwanted memories. Results are framed within a neurocognitive working model of embodied emotions.

  6. Bilingualism and increased attention to speech: Evidence from event-related potentials.

    Science.gov (United States)

    Kuipers, Jan Rouke; Thierry, Guillaume

    2015-10-01

    A number of studies have shown that from an early age, bilinguals outperform their monolingual peers on executive control tasks. We previously found that bilingual children and adults also display greater attention to unexpected language switches within speech. Here, we investigated the effect of a bilingual upbringing on speech perception in one language. We recorded monolingual and bilingual toddlers' event-related potentials (ERPs) to spoken words preceded by pictures. Words matching the picture prime elicited an early frontal positivity in bilingual participants only, whereas later ERP amplitudes associated with semantic processing did not differ between groups. These results add to the growing body of evidence that bilingualism increases overall attention during speech perception whilst semantic integration is unaffected.

  7. Processing of emotional faces in congenital amusia: An emotional music priming event-related potential study.

    Science.gov (United States)

    Zhishuai, Jin; Hong, Liu; Daxing, Wu; Pin, Zhang; Xuejing, Lu

    2017-01-01

    Congenital amusia is characterized by lifelong impairments in music perception and processing. It is unclear whether pitch detection deficits impact amusic individuals' perception of musical emotion. In the current work, 19 amusics and 21 healthy controls were subjected to electroencephalography (EEG) while being exposed to music excerpts and emotional faces. We assessed each individual's ability to discriminate positive- and negative-valenced emotional faces and analyzed electrophysiological indices, in the form of event-related potentials (ERPs) recorded at 32 sites, following exposure to emotionally positive or negative music excerpts. We observed smaller N2 amplitudes in response to facial expressions in the amusia group than in the control group, suggesting that amusics were less affected by the musical stimuli. The late-positive component (LPC) in amusics was similar to that in controls. Our results suggest that the neurocognitive deficit characteristic of congenital amusia is fundamentally an impairment in musical information processing rather than an impairment in emotional processing.

  8. Measurement of event-related potentials and placebo

    Directory of Open Access Journals (Sweden)

    Sovilj Platon

    2014-01-01

    ERP is the common abbreviation for event-related brain potentials, which are measured and used in clinical as well as research practice. Contemporary studies of the placebo effect are often based on functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and event-related potentials (ERPs). This paper considers an ERP instrumentation system used in experimental research on the placebo effect. The instrumentation system can be divided into four modules: electrodes and cables, a conditioning module, a digital measurement module, and a PC module for stimulation, presentation, acquisition, and data processing. The experimental oddball paradigm is supported by the software of the instrumentation. [Project of the Ministry of Science of the Republic of Serbia, no. TR32019, and the Provincial Secretariat for Science and Technological Development of the Autonomous Province of Vojvodina (Republic of Serbia) under research grant no. 114-451-2723]

  9. How is sentence processing affected by external semantic and syntactic information? Evidence from event-related potentials.

    Directory of Open Access Journals (Sweden)

    Annekathrin Schacht

    BACKGROUND: A crucial question for understanding sentence comprehension is the openness of syntactic and semantic processes to other sources of information. Using event-related potentials in a dual-task paradigm, we had previously found that sentence processing takes into consideration task-relevant sentence-external semantic, but not syntactic, information. In that study, internal and external information both varied within the same linguistic domain (either semantic or syntactic). Here we investigated whether across-domain sentence-external information would impact within-sentence processing. METHODOLOGY: In one condition, adjectives within visually presented sentences of the structure [Det]-[Noun]-[Adjective]-[Verb] were semantically correct or incorrect. Simultaneously with the noun, auditory adjectives were presented that morphosyntactically matched or mismatched the visual adjectives with respect to gender. FINDINGS: As expected, semantic violations within the sentence elicited N400 and P600 components in the ERP. However, these components were not modulated by syntactic matching of the sentence-external auditory adjective. In a second condition, syntactic within-sentence correctness variations were combined with semantic matching variations between the auditory and the visual adjective. Here, syntactic within-sentence violations elicited a LAN and a P600 that did not interact with semantic matching of the auditory adjective. However, semantic mismatching of the latter elicited a frontocentral positivity, presumably related to an increase in discourse-level complexity. CONCLUSION: The current findings underscore the open versus algorithmic nature of semantic and syntactic processing, respectively, during sentence comprehension.

  10. Activation of Heschl's gyrus during auditory hallucinations

    National Research Council Canada - National Science Library

    Dierks, T; Linden, D E; Jandl, M; Formisano, E; Goebel, R; Lanfermann, H; Singer, W

    1999-01-01

    Apart from being a common feature of mental illness, auditory hallucinations provide an intriguing model for the study of internally generated sensory perceptions that are attributed to external sources...

  11. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  12. Atypical central auditory speech-sound discrimination in children who stutter as indexed by the mismatch negativity.

    Science.gov (United States)

    Jansson-Verkasalo, Eira; Eggers, Kurt; Järvenpää, Anu; Suominen, Kalervo; Van den Bergh, Bea; De Nil, Luc; Kujala, Teija

    2014-09-01

    Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are affected in children who stutter (CWS). Participants were 10 CWS and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) of CWS for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], critical in speech perception and language development, were compared to those of TDC. There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the mismatch negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC, all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with the results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in the group of CWS. Findings were linked to existing conceptualizations of stuttering etiology. The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, and (c) to describe the findings on central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. An Efficient Method for Mining Event-Related Potential Patterns

    Directory of Open Access Journals (Sweden)

    Seyed Aliakbar Mousavi

    2011-11-01

    In the present paper, we propose a Neuroelectromagnetic Ontology Framework (NOF) for mining event-related potential (ERP) patterns, as well as the mining process itself. The aim of this research is to develop an infrastructure for mining, analyzing, and sharing ERP domain ontologies. The outcome of this research is a neuroelectromagnetic knowledge-based system. The framework has 5 stages: 1) data pre-processing and preparation; 2) data mining application; 3) rule comparison and evaluation; 4) association rule post-processing; 5) domain ontologies. In the 5th stage, a new set of hidden rules can be discovered based on comparing the association rules with domain ontologies and expert rules.

  14. Event-related potential correlates of emotional orthographic priming.

    Science.gov (United States)

    Faïta-Aïnseba, Frédérique; Gobin, Pamela; Bouaffre, Sarah; Mathey, Stéphanie

    2012-09-12

    Event-related potentials were used to explore the underlying mechanisms of masked orthographic priming and to determine whether the emotional valence of a word neighbor prime affects target processing in a lexical decision task. The results showed that the N200 and N400 amplitudes were modified by orthographic priming, which also varied with the emotional valence of the neighbors. These findings provide new evidence that the N400 component is sensitive to orthographic priming and further suggest that the affective content of the neighbor influences target word processing.

  15. Sample Selected Averaging Method for Analyzing the Event Related Potential

    Science.gov (United States)

    Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki

    The event-related potential (ERP) is often measured through the oddball task, in which subjects are given a “rare stimulus” and a “frequent stimulus”, and the measured ERPs are analyzed by the averaging technique. In the results, the amplitude of the ERP P300 becomes large when the “rare stimulus” is given. However, some measured trials do not contain the genuine features of the ERP, so it is necessary to reject unsuitable trials before applying the averaging technique. In this paper, we propose a rejection method for unsuitable measured ERPs prior to averaging. Moreover, we combine the proposed method with Woody's adaptive filter method.
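    The reject-then-average idea described in this abstract can be sketched in a few lines of Python. This is a generic illustration, not the authors' method: the peak-to-peak rejection threshold, the simulated P300-like bump, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def average_with_rejection(trials, reject_uv=100.0):
    """Average ERP trials after rejecting any trial whose peak-to-peak
    amplitude exceeds reject_uv (a common artifact criterion)."""
    trials = np.asarray(trials, dtype=float)
    ptp = trials.max(axis=1) - trials.min(axis=1)   # per-trial peak-to-peak
    kept = trials[ptp <= reject_uv]
    if len(kept) == 0:
        raise ValueError("all trials rejected")
    return kept.mean(axis=0), len(kept)

# Toy data: three noisy trials containing a P300-like bump, plus one
# trial contaminated by a large spike artifact.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.8, 200)                      # 0-800 ms epoch
bump = 5.0 * np.exp(-((t - 0.3) ** 2) / 0.005)      # "P300" at ~300 ms
trials = [bump + rng.normal(0.0, 1.0, 200) for _ in range(3)]
artifact = bump.copy()
artifact[50] += 300.0                               # gross spike artifact
trials.append(artifact)

erp, n_kept = average_with_rejection(np.stack(trials))  # n_kept == 3
```

    A Woody-style adaptive filter, as mentioned in the abstract, would additionally realign each kept trial to the running average before the final mean; the sketch stops at simple rejection.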

  16. An Efficient Method for Mining Event-Related Potential Patterns

    CERN Document Server

    Mousavi, Seyed Aliakbar; Mohamed, Hasimah Hj; Alomari, Saleh Ali

    2012-01-01

    In the present paper, we propose a Neuroelectromagnetic Ontology Framework (NOF) for mining Event-related Potential (ERP) patterns, as well as the mining process itself. The aim of this research is to develop an infrastructure for mining, analysis and sharing of the ERP domain ontologies. The outcome of this research is a Neuroelectromagnetic knowledge-based system. The framework has 5 stages: 1) Data pre-processing and preparation; 2) Data mining application; 3) Rule Comparison and Evaluation; 4) Association rules Post-processing; 5) Domain Ontologies. In the 5th stage, a new set of hidden rules can be discovered based on comparing the association rules with domain ontologies and expert rules.

  17. Differential Allocation of Attention During Speech Perception in Monolingual and Bilingual Listeners.

    Science.gov (United States)

    Astheimer, Lori B; Berkes, Matthias; Bialystok, Ellen

    Attention is required during speech perception to focus processing resources on critical information. Previous research has shown that bilingualism modifies attentional processing in nonverbal domains. The current study used event-related potentials (ERPs) to determine whether bilingualism also modifies auditory attention during speech perception. We measured attention to word onsets in spoken English for monolinguals and Chinese-English bilinguals. Auditory probes were inserted at four times in a continuous narrative: concurrent with word onset, 100 ms before or after onset, and at random control times. Greater attention was indexed by an increase in the amplitude of the early negativity (N1). Among monolinguals, probes presented after word onsets elicited a larger N1 than control probes, replicating previous studies. For bilinguals, there was no N1 difference for probes at different times around word onsets, indicating less specificity in allocation of attention. These results suggest that bilingualism shapes attentional strategies during English speech comprehension.

  18. Differential Allocation of Attention During Speech Perception in Monolingual and Bilingual Listeners

    Science.gov (United States)

    Astheimer, Lori B.; Berkes, Matthias; Bialystok, Ellen

    2016-01-01

    Attention is required during speech perception to focus processing resources on critical information. Previous research has shown that bilingualism modifies attentional processing in nonverbal domains. The current study used event-related potentials (ERPs) to determine whether bilingualism also modifies auditory attention during speech perception. We measured attention to word onsets in spoken English for monolinguals and Chinese-English bilinguals. Auditory probes were inserted at four times in a continuous narrative: concurrent with word onset, 100 ms before or after onset, and at random control times. Greater attention was indexed by an increase in the amplitude of the early negativity (N1). Among monolinguals, probes presented after word onsets elicited a larger N1 than control probes, replicating previous studies. For bilinguals, there was no N1 difference for probes at different times around word onsets, indicating less specificity in allocation of attention. These results suggest that bilingualism shapes attentional strategies during English speech comprehension. PMID:27110579

  19. Preferred pre-stimulus EEG states affect cognitive event-related potentials.

    Science.gov (United States)

    Barry, Robert J

    2013-01-01

    Current views of the genesis of the event-related potential (ERP) emphasize the contribution of ongoing oscillations - the ongoing electroencephalogram (EEG) is recognized as much more than "background noise" to be removed by response averaging to find the ERP. Early work from Başar's group noted that repetitive stimuli led to selective phase re-ordering of activity in the delta and alpha bands, such that enhanced brain negativity occurred at the time of the regular stimulus. Other work related negativity in alpha activity at stimulus onset to improved reaction times and ERP enhancements. These findings led us to begin a program of brain dynamics studies exploring pre-stimulus EEG phase states, their preferential occurrence in paradigms with regularly presented stimuli, and their relation to ERP outcomes. In particular, with very narrow EEG bands, we have repeatedly found that certain phase states preferentially occur at stimulus onset, implying ongoing phase re-ordering driven by stimulus occurrence. Effects are weakened with slightly varying inter-stimulus intervals, but still occur reliably. Further, these preferential phase states are functionally effective in relation to the ERP correlates of efficient stimulus processing. Preferential phase occurrence and their effects were originally reported in auditory oddball tasks, using narrow EEG bands derived by digital filtering. A recent study is presented illustrating generalization of the phenomenon in the auditory Go/NoGo task, using narrow bands derived by FFT techniques. Our current work is extending this research in normal children (to provide a comparative context for research in children with AD/HD), and well-functioning elderly (to provide a context for future work in relation to Alzheimer's disease).

  20. Multivariate evaluation of brain function by measuring regional cerebral blood flow and event-related potentials

    Energy Technology Data Exchange (ETDEWEB)

    Koga, Yoshihiko; Mochida, Masahiko; Shutara, Yoshikazu; Nakagawa, Kazumi [Kyorin Univ., Mitaka, Tokyo (Japan). School of Medicine; Nagata, Ken

    1998-07-01

    To measure the effect of events on human cognitive function, the effects of odors were evaluated by measuring regional cerebral blood flow (rCBF) and the P300 during an auditory oddball task. PET showed an increase in rCBF in the right hemisphere of the brain in response to coffee aroma. rCBF was measured by PET in 9 right-handed healthy adult men, and the P300 was measured as an event-related potential (ERP) in 20 right-handed healthy adults of each sex. The ERPs showed a difference in P300 amplitude between men and women, with a tendency, for odors other than lavender oil, for women to have higher P300 amplitudes than men. These results suggest the presence of effects on cognitive function mediated by emotional processes. Next, the relationship between rCBF and ERPs was evaluated. The subjects were 9 right-handed healthy adults (mean age 25.6 ± 3.4 years). rCBF (by PET) and P300 amplitude (by ERP) were recorded simultaneously during an auditory oddball task using the tone-burst method (low-probability target stimuli at 2 kHz and high-probability non-target stimuli at 1 kHz). Among the 24 regions of interest (ROIs) on both sides, the rCBF value was highest in the transverse gyrus of Heschl and lowest in the piriform cortex. P300 peak latency differed little among ROIs. The waveforms at Cz and Pz were similar, and the average amplitude was highest at Pz. High correlations between P300 amplitude and rCBF were found in the right piriform cortex (Fz) and in the right (Fz, Cz) and left (Cz, Pz) transverse gyrus of Heschl. (K.H.)

  1. Fingers Phrase Music Differently: Trial-to-Trial Variability in Piano Scale Playing and Auditory Perception Reveal Motor Chunking.

    Science.gov (United States)

    van Vugt, Floris Tijmen; Jabusch, Hans-Christian; Altenmüller, Eckart

    2012-01-01

    We investigated how musical phrasing and motor sequencing interact to yield timing patterns in conservatory students' playing of piano scales. We propose a novel analysis method that compares the measured note onsets to an objectively regular scale fitted to the data. Subsequently, we segment the timing variability into (i) systematic deviations from objective evenness, which are perhaps residuals of expressive timing or of perceptual biases, and (ii) non-systematic deviations, which can be interpreted as motor execution errors, perhaps due to noise in the nervous system. The former, systematic deviations reveal that the two-octave scales are played as a single musical phrase. The latter, trial-to-trial variabilities reveal that pianists' timing was less consistent at the boundaries between the octaves, providing evidence that the octave is represented as a single motor sequence. These effects cannot be explained by low-level properties of the motor task, such as the thumb passage, and also did not show up in simulated scales with temporal jitter. Intriguingly, this instability in motor production around the octave boundary is mirrored by an impairment in the detection of timing deviations at those positions, suggesting that chunks overlap between perception and action. We conclude that the octave boundary instability in the scale-playing motor program provides behavioral evidence that our brain chunks musical sequences into octave units that do not coincide with musical phrases. Our results indicate that trial-to-trial variability is a novel and meaningful indicator of this chunking. The procedure can readily be extended to a variety of tasks to help understand how movements are divided into units and what processing occurs at their boundaries.
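    The decomposition described in this abstract, fitting an objectively regular scale to the onsets and splitting the residuals into a systematic part and a trial-to-trial part, can be approximated as follows. This is a minimal sketch of the general idea under a simple least-squares assumption; the function name and details are illustrative, not the authors' code.

```python
import numpy as np

def decompose_timing(onsets):
    """Split note-onset timing into per-note systematic deviation from a
    fitted regular (isochronous) scale and per-note trial-to-trial
    variability.

    onsets : (n_trials, n_notes) array of onset times in seconds
    """
    onsets = np.asarray(onsets, dtype=float)
    n_trials, n_notes = onsets.shape
    idx = np.arange(n_notes)
    residuals = np.empty_like(onsets)
    for i in range(n_trials):
        # Least-squares fit of a perfectly even scale: t = a*idx + b
        a, b = np.polyfit(idx, onsets[i], 1)
        residuals[i] = onsets[i] - (a * idx + b)
    systematic = residuals.mean(axis=0)   # e.g. residual expressive timing
    variability = residuals.std(axis=0)   # e.g. motor execution noise
    return systematic, variability

# Example: five perfectly even trials of an eight-note scale; both
# components should then be (numerically) zero at every note.
onsets = np.outer(np.ones(5), np.arange(8) * 0.25)
sys_dev, ttv = decompose_timing(onsets)
```

    On real data, a peak in `ttv` at the octave-boundary notes would correspond to the chunk-boundary instability the abstract reports.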

  2. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    Science.gov (United States)

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  3. Event-related potentials dissociate perceptual from response-related age effects in visual search.

    Science.gov (United States)

    Wiegand, Iris; Finke, Kathrin; Müller, Hermann J; Töllner, Thomas

    2013-03-01

    Attentional decline plays a major role in cognitive changes with aging. However, which specific aspects of attention contribute to this decline is as yet little understood. To identify the contributions of various potential sources of age decrements in visual search, we combined response time measures with lateralized event-related potentials of younger and older adults performing a compound-search task, in which the target-defining dimension of a pop-out target (color/shape) and the response-critical target feature (vertical/horizontal stripes) varied independently across trials. Slower responses in older participants were associated with age differences in all analyzed event-related potentials from perception to response, indicating that behavioral slowing originates from multiple stages within the information-processing stream. Furthermore, analyses of carry-over effects from one trial to the next revealed repetition facilitation of the target-defining dimension and of the motor response (originating from preattentive perceptual and motor execution stages, respectively) to be independent of age. Critically, we demonstrated specific age deficits at intermediate processing stages when intertrial changes required more executively controlled processes, such as flexible stimulus-response (re-)mapping across trials.

  4. Facing a real person: an event-related potential study.

    Science.gov (United States)

    Pönkänen, Laura M; Hietanen, Jari K; Peltola, Mikko J; Kauppinen, Pasi K; Haapalainen, Antti; Leppänen, Jukka M

    2008-03-05

    Although faces are typically perceived in the context of human interaction, face processing is commonly studied by displaying faces on a computer screen. This event-related potential study examined whether the processing of faces differs depending on whether participants view faces live or on a computer screen. In both conditions, the participants were shown a real face, a dummy face, and a control object. The N170 and the early posterior negativity discriminated between the faces and the control object in both conditions. Interestingly, the early posterior negativity differentiated between the real face and the dummy face only in the live condition. The results indicate that a live face, as a potentially interacting stimulus, is processed differently from an inanimate face even at the early processing stages.

  5. Syntactic processing with aging: an event-related potential study.

    Science.gov (United States)

    Kemmer, Laura; Coulson, Seana; De Ochoa, Esmeralda; Kutas, Marta

    2004-05-01

    To assess age-related changes in simple syntactic processing with normal aging, event-related brain potentials (ERPs) elicited by grammatical number violations as individuals read sentences for comprehension were analyzed. Violations were found to elicit a P600 of equal amplitude and latency regardless of an individual's age. Instead, advancing age was associated with a change in the scalp distribution of the P600 effect, being less asymmetric and more frontal (though still with a parietal maximum) in older than younger adults. Our results thus show that the brain's response to simple syntactic violations, unlike those reported for simple binary categorizations and simple semantic violations, is neither slowed nor diminished in amplitude by age. At the same time, the brain's processing of these grammatical number violations did engage at least somewhat different brain regions as a function of age, suggesting a qualitative change rather than any simple quantitative change in speed of processing.

  6. Event-Related Potentials and Emotion Processing in Child Psychopathology

    Directory of Open Access Journals (Sweden)

    Georgia eChronaki

    2016-04-01

    In recent years there has been increasing interest in the neural mechanisms underlying altered emotional processes in children and adolescents with psychopathology. This review provides a brief overview of the most up-to-date findings in the field of event-related potentials (ERPs) to facial and vocal emotional expressions in the most common child psychopathological conditions. With regard to externalising behaviour (i.e., ADHD and CD), ERP studies show enhanced early components to anger, reflecting enhanced sensory processing, followed by reductions in later components to anger, reflecting reduced cognitive-evaluative processing. With regard to internalising behaviour, research supports models of increased processing of threat stimuli, especially at later, more elaborate and effortful stages. Finally, in autism spectrum disorders, abnormalities have been observed at early visual-perceptual stages of processing. An affective neuroscience framework for understanding child psychopathology can be valuable in elucidating underlying mechanisms and informing preventive intervention.

  7. Retinotopic mapping of visual event-related potentials.

    Science.gov (United States)

    Capilla, Almudena; Melcón, María; Kessel, Dominique; Calderón, Rosbén; Pazo-Álvarez, Paula; Carretié, Luis

    2016-07-01

    Visual stimulation is frequently employed in electroencephalographic (EEG) research. However, despite its widespread use, no studies have thoroughly evaluated how the morphology of the visual event-related potentials (ERPs) varies according to the spatial location of stimuli. Hence, the purpose of this study was to perform a detailed retinotopic mapping of visual ERPs. We recorded EEG activity while participants were visually stimulated with 60 pattern-reversing checkerboards placed at different polar angles and eccentricities. Our results show five pattern-reversal ERP components. C1 and C2 components inverted polarity between the upper and lower hemifields. P1 and N1 showed higher amplitudes and shorter latencies to stimuli located in the contralateral lower quadrant. In contrast, P2 amplitude was enhanced and its latency was reduced by stimuli presented in the periphery of the upper hemifield. The retinotopic maps presented here could serve as a guide for selecting optimal visuo-spatial locations in future ERP studies.
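    The stimulus layout this abstract describes, checkerboards placed at combinations of polar angle and eccentricity, can be sketched as below. The abstract reports 60 positions; the particular split into 12 polar angles and 5 eccentricities, and the eccentricity values themselves, are assumptions for illustration only.

```python
import numpy as np

def retinotopic_grid(n_angles=12, eccentricities=(2.0, 5.0, 9.0, 14.0, 20.0)):
    """Cartesian screen positions (in degrees of visual angle) for stimuli
    at every combination of polar angle and eccentricity."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    return [(ecc * np.cos(a), ecc * np.sin(a))
            for ecc in eccentricities for a in angles]

positions = retinotopic_grid()  # 12 polar angles x 5 eccentricities = 60
```

    The sign of the y coordinate distinguishes upper from lower visual field, which is the split that matters for the C1/C2 polarity inversion reported above.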

  8. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    Science.gov (United States)

    Petrini, Karin; Crabbe, Frances; Sheridan, Carol; Pollick, Frank E

    2011-04-29

    In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  9. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    Directory of Open Access Journals (Sweden)

    Karin Petrini

    In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  10. Increased psychophysiological parameters of attention in non-psychotic individuals with auditory verbal hallucinations

    DEFF Research Database (Denmark)

    van Lutterveld, Remko; Oranje, Bob; Abramovic, Lucija;

    2010-01-01

    OBJECTIVE: Schizophrenia is associated with aberrant event-related potentials (ERPs) such as reductions in P300, processing negativity and mismatch negativity amplitudes. These deficits may be related to the propensity of schizophrenia patients to experience auditory verbal hallucinations (AVH...

  12. Probabilistic delay differential equation modeling of event-related potentials.

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger

    2016-08-01

    "Dynamic causal models" (DCMs) are a promising approach in the analysis of functional neuroimaging data due to their biophysical interpretability and their consolidation of functional-segregative and functional-integrative propositions. In this theoretical note we are concerned with the DCM framework for electroencephalographically recorded event-related potentials (ERP-DCM). Intuitively, ERP-DCM combines deterministic dynamical neural mass models with dipole-based EEG forward models to describe the event-related scalp potential time-series over the entire electrode space. Since its inception, ERP-DCM has been successfully employed to capture the neural underpinnings of a wide range of neurocognitive phenomena. However, in spite of its empirical popularity, the technical literature on ERP-DCM remains somewhat patchy. A number of previous communications have detailed certain aspects of the approach, but no unified and coherent documentation exists. With this technical note, we aim to close this gap and to increase the technical accessibility of ERP-DCM. Specifically, this note makes the following novel contributions: firstly, we provide a unified and coherent review of the mathematical machinery of the latent and forward models constituting ERP-DCM by formulating the approach as a probabilistic latent delay differential equation model. Secondly, we emphasize the probabilistic nature of the model and its variational Bayesian inversion scheme by explicitly deriving the variational free energy function in terms of both the likelihood expectation and variance parameters. Thirdly, we detail and validate the estimation of the model with a special focus on the explicit form of the variational free energy function and introduce a conventional nonlinear optimization scheme for its maximization. Finally, we identify and discuss a number of computational issues which may be addressed in the future development of the approach.
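
As a toy illustration of the kind of latent delay differential equation such models build on (this is not the ERP-DCM neural mass model itself, just an assumed minimal linear example), a fixed-step Euler integration scheme might look like:

```python
import numpy as np

def integrate_dde(k=2.0, tau=0.02, dt=0.001, t_end=0.5, x0=1.0):
    """Fixed-step Euler integration of dx/dt = -k * x(t - tau),
    with constant history x(t <= 0) = x0."""
    n = int(round(t_end / dt))
    delay = int(round(tau / dt))
    x = np.full(n + 1, x0)
    for i in range(n):
        # Use the stored history to evaluate the delayed state.
        x_delayed = x[i - delay] if i >= delay else x0
        x[i + 1] = x[i] + dt * (-k * x_delayed)
    return x

trajectory = integrate_dde()
```

For small k*tau this delayed system decays smoothly toward zero; the probabilistic ERP-DCM formulation wraps dynamics of this general type in a likelihood model and a variational Bayesian inversion scheme.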

  13. Event-related potentials dissociate perceptual from response-related age effects in visual search

    DEFF Research Database (Denmark)

    Wiegand, Iris; Müller, Hermann J.; Finke, Kathrin

    2013-01-01

    Attentional decline plays a major role in cognitive changes with aging. However, which specific aspects of attention contribute to this decline is as yet little understood. To identify the contributions of various potential sources of age decrements in visual search, we combined response time...... responses in older participants were associated with age differences in all analyzed event-related potentials from perception to response, indicating that behavioral slowing originates from multiple stages within the information-processing stream. Furthermore, analyses of carry-over effects from one trial...... to the next revealed repetition facilitation of the target-defining dimension and of the motor response—originating from preattentive perceptual and motor execution stages, respectively—to be independent of age. Critically, we demonstrated specific age deficits on intermediate processing stages when...

  14. Selective attention to orientation and closure: An event-related potential study

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Event-related potentials (ERPs) were recorded while subjects attended selectively to stimuli in one visual field and responded to targets containing a designated feature value (orientation or closure). Attention to spatial location elicited enlarged P1 and N1 at posterior electrodes contralateral to the stimulus location, whereas selection of orientation or closure elicited a selection negativity (SN) and a late negative component (LNC). Selection of spatial location occurred prior to selection of orientation or closure. SN was elicited only by stimuli in the attended visual field, suggesting that the selection of orientation and closure is contingent on the prior selection of location. Moreover, the onset latency of SN was earlier for closure selection than for orientation selection, indicating that the processing of closure occurred earlier than the processing of orientation. The results are consistent with early-selection theories of attention and provide psychophysiological evidence for the topology theory of visual perception.

  15. Modulations of 'late' event-related brain potentials in humans by dynamic audiovisual speech stimuli.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; Douiri, Abdel; de Bode, Stella; Gillon Dowens, Margaret; Baudonnière, Pierre-Marie

    2004-11-30

    Lipreading reliably improves speech perception during face-to-face conversation. Within the range of good dubbing, however, adults tolerate some audiovisual (AV) discrepancies, and lipreading can then give rise to confusion. We used event-related brain potentials (ERPs) to study the perceptual strategies governing the intermodal processing of dynamic and bimodal speech stimuli, either congruently dubbed or not. Electrophysiological analyses revealed that non-coherent audiovisual dubbings modulated the amplitude of an endogenous ERP component, the N300, which we compared to an 'N400-like effect' reflecting the difficulty of integrating these conflicting pieces of information. This result adds further support for the existence of a cerebral system underlying 'integrative processes' lato sensu. Further studies should take advantage of this 'N400-like effect' with AV speech stimuli to open new perspectives in the domain of psycholinguistics.

  16. Differing Event-Related Patterns of Gamma-Band Power in Brain Waves of Fast- and Slow-Reacting Subjects

    Science.gov (United States)

    1994-05-01

    Wilhelm Wundt proposed that there are two types of subjects in simple RT experiments: fast-reacting subjects, who respond before they fully...quickly as possible to auditory stimuli. This result appears to confirm long-standing speculations of Wundt that fast- and slow-reacting subjects...accord with the hypothesis of Wundt and others that slower ("sensorial") responders wait to fully perceive a stimulus and then react to their perception

  17. Switching of auditory attention in "cocktail-party" listening: ERP evidence of cueing effects in younger and older adults.

    Science.gov (United States)

    Getzmann, Stephan; Jasny, Julian; Falkenstein, Michael

    2017-02-01

    Verbal communication in a "cocktail-party situation" is a major challenge for the auditory system. In particular, changes in target speaker usually result in a decline in speech perception. Here, we investigated whether speech cues indicating a subsequent change in target speaker reduce the costs of switching in younger and older adults. We employed event-related potential (ERP) measures and a speech perception task, in which sequences of short words were simultaneously presented by four speakers. Changes in target speaker were either unpredictable or semantically cued by a word within the target stream. Cued changes resulted in a smaller performance decline than uncued changes in both age groups. The ERP analysis revealed shorter latencies in the change-related N400 and late positive complex (LPC) after cued changes, suggesting an acceleration in context updating and attention switching. Thus, both younger and older listeners used semantic cues to prepare for changes in the speaker setting.

  18. Event-related potentials study on cross-modal discrimination of Chinese characters

    Institute of Scientific and Technical Information of China (English)

    罗跃嘉; 魏景汉

    1999-01-01

    Event-related potentials (ERPs) were measured in 15 normal young subjects (18–22 years old) using the "cross-modal and delayed response" paradigm, which is able to improve inattention purity. The stimuli consisted of written and spoken single Chinese characters. The presentation probability of standard stimuli was 82.5% and that of deviant stimuli was 17.5%. The attention components were obtained by subtracting the ERPs of the inattention condition from those of the attention condition. The N1 scalp distribution demonstrated a cross-modal difference. This result is in contrast to studies with non-verbal as well as with English verbal stimuli, and probably reflects features of the brain mechanisms of Chinese language processing. The processing location of attention varied with verbal/non-verbal stimuli, auditory/visual modalities and standard/deviant stimuli, and thus shows plasticity. The early attention effects occurred before the exogenous components, thus providing supporting evidence for early selection.

  19. Event-related brain potentials to sound omissions differ in musicians and non-musicians.

    Science.gov (United States)

    Rüsseler, J; Altenmüller, E; Nager, W; Kohlmetz, C; Münte, T F

    2001-07-27

    The mismatch negativity (MMN) component of the auditory event-related brain potential reflects the automatic detection of sound change. MMN to occasionally omitted sounds in a tone series can be used to investigate the time course of temporal integration in the acoustic system. We used MMN to study differences in temporal integration between musicians and non-musicians. In experiment 1, occasionally omitted 'sounds' in an otherwise regular tone series evoked a reliable MMN at stimulus onset asynchronies (SOAs) of 100, 120, 180 and 220 ms in musicians. In non-musicians, MMN was smaller or absent at the 180 and 220 ms SOAs, respectively. In experiment 2, deviance of a tone was induced by presenting tones at a shorter SOA (100 or 130 ms) compared to the standard stimulus (150 ms). Musicians showed a reliable MMN for both deviant SOAs, whereas non-musicians showed an MMN only for tones presented 50 ms prior to a standard tone (SOA 100 ms). These results indicate that the temporal window of integration is longer and more precise in musicians than in musical laypersons, and that long-term training is reflected in changes in neural activity.

  20. Multisensory integration and attention in autism spectrum disorder: evidence from event-related potentials.

    Directory of Open Access Journals (Sweden)

    Maurice J C M Magnée

    Full Text Available Successful integration of simultaneously perceived sensory signals is crucial for social behavior. Recent findings indicate that this multisensory integration (MSI) can be modulated by attention. Theories of Autism Spectrum Disorders (ASDs) suggest that MSI is affected in this population, while it remains unclear to what extent this is related to impairments in attentional capacity. In the present study, event-related potentials (ERPs) following emotionally congruent and incongruent face-voice pairs were measured in 23 high-functioning adult ASD individuals and 24 age- and IQ-matched controls. MSI was studied while the attention of the participants was manipulated. ERPs were measured at typical auditory and visual processing peaks, namely, P2 and N170. While controls showed MSI during divided attention and easy selective attention tasks, individuals with ASD showed MSI during easy selective attention tasks only. It was concluded that individuals with ASD are able to process multisensory emotional stimuli, but that this processing is differently modulated by attention mechanisms in these participants, especially those associated with divided attention. This atypical interaction between attention and MSI is also relevant to treatment strategies, with training of multisensory attentional control possibly being more beneficial than conventional sensory integration therapy.

  1. Toward brain-computer interface based wheelchair control utilizing tactually-evoked event-related potentials

    Science.gov (United States)

    2014-01-01

    Background People with severe disabilities, e.g. due to neurodegenerative disease, depend on technology that allows for accurate wheelchair control. For those who cannot operate a wheelchair with a joystick, brain-computer interfaces (BCI) may offer a valuable option. Technology depending on visual or auditory input may not be feasible as these modalities are dedicated to processing of environmental stimuli (e.g. recognition of obstacles, ambient noise). Herein we thus validated the feasibility of a BCI based on tactually-evoked event-related potentials (ERP) for wheelchair control. Furthermore, we investigated use of a dynamic stopping method to improve speed of the tactile BCI system. Methods Positions of four tactile stimulators represented navigation directions (left thigh: move left; right thigh: move right; abdomen: move forward; lower neck: move backward) and N = 15 participants delivered navigation commands by focusing their attention on the desired tactile stimulus in an oddball-paradigm. Results Participants navigated a virtual wheelchair through a building and eleven participants successfully completed the task of reaching 4 checkpoints in the building. The virtual wheelchair was equipped with simulated shared-control sensors (collision avoidance), yet these sensors were rarely needed. Conclusion We conclude that most participants achieved tactile ERP-BCI control sufficient to reliably operate a wheelchair and dynamic stopping was of high value for tactile ERP classification. Finally, this paper discusses feasibility of tactile ERPs for BCI based wheelchair control. PMID:24428900
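
The dynamic stopping idea described in this record can be sketched as accumulating per-direction classifier scores round by round and stopping early once the leading candidate is sufficiently ahead of the runner-up. The score values, threshold, and direction layout below are illustrative assumptions, not the study's classifier:

```python
import numpy as np

def dynamic_stop(score_rounds, threshold=3.0):
    """Accumulate per-direction classifier scores round by round and stop
    early once the best candidate leads the runner-up by `threshold`.
    Returns (chosen_direction_index, rounds_used)."""
    totals = np.zeros(score_rounds.shape[1])
    for r, scores in enumerate(score_rounds, start=1):
        totals += scores
        best, second = np.sort(totals)[::-1][:2]
        if best - second >= threshold:
            return int(np.argmax(totals)), r
    return int(np.argmax(totals)), score_rounds.shape[0]

# Hypothetical per-round scores for four directions (left, right, forward, back);
# the 'forward' column (index 2) consistently scores highest.
rounds = np.array([
    [0.2, 0.1, 1.6, 0.0],
    [0.0, 0.3, 1.7, 0.1],
    [0.1, 0.0, 1.5, 0.2],
    [0.2, 0.1, 1.6, 0.1],
])
choice, used = dynamic_stop(rounds)
```

With clearly separable scores the loop terminates before all rounds are consumed, which is the speed gain dynamic stopping offers over a fixed number of stimulation rounds.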

  2. Cognitive processing in non-communicative patients: what can event-related potentials tell us?

    Directory of Open Access Journals (Sweden)

    Zulay Rosario Lugo

    2016-11-01

    Full Text Available Event-related potentials (ERPs) have been proposed to improve the differential diagnosis of non-responsive patients. We investigated the potential of the P300 as a reliable marker of conscious processing in patients with locked-in syndrome (LIS). Eleven chronic LIS patients and ten healthy subjects (HS) listened to a complex-tone auditory oddball paradigm, first in a passive condition (listen to the sounds) and then in an active condition (counting the deviant tones). Seven out of nine HS displayed a P300 waveform in the passive condition and all in the active condition. HS showed statistically significant changes in peak and area amplitude between conditions. Three out of seven LIS patients showed the P3 waveform in the passive condition and 5 of 7 in the active condition. No changes in peak amplitude and only a significant difference at one electrode in area amplitude were observed in this group between conditions. We conclude that, in spite of keeping full consciousness and intact or nearly intact cortical functions, compared to HS, LIS patients present less reliable results when tested with ERPs, specifically in the passive condition. We thus strongly recommend applying ERP paradigms in an active condition when evaluating consciousness in non-responsive patients.
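
The core computation behind an oddball ERP, averaging signal epochs time-locked to the deviant events, can be sketched on synthetic data. The 'P300' bump, sampling rate, and event timing below are simulated assumptions for illustration, not patient data:

```python
import numpy as np

def erp_average(eeg, events, sfreq, tmin=-0.1, tmax=0.6):
    """Average signal epochs time-locked to event sample indices (a basic ERP)."""
    i0, i1 = int(tmin * sfreq), int(tmax * sfreq)
    epochs = [eeg[e + i0 : e + i1] for e in events
              if e + i0 >= 0 and e + i1 <= len(eeg)]
    return np.mean(epochs, axis=0)

sfreq = 250
rng = np.random.default_rng(1)
eeg = rng.normal(0.0, 1.0, 60 * sfreq)  # 60 s of noisy single-channel 'EEG'
# Deviant events every 2 s; add a synthetic positive bump peaking 300 ms later.
events = np.arange(2 * sfreq, 58 * sfreq, 2 * sfreq)
bump_t = np.arange(int(0.6 * sfreq)) / sfreq
bump = np.exp(-0.5 * ((bump_t - 0.3) / 0.05) ** 2)
for e in events:
    eeg[e : e + bump.size] += 2.0 * bump

erp = erp_average(eeg, events, sfreq)
peak_time = np.argmax(erp) / sfreq - 0.1  # seconds relative to event onset
```

Averaging across epochs suppresses the background noise by roughly the square root of the number of events, so the simulated positivity around 300 ms emerges clearly in the average.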

  3. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. 

  4. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and which are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  5. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen eKaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.
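
The frontal–sensory coherence measure mentioned above can be illustrated with `scipy.signal.coherence` on two synthetic channels that share a common 10 Hz rhythm; the channel labels, sampling rate, and noise levels are assumptions for the sketch:

```python
import numpy as np
from scipy.signal import coherence

sfreq = 200
t = np.arange(0, 30 * sfreq) / sfreq
rng = np.random.default_rng(2)
shared = np.sin(2 * np.pi * 10 * t)             # shared 10 Hz rhythm
frontal = shared + rng.normal(0, 1.0, t.size)   # noisy 'frontal' channel
sensory = shared + rng.normal(0, 1.0, t.size)   # noisy 'sensory' channel

# Magnitude-squared coherence via Welch's method.
f, cxy = coherence(frontal, sensory, fs=sfreq, nperseg=512)
peak_freq = f[np.argmax(cxy)]
```

Coherence is bounded between 0 and 1 per frequency bin; here it peaks near 10 Hz, where the two channels share phase-locked activity, and stays low elsewhere.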

  6. Establishment and application of experimental method of a new generation of event related potential cognitive function assessment and monitoring system---clinical application of auditory brainstem response

    Institute of Scientific and Technical Information of China (English)

    任清涛; 刘情情; 张勤峰; 宗文斌; 陈兴时; 张群峰; 茅顺明; 杨武庆; 刘群

    2014-01-01

    Objective To investigate the characteristics of the auditory brainstem response (ABR) in patients with depressive disorder and schizophrenia. Methods ABR measurements were carried out in 43 patients with depressive disorder (the depression group), 43 patients with schizophrenia (the schizophrenia group), and 50 normal controls (the control group) using an NTS-2000 ERP instrument with click stimuli. Results The differences between the three groups in the absolute latency of wave V (Pz area) and the absolute amplitudes of wave III (Pz area) and wave V (Pz area) were significant (all P<0.01). Compared with the control group and the depression group, the schizophrenia group showed a significantly delayed absolute latency of wave V (Pz area) (all P<0.01), as well as significantly decreased absolute amplitudes of wave III (Pz area) and wave V (Pz area). Conclusion ABR is of value for clinically differentiating depressive disorder from schizophrenia.
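
The three-group latency comparison reported in this record can be illustrated with a one-way ANOVA on simulated wave V latencies; the group means, standard deviations, and units below are hypothetical, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
# Hypothetical wave V latencies in ms (means/SDs assumed for illustration).
controls = rng.normal(5.6, 0.2, 50)
depression = rng.normal(5.7, 0.2, 43)
schizophrenia = rng.normal(6.1, 0.2, 43)

stat, p = f_oneway(controls, depression, schizophrenia)
```

With a between-group difference of several tenths of a millisecond against a small within-group spread, the F statistic is large and the omnibus test is highly significant, which would then motivate pairwise post-hoc comparisons as in the record.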

  7. Auditory processing in autism spectrum disorder

    DEFF Research Database (Denmark)

    Vlaskamp, Chantal; Oranje, Bob; Madsen, Gitte Falcher

    2017-01-01

    Children with autism spectrum disorders (ASD) often show changes in (automatic) auditory processing. Electrophysiology provides a method to study auditory processing, by investigating event-related potentials such as mismatch negativity (MMN) and P3a-amplitude. However, findings on MMN in autism...... a hyper-responsivity at the attentional level. In addition, as similar MMN deficits are found in schizophrenia, these MMN results may explain some of the frequently reported increased risk of children with ASD to develop schizophrenia later in life. Autism Res 2017. © 2017 International Society for Autism...

  8. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  9. Agency attribution: event-related potentials and outcome monitoring.

    Science.gov (United States)

    Bednark, Jeffery G; Franz, Elizabeth A

    2014-04-01

    Knowledge about the effects of our actions is an underlying feature of voluntary behavior. Given the importance of identifying the outcomes of our actions, it has been proposed that the sensory outcomes of self-made actions are inherently different from those of externally caused outcomes. Thus, the outcomes of self-made actions are likely to be more motivationally significant for an agent. We used event-related potentials to investigate the relationship between the perceived motivational significance of an outcome and the attribution of agency in the presence of others. In our experiment, we assessed agency attribution in the presence of another agent by varying the degree of contiguity between participants' self-made actions and the sensory outcome. Specifically, we assessed the feedback correct-related positivity (fCRP) and the novelty P3 measures of an outcome's motivational significance and unexpectedness, respectively. Results revealed that both the fCRP and participants' agency attributions were significantly influenced by action-outcome contiguity. However, when action-outcome contiguity was ambiguous, novelty P3 amplitude was a reliable indicator of agency attribution. Prior agency attributions were also found to influence attribution in trials with ambiguous and low action-outcome contiguity. Participants' use of multiple cues to determine agency is consistent with the cue integration theory of agency. In addition to these novel findings, this study supports growing evidence suggesting that reinforcement processes play a significant role in the sense of agency.

  10. What event-related potentials (ERPs) bring to social neuroscience?

    Science.gov (United States)

    Ibanez, Agustin; Melloni, Margherita; Huepe, David; Helgiu, Elena; Rivera-Rei, Alvaro; Canales-Johnson, Andrés; Baker, Phil; Moya, Alvaro

    2012-01-01

    Social cognitive neuroscience is a recent interdisciplinary field that studies the neural basis of the social mind. Event-related potentials (ERPs) provide precise information about the time dynamics of the brain. In this study, we assess the role of ERPs in cognitive neuroscience, particularly in the emerging area of social neuroscience. First, we briefly introduce the technique of ERPs. Subsequently, we describe several ERP components (P1, N1, N170, vertex positive potential, early posterior negativity, N2, P2, P3, N400, N400-like, late positive complex, late positive potential, P600, error-related negativity, feedback error-related negativity, contingent negative variation, readiness potential, lateralized readiness potential, motor potential, re-afferent potential) that assess perceptual, cognitive, and motor processing. Then, we introduce ERP studies in social neuroscience on contextual effects on speech, emotional processing, empathy, and decision making. We provide an outline of ERPs' relevance and applications in the field of social cognitive neuroscience. We also introduce important methodological issues that extend classical ERP research, such as intracranial recordings (iERP) and source location in dense arrays and simultaneous functional magnetic resonance imaging recordings. Finally, this review discusses possible caveats of ERP assessment regarding neuroanatomical areas, biophysical origin, and methodological problems, and their relevance to explanatory pluralism and multilevel, contextual, and situated approaches to social neuroscience.

  11. Event-Related Oscillations in Alcoholism Research: A Review.

    Science.gov (United States)

    Pandey, Ashwini K; Kamarajan, Chella; Rangaswamy, Madhavi; Porjesz, Bernice

    2012-01-12

    Alcohol dependence is characterized as a multi-factorial disorder caused by a complex interaction between genetic and environmental liabilities across development. A variety of neurocognitive deficits/dysfunctions involving impairments in different brain regions and/or neural circuitries have been associated with chronic alcoholism, as well as with a predisposition to develop alcoholism. Several neurobiological and neurobehavioral approaches and methods of analyses have been used to understand the nature of these neurocognitive impairments/deficits in alcoholism. In the present review, we have examined relatively novel methods of analyses of the brain signals that are collectively referred to as event-related oscillations (EROs) and show promise to further our understanding of human brain dynamics while performing various tasks. These new measures of dynamic brain processes have exquisite temporal resolution and allow the study of neural networks underlying responses to sensory and cognitive events, thus providing a closer link to the physiology underlying them. Here, we have reviewed EROs in the study of alcoholism, their usefulness in understanding dynamical brain functions/dysfunctions associated with alcoholism as well as their utility as effective endophenotypes to identify and understand genes associated with both brain oscillations and alcoholism.

  12. Event-related potential alterations in fragile X syndrome

    Directory of Open Access Journals (Sweden)

    Inga Sophia eKnoth

    2012-09-01

    Full Text Available Fragile X Syndrome (FXS) is the most common form of X-linked intellectual disability, associated with a wide range of cognitive and behavioural impairments. FXS is caused by a trinucleotide repeat expansion in the FMR1 gene located on the X-chromosome. The expansion prevents the expression of the fragile X mental retardation protein (FMRP), which results in altered structural and functional development of the synapse, including a loss of synaptic plasticity. This review aims to unveil the contribution of electrophysiological signal studies to the understanding of information processing impairments in FXS patients. We discuss relevant event-related potential (ERP) studies conducted with full mutation FXS patients and clinical populations sharing symptoms with FXS in a developmental perspective. Specific deviances found in FXS ERP profiles are described. Alterations are reported in N1, P2, Mismatch Negativity (MMN), N2 and P3 components in FXS compared to healthy controls. In particular, deviances in N1 and P2 amplitude seem to be specific to FXS. The presented results suggest a cascade of impaired information processes that are in line with symptoms and anatomical findings in FXS.

  13. Event-related potential alterations in fragile X syndrome.

    Science.gov (United States)

    Knoth, Inga S; Lippé, Sarah

    2012-01-01

    Fragile X Syndrome (FXS) is the most common form of X-linked intellectual disability (ID), associated with a wide range of cognitive and behavioral impairments. FXS is caused by a trinucleotide repeat expansion in the FMR1 gene located on the X-chromosome. The expansion prevents the expression of the "fragile X mental retardation protein (FMRP)", which results in altered structural and functional development of the synapse, including a loss of synaptic plasticity. This review aims to unveil the contribution of electrophysiological signal studies to the understanding of information processing impairments in FXS patients. We discuss relevant event-related potential (ERP) studies conducted with full mutation FXS patients and clinical populations sharing symptoms with FXS in a developmental perspective. Specific deviances found in FXS ERP profiles are described. Alterations are reported in N1, P2, Mismatch Negativity (MMN), N2, and P3 components in FXS compared to healthy controls. In particular, deviances in N1 and P2 amplitude seem to be specific to FXS. The presented results suggest a cascade of impaired information processes that are in line with symptoms and anatomical findings in FXS.

  14. Event-related potentials associated with Attention Network Test.

    Science.gov (United States)

    Neuhaus, Andres H; Urbanek, Carsten; Opgen-Rhein, Carolin; Hahn, Eric; Ta, Thi Minh Tam; Koehler, Simone; Gross, Melanie; Dettling, Michael

    2010-05-01

    Selective visual attention is thought to comprise distinct neuronal networks that serve different attentional functions. The Attention Network Test (ANT) has been introduced to allow for assessment of alerting, orienting, and response inhibition. Information on associated measures of neural processing during the ANT is still scarce. We topographically analyzed top-down ANT effects on visual event-related potential morphology in 44 healthy participants. Significant reaction time effects were obtained for all attention networks. Posterior cue-locked target N1 amplitude was significantly increased during both alerting and orienting. P3 amplitude was significantly modulated at frontal and parietal leads as a function of inhibition. Our data suggest that attentional mechanisms of alerting and orienting are employed simultaneously at early stages of the visual processing stream to amplify perceptual discrimination and load onto the same ERP component. Fronto-parietal modulations of P3 amplitude seem to mirror both response inhibition and visual target detection and may be interesting markers for further studies.

  15. Visual motion event related potentials distinguish aging and Alzheimer's disease.

    Science.gov (United States)

    Fernandez, Roberto; Monacelli, Anthony; Duffy, Charles J

    2013-01-01

    Aging and Alzheimer's disease (AD) disrupt visuospatial processing and visual motion evoked potentials in a manner linked to navigational deficits. Our goal is to determine if aging and AD have distinct effects on visual cortical motion processing for navigation. We recorded visual motion event related potentials (ERPs) in young (YNC) and older normal controls (ONC), and early AD patients (EADs) who viewed rapidly changing optic flow stimuli that simulate naturalistic changes in heading direction, like those that occur when following a path of self-movement through the environment. After a random series of optic flow stimuli, a vertical motion stimulus was presented to verify sustained visual attention by demanding a rapid push-button response. Optic flow evokes robust ERPs that are delayed in aging and diminished in AD. The interspersed vertical motion stimuli yielded shorter N200 latencies in EADs, matching those in ONCs, but the EADs' N200 amplitudes remained small. Aging and AD have distinct effects on visual sensory processing: aging delays evoked response, whereas AD diminishes responsiveness.

  16. Event-related Potential Signatures of Relational Memory

    Science.gov (United States)

    Hannula, Deborah E.; Federmeier, Kara D.; Cohen, Neal J.

    2009-01-01

    Various lines of evidence suggest that memory for the relations among arbitrarily paired items acquired prior to testing can influence early processing of a probe stimulus. The event-related potential experiment reported here was designed to explore how early in time memory for a previously established face-scene relationship begins to influence processing of faces, under sequential presentation conditions in which a preview of the scene can promote expectancies about the to-be-presented face. Prior to the current work, the earliest component documented to be sensitive to memory for the relations among arbitrarily paired items was the late positive complex (LPC), but here relational memory effects were evident as early as 270-350 msec after face onset. The latency of these relational memory effects suggests that they may be the precursor to similar effects observed in eye movement behavior. As expected, LPC amplitude was also affected by memory for face-scene relationships, and N400 amplitude reflected some combination of memory for items and memory for the relations among items. PMID:17069477

  17. Auditory stream segregation in children with Asperger syndrome

    OpenAIRE

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E. (Eira); Nieminen-von Wendt, T.; Kujala, T. (Tiia)

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically deve...

  18. Tracking the implicit self using event-related potentials.

    Science.gov (United States)

    Egenolf, Yvonne; Stein, Maria; Koenig, Thomas; Grosse Holtforth, Martin; Dierks, Thomas; Caspar, Franz

    2013-12-01

    Negative biases in implicit self-evaluation are thought to be detrimental to subjective well-being and have been linked to various psychological disorders, including depression. An understanding of the neural processes underlying implicit self-evaluation in healthy subjects could provide a basis for the investigation of negative biases in depressed patients, the development of differential psychotherapeutic interventions, and the estimation of relapse risk in remitted patients. We thus studied the brain processes linked to implicit self-evaluation in 25 healthy subjects using event-related potential (ERP) recording during a self-relevant Implicit Association Test (sIAT). Consistent with a positive implicit self-evaluation in healthy subjects, they responded significantly faster to the congruent (self-positive mapping) than to the incongruent sIAT condition (self-negative mapping). Our main finding was a topographical ERP difference in a time window between 600 and 700 ms, whereas no significant differences between congruent and incongruent conditions were observed in earlier time windows. This suggests that biases in implicit self-evaluation are reflected only indirectly, in the additional recruitment of control processes needed to override the positive implicit self-evaluation of healthy subjects in the incongruent sIAT condition. Brain activations linked to these control processes can thus serve as an indirect measure for estimating biases in implicit self-evaluation. The sIAT paradigm, combined with ERP, could therefore permit the tracking of the neural processes underlying implicit self-evaluation in depressed patients during psychotherapy.

  19. Nicotine and attention: event-related potential investigations in nonsmokers.

    Science.gov (United States)

    Knott, Verner; Shah, Dhrasti; Fisher, Derek; Millar, Anne; Prise, Stephanie; Scott, Terri Lynn; Thompson, Mackenzie

    2009-01-01

    Research into the effects of nicotine and smoking on cognition has largely confirmed smokers' subjective reports of smoking's effects on mental function, showing smoking abstinence to disrupt and smoking/nicotine to restore cognitive functioning. Evidence of performance improvements in nonsmokers has provided partial support for the absolute effects of nicotine on cognitive processes, which are independent of withdrawal relief, but the mechanisms underlying its pro-cognitive properties still remain elusive. The attentional facilitation frequently reported with smoking/nicotine may be indirectly related to its diffuse arousal-enhancing actions, as evidenced by electroencephalographic (EEG) fast-frequency power increments, or it may reflect nicotine's direct modulating effects on specific neural processes governing stimulus encoding, selection and rejection. Event-related potential (ERP) components extracted during the performance of cognitive tasks have proven to be sensitive to early pre-attentive and later attention-dependent processes that are not otherwise reflected in behavioral probes. To date, the majority of ERP studies have been conducted with smokers using passive non-task paradigms or relatively non-demanding "oddball" tasks. This paper will emphasize our recent ERP investigations with acute nicotine polacrilex (6 mg) administered to nonsmokers, and with a battery of ERP and behavioral performance paradigms focusing on intra- and inter-modal selective attention and distraction processes. These ERP findings of nicotine-augmented early attentional processing add support to the contention that nicotine may be used by smokers as a "pharmacological tool" for tuning cognitive functions relating to the automatic and controlled aspects of sensory input detection and selection.

  20. Iconic Meaning in Music: An Event-Related Potential Study.

    Science.gov (United States)

    Cai, Liman; Huang, Ping; Luo, Qiuling; Huang, Hong; Mo, Lei

    2015-01-01

    Although there has been extensive research on the processing of the emotional meaning of music, little is known about other aspects of listeners' experience of music. The present study investigated the neural correlates of the iconic meaning of music. Event-related potentials (ERP) were recorded while a group of 20 music majors and a group of 20 non-music majors performed a lexical decision task in the context of implicit musical iconic meaning priming. ERP analysis revealed a significant N400 effect of congruency in time window 260-510 ms following the onset of the target word only in the group of music majors. Time-course analysis using 50 ms windows indicated significant N400 effects both within the time window 410-460 ms and 460-510 ms for music majors, whereas only a partial N400 effect during time window 410-460 ms was observed for non-music majors. There was also a trend for the N400 effects in the music major group to be stronger than those in the non-major group in the sub-windows of 310-360 ms and 410-460 ms. Especially in the sub-window of 410-460 ms, the topographical map of the difference waveforms between congruent and incongruent conditions revealed different N400 distribution between groups; the effect was concentrated in bilateral frontal areas for music majors, but in central-parietal areas for non-music majors. These results imply probable neural mechanism differences underlying automatic iconic meaning priming of music. Our findings suggest that processing of the iconic meaning of music can be accomplished automatically and that musical training may facilitate the understanding of the iconic meaning of music.

  1. Variation in Event-Related Potentials by State Transitions.

    Science.gov (United States)

    Higashi, Hiroshi; Minami, Tetsuto; Nakauchi, Shigeki

    2017-01-01

    The probability of an event's occurrence affects event-related potentials (ERPs) on electroencephalograms. The relation between probability and potentials has been discussed by using a quantity called surprise that represents the self-information that humans receive from the event. Previous studies have estimated surprise based on the probability distribution in a stationary state. Our hypothesis is that state transitions also play an important role in the estimation of surprise. In this study, we compare the effects of surprise on the ERPs based on two models that generate an event sequence: a model of a stationary state and a model with state transitions. To compare these effects, we generate the event sequences with Markov chains to avoid a situation in which the state transition probability converges to the stationary probability as event observations accumulate. Our trial-by-trial model-based analysis showed that the stationary probability better explains the P3b component and the state transition probability better explains the P3a component. The effect on P3a suggests that the internal model, which is constantly and automatically generated by the human brain to estimate the probability distribution of the events, approximates the model with state transitions because Bayesian surprise, which represents the degree of updating of the internal model, is highly reflected in P3a. The global effect reflected in P3b, however, may not be related to the internal model because P3b depends on the stationary probability distribution. The results suggest that an internal model can represent state transitions and the global effect is generated by a different mechanism than the one for forming the internal model.
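
    The stationary-versus-transition contrast the authors draw can be made concrete with a toy computation. The sketch below (plain Python; the two-state chain and its probabilities are invented for illustration, not taken from the study) compares Shannon surprise under a fixed distribution with surprise conditioned on the previous event:

```python
import math

def stationary_surprise(seq, probs):
    """Shannon surprise -log2 p(e) under a fixed (stationary) distribution."""
    return [-math.log2(probs[e]) for e in seq]

def transition_surprise(seq, trans, first_probs):
    """Surprise -log2 p(e | previous event) under a Markov transition model."""
    out = [-math.log2(first_probs[seq[0]])]
    for prev, cur in zip(seq, seq[1:]):
        out.append(-math.log2(trans[prev][cur]))
    return out

# Hypothetical two-state chain: 'B' is rare overall, and after a 'B' the
# chain almost always returns to 'A', so a B->B repeat is more surprising
# under the transition model than its fixed stationary surprise suggests.
probs = {'A': 0.8, 'B': 0.2}
trans = {'A': {'A': 0.8, 'B': 0.2}, 'B': {'A': 0.9, 'B': 0.1}}
seq = ['A', 'A', 'B', 'B', 'A']

s_stat = stationary_surprise(seq, probs)
s_trans = transition_surprise(seq, trans, probs)
```

    Under the paper's account, a quantity like `s_trans` would track P3a (internal-model updating) while `s_stat` would track P3b.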

  2. Iconic Meaning in Music: An Event-Related Potential Study.

    Directory of Open Access Journals (Sweden)

    Liman Cai

    Full Text Available Although there has been extensive research on the processing of the emotional meaning of music, little is known about other aspects of listeners' experience of music. The present study investigated the neural correlates of the iconic meaning of music. Event-related potentials (ERPs) were recorded while a group of 20 music majors and a group of 20 non-music majors performed a lexical decision task in the context of implicit musical iconic meaning priming. ERP analysis revealed a significant N400 effect of congruency in time window 260-510 ms following the onset of the target word only in the group of music majors. Time-course analysis using 50 ms windows indicated significant N400 effects both within the time window 410-460 ms and 460-510 ms for music majors, whereas only a partial N400 effect during time window 410-460 ms was observed for non-music majors. There was also a trend for the N400 effects in the music major group to be stronger than those in the non-major group in the sub-windows of 310-360 ms and 410-460 ms. Especially in the sub-window of 410-460 ms, the topographical map of the difference waveforms between congruent and incongruent conditions revealed different N400 distribution between groups; the effect was concentrated in bilateral frontal areas for music majors, but in central-parietal areas for non-music majors. These results imply probable neural mechanism differences underlying automatic iconic meaning priming of music. Our findings suggest that processing of the iconic meaning of music can be accomplished automatically and that musical training may facilitate the understanding of the iconic meaning of music.

  3. Attention within Auditory Word Perception.

    Science.gov (United States)

    1985-11-01


  4. Cognitive mechanisms associated with auditory sensory gating

    OpenAIRE

    Jones, L. A.; Hills, P.J.; Dick, K.M.; Jones, S. P.; Bright, P

    2015-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants addit...

  5. Auditory imagery and the poor-pitch singer.

    Science.gov (United States)

    Pfordresher, Peter Q; Halpern, Andrea R

    2013-08-01

    The vocal imitation of pitch by singing requires one to plan laryngeal movements on the basis of anticipated target pitch events. This process may rely on auditory imagery, which has been shown to activate motor planning areas. As such, we hypothesized that poor-pitch singing, although not typically associated with deficient pitch perception, may be associated with deficient auditory imagery. Participants vocally imitated simple pitch sequences by singing, discriminated pitch pairs on the basis of pitch height, and completed an auditory imagery self-report questionnaire (the Bucknell Auditory Imagery Scale). The percentage of trials participants sung in tune correlated significantly with self-reports of vividness for auditory imagery, although not with the ability to control auditory imagery. Pitch discrimination was not predicted by auditory imagery scores. The results thus support a link between auditory imagery and vocal imitation.

  6. Differential cognitive responses to guqin music and piano music in Chinese subjects: an event-related potential study.

    Science.gov (United States)

    Zhu, Wei-Na; Zhang, Jun-Jun; Liu, Hai-Wei; Ding, Xiao-Jun; Ma, Yuan-Ye; Zhou, Chang-Le

    2008-02-01

    The aim was to compare the cognitive effects of guqin (the oldest Chinese instrument) music and piano music. Behavioral and event-related potential (ERP) data in a standard two-stimulus auditory oddball task were recorded and analyzed. This study replicated the previous results of the culture-familiar music effect in Chinese subjects: a greater P300 amplitude in frontal areas in a culture-familiar music environment. At the same time, a difference between guqin music and piano music was observed in N1 and the late positive complex (LPC: including P300 and P500): a relatively higher participation of right anterior-temporal areas in Chinese subjects. The results suggest that the special features of ERP responses to guqin music are the outcome of the Chinese tonal language environment, given the similarity between the guqin's tones and Mandarin lexical tones.
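
    Several records here use the standard two-stimulus oddball paradigm. As a rough illustration of how such a stimulus sequence is typically constructed, here is a sketch in plain Python; the deviant probability, minimum standard-to-standard gap, and seed are arbitrary assumptions, not parameters from any of the studies above:

```python
import random

def oddball_sequence(n, p_deviant=0.2, min_gap=2, seed=1):
    """Generate a two-stimulus oddball sequence ('S' = standard, 'D' = deviant)
    with a minimum number of standards between deviants, as is common in
    auditory oddball designs. All parameter values are illustrative."""
    rng = random.Random(seed)
    seq = []
    since_deviant = min_gap  # allow a deviant once the gap is satisfied
    for _ in range(n):
        if since_deviant >= min_gap and rng.random() < p_deviant:
            seq.append('D')
            since_deviant = 0
        else:
            seq.append('S')
            since_deviant += 1
    return seq

seq = oddball_sequence(200)
```

    The gap constraint guarantees that no deviant immediately follows another, which keeps the deviant genuinely rare and unpredictable at each eligible position.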

  7. Event-related potentials as a measure of sleep disturbance: A tutorial review

    Directory of Open Access Journals (Sweden)

    Kenneth Campbell

    2010-01-01

    Full Text Available This article reviews event-related potentials (ERPs), the minute responses of the human brain that are elicited by external auditory stimuli, and how ERPs can be used to measure sleep disturbance. ERPs consist of a series of negative- and positive-going components. A negative component peaking at about 100 ms, N1, is thought to reflect the outcome of a transient detector system, activated by change in the transient energy in an acoustic stimulus. Its output and thus the amplitude of N1 increases as the intensity level of the stimulus is increased and when the rate of presentation is slowed. When the output reaches a certain critical level, operations of the central executive are interrupted and attention is switched to the auditory channel. This switching of attention is thought to be indexed by a later positivity, P3a, peaking between 250 and 300 ms. In order to sleep, consciousness for all but the most relevant of stimuli must be prevented. Thus, during sleep onset and definitive non-rapid eye movement (NREM) sleep, the amplitude of N1 diminishes to near-baseline level. The amplitude of P2, peaking from 180 to 200 ms, is however larger in NREM sleep than in wakefulness. P2 is thought to reflect an inhibitory process protecting sleep from irrelevant disturbance. As stimulus input becomes increasingly obtrusive, the amplitude of P2 also increases. With increasing obtrusiveness, particularly when stimuli are presented slowly, a later large negativity, peaking at about 350 ms, N350, becomes apparent. N350 is unique to sleep, its amplitude also increasing as the stimulus becomes more obtrusive. Many authors postulate that when the N350 reaches a critical amplitude, a very large amplitude N550, a component of the K-Complex, is elicited. The K-Complex can only be elicited during NREM sleep. The P2, N350 and N550 processes are thus conceived as sleep protective mechanisms, activated sequentially as the risk for disturbance increases. During REM sleep
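
    The component amplitudes discussed in this review (N1, P2, N350, N550) are conventionally measured as mean voltage within a latency window of the averaged waveform. A minimal sketch of that averaging-and-windowing step, using synthetic two-trial data and an assumed 250 Hz sampling rate (both invented for illustration):

```python
# Toy epochs: two trials of 50 samples at 250 Hz (4 ms/sample), each with a
# negativity spanning roughly 80-100 ms post-stimulus.
srate = 250
epochs = [
    [0.0] * 20 + [-2.0] * 5 + [0.0] * 25,   # trial 1
    [0.0] * 20 + [-4.0] * 5 + [0.0] * 25,   # trial 2: same latency, larger
]

def grand_average(trials):
    """Average time-locked trials sample by sample to form the ERP."""
    n = len(trials)
    return [sum(vals) / n for vals in zip(*trials)]

def mean_amplitude(erp, t_start_ms, t_end_ms, srate):
    """Mean voltage of the averaged waveform within a latency window."""
    i0 = int(t_start_ms / 1000 * srate)
    i1 = int(t_end_ms / 1000 * srate)
    window = erp[i0:i1]
    return sum(window) / len(window)

erp = grand_average(epochs)
n1 = mean_amplitude(erp, 80, 100, srate)  # N1-like window, illustrative
```

    Real pipelines add baseline correction, artifact rejection, and many more trials, but the component measures reported in these abstracts reduce to this window-mean operation.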

  8. Separating acoustic deviance from novelty during the first year of life: A review of event related potential evidence

    Directory of Open Access Journals (Sweden)

    Elena V Kushnerenko

    2013-09-01

    Full Text Available Orienting to salient events in the environment is a first step in the development of attention in young infants. Electrophysiological studies have indicated that in newborns and young infants, sounds with widely distributed spectral energy, such as noise and various environmental sounds, as well as sounds widely deviating from their context, elicit an event-related potential (ERP) similar to the adult P3a response. We discuss how the maturation of event-related potentials parallels the process of the development of passive auditory attention during the first year of life. Behavioural studies have indicated that the neonatal orientation to high-energy stimuli gradually changes to attending to genuine novelty and other significant events by approximately 9 months of age. In accordance with these changes, in newborns, the ERP response to large acoustic deviance is dramatically larger than that to small and moderate deviations. This ERP difference, however, rapidly decreases within the first months of life, and the differentiation of the ERP response to genuine novelty from that to spectrally rich but repeatedly presented sounds commences during the same period. The relative decrease of the response amplitudes elicited by high-energy stimuli may reflect the development of an inhibitory brain network suppressing the processing of uninformative stimuli. Based on data obtained from healthy full-term and preterm infants as well as from infants at risk for various developmental problems, we suggest that the electrophysiological indices of the processing of acoustic and contextual deviance may be indicative of the functioning of auditory attention, a crucial prerequisite of learning and language development.

  9. Representations of physical plausibility revealed by event-related potentials.

    Science.gov (United States)

    Roser, Matthew E; Fugelsang, Jonathan A; Handy, Todd C; Dunbar, Kevin N; Gazzaniga, Michael S

    2009-08-05

    Maintaining an accurate mental representation of the current environment is crucial to detecting change in that environment and ensuring behavioral coherence. Past experience with interactions between objects, such as collisions, has been shown to influence the perception of object interactions. To assess whether mental representations of object interactions derived from experience influence the maintenance of a mental model of the current stimulus environment, we presented physically plausible and implausible collision events while recording brain electrical activity. The parietal P300 response to 'oddball' events was found to be modulated by the physical plausibility of the stimuli, suggesting that past experience of object interactions can influence working memory processes involved in monitoring ongoing changes to the environment.

  10. Brain Network Activation Analysis Utilizing Spatiotemporal Features for Event Related Potentials Classification

    Directory of Open Access Journals (Sweden)

    Yaki Stern

    2016-12-01

    Full Text Available The purpose of this study was to introduce an improved tool for automated classification of event-related potentials (ERPs) using spatiotemporally parcellated events incorporated into a functional brain network activation (BNA) analysis. The auditory oddball ERP paradigm was selected to demonstrate and evaluate the improved tool. Methods: The ERPs of each subject were decomposed into major dynamic spatiotemporal events. Then, a set of spatiotemporal events representing the group was generated by aligning and clustering the spatiotemporal events of all individual subjects. The temporal relationship between the common group events generated a network, which is the spatiotemporal reference BNA model. Scores were derived by comparing each subject’s spatiotemporal events to the reference BNA model and were then entered into a support vector machine classifier to classify subjects into relevant subgroups. The reliability of the BNA scores (test-retest repeatability using intraclass correlation) and their utility as a classification tool were examined in the context of Target-Novel classification. Results: BNA intraclass correlation values of repeatability ranged between 0.51 and 0.82 for the known ERP components N100, P200 and P300. Classification accuracy was high when the trained data were validated on the same subjects for different visits (AUCs 0.93 and 0.95). The classification accuracy remained high for a test group recorded at a different clinical center with a different recording system (AUCs 0.81 and 0.85 for two visits). Conclusion: The improved spatiotemporal BNA analysis demonstrates high classification accuracy. The BNA analysis method holds promise as a tool for diagnosis, follow-up and drug development associated with different neurological conditions.
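
    The test-retest repeatability reported here is an intraclass correlation. For readers unfamiliar with the statistic, a minimal one-way random-effects ICC(1,1) sketch in plain Python (the score matrix is invented; the BNA scoring itself is not reproduced, and the paper does not specify which ICC form was used):

```python
def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for a list of per-subject score lists,
    one entry per visit. High values mean visits agree within subjects."""
    n = len(scores)       # subjects
    k = len(scores[0])    # visits per subject
    grand = sum(sum(row) for row in scores) / (n * k)
    subj_means = [sum(row) / k for row in scores]
    ssb = k * sum((m - grand) ** 2 for m in subj_means)   # between-subject SS
    ssw = sum((x - m) ** 2                                 # within-subject SS
              for row, m in zip(scores, subj_means) for x in row)
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical scores for 4 subjects over 2 visits
icc = icc_oneway([[4.1, 4.3], [5.0, 4.8], [3.2, 3.4], [6.1, 5.9]])
```

    With within-subject differences small relative to between-subject spread, the ICC approaches 1; values in the 0.5-0.8 range, as reported for the BNA scores, indicate moderate-to-good repeatability.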

  11. An Event Related Field Study of Rapid Grammatical Plasticity in Adult Second-Language Learners.

    Science.gov (United States)

    Bastarrika, Ainhoa; Davidson, Douglas J

    2017-01-01

    The present study used magnetoencephalography (MEG) to investigate how Spanish adult learners of Basque respond to morphosyntactic violations after a short period of training on a small fragment of Basque grammar. Participants (n = 17) were exposed to violation and control phrases in three phases (pretest, training, generalization-test). In each phase participants listened to short Basque phrases and they judged whether they were correct or incorrect. During the pre-test and generalization-test, participants did not receive any feedback. During the training blocks feedback was provided after each response. We also ran two Spanish control blocks before and after training. We analyzed the event-related magnetic-field (ERF) recorded in response to a critical word during all three phases. In the pretest, classification was below chance and we found no electrophysiological differences between violation and control stimuli. Then participants were explicitly taught a Basque grammar rule. From the first training block participants were able to correctly classify control and violation stimuli and an evoked violation response was present. Although the timing of the electrophysiological responses matched participants' L1 effect, the effect size was smaller for L2 and the topographical distribution differed from the L1. While the L1 effect was bilaterally distributed on the auditory sensors, the L2 effect was present at right frontal sensors. During training blocks two and three, the violation-control effect size increased and the topography evolved to a more L1-like pattern. Moreover, this pattern was maintained in the generalization test. We conclude that rapid changes in neuronal responses can be observed in adult learners of a simple morphosyntactic rule, and that native-like responses can be achieved at least in small fragments of second language.

  12. Event-related potential correlates of paranormal ideation and unusual experiences.

    Science.gov (United States)

    Sumich, Alex; Kumari, Veena; Gordon, Evian; Tunstall, Nigel; Brammer, Michael

    2008-01-01

    Separate dimensions of schizotypy have been differentially associated with electrophysiological measures of brain function, and further shown to be modified by sex/gender. We investigated event-related potential (ERP) correlates of two subdimensions of positive schizotypy, paranormal ideation (PI) and unusual experiences (UEs). Seventy-two individuals with no psychiatric diagnosis (men=36) completed self-report measures of UE and PI and performed an auditory oddball task. Average scores for N100, N200 and P300 amplitudes were calculated for left and right anterior, central and posterior electrode sites. Multiple linear regression was used to examine the relationships between the measures of schizotypy and ERPs across the entire sample, as well as separately according to sex. PI was inversely associated with P300 amplitude at left-central sites across the entire sample, and at right-anterior electrodes in women only. Right-anterior P300 and right-posterior N100 amplitudes were negatively associated with UE in women only. Across the entire sample, UE was negatively associated with left-central N100 amplitude, and positively associated with left-anterior N200 amplitude. These results provide support from electrophysiological measures for the fractionation of the positive dimension of schizotypy into subdimensions of PI and UE, and lend indirect support to dimensional or quasidimensional conceptions of psychosis. More specifically, they suggest that PI may be associated with alteration in contextual updating processes, and that UE may reflect altered sensory/early-attention (N100) mechanisms. The sex differences observed are consistent with those previously observed in individuals with schizophrenia.

  13. An Event Related Field Study of Rapid Grammatical Plasticity in Adult Second-Language Learners

    Science.gov (United States)

    Bastarrika, Ainhoa; Davidson, Douglas J.

    2017-01-01

    The present study used magnetoencephalography (MEG) to investigate how Spanish adult learners of Basque respond to morphosyntactic violations after a short period of training on a small fragment of Basque grammar. Participants (n = 17) were exposed to violation and control phrases in three phases (pretest, training, generalization-test). In each phase participants listened to short Basque phrases and they judged whether they were correct or incorrect. During the pre-test and generalization-test, participants did not receive any feedback. During the training blocks feedback was provided after each response. We also ran two Spanish control blocks before and after training. We analyzed the event-related magnetic-field (ERF) recorded in response to a critical word during all three phases. In the pretest, classification was below chance and we found no electrophysiological differences between violation and control stimuli. Then participants were explicitly taught a Basque grammar rule. From the first training block participants were able to correctly classify control and violation stimuli and an evoked violation response was present. Although the timing of the electrophysiological responses matched participants' L1 effect, the effect size was smaller for L2 and the topographical distribution differed from the L1. While the L1 effect was bilaterally distributed on the auditory sensors, the L2 effect was present at right frontal sensors. During training blocks two and three, the violation-control effect size increased and the topography evolved to a more L1-like pattern. Moreover, this pattern was maintained in the generalization test. We conclude that rapid changes in neuronal responses can be observed in adult learners of a simple morphosyntactic rule, and that native-like responses can be achieved at least in small fragments of second language. PMID:28174530

  14. Differential Effects of Active Attention and Age on Event-related Potentials to Visual and Olfactory Stimuli

    Science.gov (United States)

    Morgan, Charlie D.; Murphy, Claire

    2011-01-01

    Normal aging impairs olfactory functioning both centrally and peripherally. The P3 peak of the event-related potential (ERP), evoked by active response to a target stimulus, is considered a reflection of central cognitive processing. It can also be evoked in a passive task to both auditory and visual stimuli. Our goal was to investigate whether age influences amplitude and latency of the ERP differentially in active and passive tasks to olfactory stimuli. Olfactory and visual event-related potentials were elicited with a single-stimulus paradigm in separate active and passive task response conditions. Participants included 30 healthy individuals from three age groups: young, middle-aged, and older adults. Results indicated that P3 ERP latency increased with age in both sensory modalities. P3 latencies for active versus passive tasks were similar across age groups for visual ERPs, but in the olfactory modality, older adults demonstrated significantly longer latencies in the passive task compared to the active task. Future directions should include research on specific clinical populations utilizing active versus passive task conditions. PMID:20688110

  15. Assessing the spatiotemporal evolution of neuronal activation with single-trial event-related potentials and functional MRI.

    Science.gov (United States)

    Eichele, Tom; Specht, Karsten; Moosmann, Matthias; Jongsma, Marijtje L A; Quiroga, Rodrigo Quian; Nordby, Helge; Hugdahl, Kenneth

    2005-12-06

    The brain acts as an integrated information processing system, which methods in cognitive neuroscience have so far depicted in a fragmented fashion. Here, we propose a simple and robust way to integrate functional MRI (fMRI) with single trial event-related potentials (ERP) to provide a more complete spatiotemporal characterization of evoked responses in the human brain. The idea behind the approach is to find brain regions whose fMRI responses can be predicted by paradigm-induced amplitude modulations of simultaneously acquired single trial ERPs. The method was used to study a variant of a two-stimulus auditory target detection (odd-ball) paradigm that manipulated predictability through alternations of stimulus sequences with random or regular target-to-target intervals. In addition to electrophysiologic and hemodynamic evoked responses to auditory targets per se, single-trial modulations were expressed during the latencies of the P2 (170-ms), N2 (200-ms), and P3 (320-ms) components and predicted spatially separated fMRI activation patterns. These spatiotemporal matches, i.e., the prediction of hemodynamic activation by time-variant information from single trial ERPs, permit inferences about regional responses using fMRI with the temporal resolution provided by electrophysiology.

  16. Neurodynamics for auditory stream segregation: tracking sounds in the mustached bat's natural environment.

    Science.gov (United States)

    Kanwal, Jagmeet S; Medvedev, Andrei V; Micheyl, Christophe

    2003-08-01

    During navigation and the search phase of foraging, mustached bats emit approximately 25 ms long echolocation pulses (at 10-40 Hz) that contain multiple harmonics of a constant frequency (CF) component followed by a short (3 ms) downward frequency modulation. In the context of auditory stream segregation, therefore, bats may either perceive a coherent pulse-echo sequence (PEPE...), or segregated pulse and echo streams (P-P-P... and E-E-E...). To identify the neural mechanisms for stream segregation in bats, we developed a simple yet realistic neural network model with seven layers and 420 nodes. Our model required recurrent and lateral inhibition to enable output nodes in the network to 'latch-on' to a single tone (corresponding to a CF component in either the pulse or echo), i.e., exhibit differential suppression by the alternating two tones presented at a high rate (> 10 Hz). To test the applicability of our model to echolocation, we obtained neurophysiological data from the primary auditory cortex of awake mustached bats. Event-related potentials reliably reproduced the latching behaviour observed at output nodes in the network. Pulse as well as nontarget (clutter) echo CFs facilitated this latching. Individual single unit responses were erratic, but when summed over several recording sites, they also exhibited reliable latching behaviour even at 40 Hz. On the basis of these findings, we propose that a neural correlate of auditory stream segregation is present within localized synaptic activity in the mustached bat's auditory cortex and this mechanism may enhance the perception of echolocation sounds in the natural environment.
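    The 'latch-on' behaviour described above can be illustrated with a toy two-unit winner-take-all network: lateral inhibition lets the unit tracking one tone of the alternating pair suppress the other, while self-sustaining recurrence keeps it active between presentations of its own tone. This is a minimal sketch, not the study's seven-layer, 420-node model; the weights, time constant, and clipping nonlinearity are illustrative assumptions.

```python
def simulate_latching(steps=200, dt=0.1, w_self=1.5, w_inh=2.0, tau=1.0):
    """Two units, each driven by one of two alternating tones.
    Self-sustaining recurrence keeps the leading unit active between
    its tone presentations; lateral inhibition suppresses the other
    unit, so the network 'latches on' to a single tone of the pair."""
    a, b = 0.1, 0.0                      # unit A gets a small head start
    history = []
    for t in range(steps):
        tone_a = 1.0 if (t // 10) % 2 == 0 else 0.0  # tone 1 on even epochs
        tone_b = 1.0 - tone_a                        # tone 2 on odd epochs
        # net drive: own tone + recurrence - lateral inhibition, clipped to [0, 1]
        drive_a = min(1.0, max(0.0, tone_a + w_self * a - w_inh * b))
        drive_b = min(1.0, max(0.0, tone_b + w_self * b - w_inh * a))
        a += dt * (-a + drive_a) / tau   # leaky integration toward drive
        b += dt * (-b + drive_b) / tau
        history.append((a, b))
    return history

hist = simulate_latching()
```

With these parameters, unit A climbs to a sustained high activation while unit B never receives positive net drive: the differential suppression of the alternating tones that the output nodes of the study's network exhibited.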

  17. Auditory excitation patterns : the significance of the pulsation threshold method for the measurement of auditory nonlinearity

    NARCIS (Netherlands)

    H. Verschuure (Hans)

    1978-01-01

    The auditory system is the total of organs that translates an acoustical signal into the perception of a sound. An acoustic signal is a vibration. It is described by physical parameters. The perception of sound is the awareness of a signal being present and the attribution of certain qual

  18. Event-related alpha suppression in response to facial motion.

    Science.gov (United States)

    Girges, Christine; Wright, Michael J; Spencer, Janine V; O'Brien, Justin M D

    2014-01-01

    While biological motion refers to both face and body movements, little is known about the visual perception of facial motion. We therefore examined alpha wave suppression, as a reduction in alpha power is thought to reflect visual activity, in addition to attentional reorienting and memory processes. Nineteen neurologically healthy adults were tested on their ability to discriminate between successive facial motion captures. These animations exhibited both rigid and non-rigid facial motion, as well as speech expressions. The structural and surface appearance of these facial animations did not differ; thus, participants' decisions were based solely on differences in facial movements. Upright, orientation-inverted and luminance-inverted facial stimuli were compared. At occipital and parieto-occipital regions, upright facial motion evoked a transient increase in alpha, which was then followed by a significant reduction. This finding is discussed in terms of neural efficiency, gating mechanisms and neural synchronization. Moreover, there was no difference in the amount of alpha suppression evoked by each facial stimulus at occipital regions, suggesting early visual processing remains unaffected by manipulation paradigms. However, upright facial motion evoked greater suppression at parieto-occipital sites, and did so in the shortest latency. Increased activity within this region may reflect higher attentional reorienting to natural facial motion but also involvement of areas associated with the visual control of body effectors.

  19. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  20. Music perception in cochlear implant users: an event-related potential study.

    Science.gov (United States)

    Koelsch, Stefan; Wittfoth, Matthias; Wolf, Angelika; Müller, Joachim; Hahne, Anja

    2004-04-01

    To compare the processing of music-syntactic irregularities and physical oddballs between cochlear implant (CI) users and matched controls. Musical chord sequences were presented, some of which contained functionally irregular chords, or a chord with an instrumental timbre that deviated from the standard timbre. In both controls and CI users, functionally irregular chords elicited early (around 200 ms) and late (around 500 ms) negative electric brain responses (early right anterior negativity, ERAN, and N5). Amplitudes of effects depended on the degree of music-syntactic irregularity in both groups; effects elicited in CI users were distinctly smaller than in controls. Physically deviant chords elicited a timbre-mismatch negativity (MMN) and a P3 in both groups, again with smaller amplitudes in CI users. ERAN and N5 (as well as timbre-MMN and P3) can be elicited in CI users. Although amplitudes of effects were considerably smaller in the CI group, the presence of MMN and ERAN indicates that neural mechanisms of both physical and music-syntactic irregularity-detection were active in this group.

  1. Binaural technology for e.g. rendering auditory virtual environments

    DEFF Research Database (Denmark)

    Hammershøi, Dorte

    2008-01-01

    , helped mediate the understanding that if the transfer functions could be mastered, then important dimensions of the auditory percept could also be controlled. He early understood the potential of using the HRTFs and numerical sound transmission analysis programs for rendering auditory virtual...... environments. Jens Blauert participated in many European cooperation projects exploring this field (and others), among others the SCATIS project addressing the auditory-tactile dimensions in the absence of visual information....

  2. Perspectives on the design of musical auditory interfaces

    OpenAIRE

    Leplatre, G.; Brewster, S.A.

    1998-01-01

    This paper addresses the issue of music as a communication medium in auditory human-computer interfaces. So far, psychoacoustics has had a great influence on the development of auditory interfaces, directly and through music cognition. We suggest that a better understanding of the processes involved in the perception of actual musical excerpts should allow musical auditory interface designers to exploit the communicative potential of music. In this respect, we argue that the real advantage of...

  3. Priming emotional facial expressions as evidenced by event-related brain potentials.

    Science.gov (United States)

    Werheid, Katja; Alpay, Gamze; Jentzsch, Ines; Sommer, Werner

    2005-02-01

    As human faces are important social signals in everyday life, processing of facial affect has recently entered into the focus of neuroscientific research. In the present study, priming of faces showing the same emotional expression was measured with the help of event-related potentials (ERPs) in order to investigate the temporal characteristics of processing facial expressions. Participants classified portraits of unfamiliar persons according to their emotional expression (happy or angry). The portraits were either preceded by the face of a different person expressing the same affect (primed) or the opposite affect (unprimed). ERPs revealed both early and late priming effects, independent of stimulus valence. The early priming effect was characterized by attenuated frontal ERP amplitudes between 100 and 200 ms in response to primed targets. Its dipole sources were localised in the inferior occipitotemporal cortex, possibly related to the detection of expression-specific facial configurations, and in the insular cortex, considered to be involved in affective processes. The late priming effect, an enhancement of the late positive potential (LPP) following unprimed targets, may evidence greater relevance attributed to a change of emotional expressions. Our results (i) point to the view that a change of affect-related facial configuration can be detected very early during face perception and (ii) support previous findings on the amplitude of the late positive potential being rather related to arousal than to the specific valence of an emotional signal.

  4. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  6. Bimodal audio-visual training enhances auditory adaptation process.

    Science.gov (United States)

    Kawase, Tetsuaki; Sakamoto, Shuichi; Hori, Yoko; Maki, Atsuko; Suzuki, Yôiti; Kobayashi, Toshimitsu

    2009-09-23

    Effects of auditory training with bimodal audio-visual stimuli on monomodal aural speech intelligibility were examined in individuals with normal hearing using highly degraded noise-vocoded speech sound. Visual cue simultaneously presented with auditory stimuli during the training session significantly improved auditory speech intelligibility not only for words used in the training session, but also untrained words, when compared with the auditory training using only auditory stimuli. Visual information is generally considered to complement insufficient speech information conveyed by the auditory system during audio-visual speech perception. However, the present results showed another beneficial effect of audio-visual training that the visual cue enhances the auditory adaptation process to the degraded new speech sound, which is different from those given during bimodal training.

  7. Research on EEG for virtual driving between visual and auditory perception [A comparative EEG study of simulated driving behavior under visual and auditory perception]

    Institute of Scientific and Technical Information of China (English)

    梁静坤; 徐桂芝; 孙辉

    2011-01-01

    Objective To evaluate the effect of visual, auditory, and combined audio-visual stimuli on electroencephalography (EEG) recognition rates. Methods In a virtual traffic environment, stimuli conveying traffic information were designed based on visual, auditory, and audio-visual fusion cues. When one of the three stimuli appeared, the subjects started and braked the vehicle through motor imagery of the right and left hands. The EEG signals were recorded, and imaginary-motion-related features from electrodes C3 and C4 were extracted and classified. Results The best recognition rates under visual, auditory, and fusion perception were 100%, 100%, and 83.3%, respectively, while the average rates were 68.8%, 82.2%, and 76.9%, respectively. Conclusion The recognition rates are affected by the visual, auditory, and fusion stimuli, and marked individual differences exist.
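    The "imaginary-motion-related features" extracted from C3 and C4 in studies like this are commonly band-power values (e.g., mu and beta rhythms). A minimal pure-Python sketch of band-power extraction via a plain DFT follows; the sampling rate, band edges, and test signal are illustrative assumptions, not the study's parameters.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power in the [f_lo, f_hi] Hz band from a plain DFT of the signal:
    a toy stand-in for the mu/beta band-power features used in
    motor-imagery classification at electrodes such as C3 and C4."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n                   # frequency of DFT bin k
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]  # 10 Hz "mu" rhythm
mu = band_power(sig, fs, 8, 12)     # mu band captures the 10 Hz component
beta = band_power(sig, fs, 18, 26)  # beta band is empty for this signal
```

A classifier would then compare such features across electrodes (e.g., C3 versus C4 power asymmetry) to distinguish left- from right-hand imagery.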

  8. Auditory Dysfunction and Its Communicative Impact in the Classroom.

    Science.gov (United States)

    Friedrich, Brad W.

    1982-01-01

    The origins and nature of auditory dysfunction in school age children and the role of the audiologist in the evaluation of the learning disabled child are reviewed. Specific structures and mechanisms responsible for the reception and perception of auditory signals are specified. (Author/SEW)

  9. Visual and Auditory Input in Second-Language Speech Processing

    Science.gov (United States)

    Hardison, Debra M.

    2010-01-01

    The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…

  10. Effects of acute nicotine administration on cognitive event-related potentials in tacrine-treated and non-treated patients with Alzheimer's disease.

    Science.gov (United States)

    Knott, V; Mohr, E; Mahoney, C; Engeland, C; Ilivitsky, V

    2002-01-01

    Earlier studies of cognitive event-related brain potentials (ERPs) reporting diminished amplitudes and delayed latencies of the P300 potential in dementia of the Alzheimer type (DAT), together with independent findings of the P300- and performance-enhancing properties of nicotine in normal adults, stimulated this study to explore the single-dose effects of nicotine on auditory and visual P300s in DAT. Thirteen patients, 6 currently receiving treatment with the cholinesterase inhibitor tacrine (tetrahydroaminoacridine; THA) and the remaining being medication free, were administered 2 mg of nicotine polacrilex under double-blind, randomized, placebo-controlled conditions. Prior to nicotine administration, THA-treated patients exhibited shorter auditory P300 latencies than non-treated patients. Acutely administered nicotine failed to alter auditory P300, but increased the amplitudes of visual P300s in both DAT patient groups. Neither THA treatment nor single-dose nicotine altered behavioural performance in the visual and auditory task paradigms. The results are discussed in relation to nicotinic cholinergic, attentional and cognitive processes in DAT.

  11. Euclidean distance and Kolmogorov-Smirnov analyses of multi-day auditory event-related potentials: a longitudinal stability study

    Science.gov (United States)

    Durato, M. V.; Albano, A. M.; Rapp, P. E.; Nawang, S. A.

    2015-06-01

    The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs: treating the ERP waveform as a time series and then applying Euclidean distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. These analyses show that, in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable, as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test.
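    The two measures named above are straightforward to compute once an averaged ERP is treated as a time series. A minimal sketch in pure Python, with two simulated "sessions" standing in for real ERP waveforms (the waveforms themselves are illustrative assumptions):

```python
import math

def euclidean_distance(x, y):
    """Point-by-point Euclidean distance between two equal-length waveforms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of the two amplitude distributions."""
    xs, ys = sorted(x), sorted(y)
    nx, ny = len(xs), len(ys)
    d = 0.0
    for v in sorted(set(xs + ys)):
        cdf_x = sum(1 for a in xs if a <= v) / nx
        cdf_y = sum(1 for b in ys if b <= v) / ny
        d = max(d, abs(cdf_x - cdf_y))
    return d

# Two simulated sessions: identical ERP shape, slightly attenuated amplitude
session1 = [math.sin(2 * math.pi * t / 50) for t in range(100)]
session2 = [0.95 * s for s in session1]

dist = euclidean_distance(session1, session2)  # small for stable waveforms
ks = ks_statistic(session1, session2)          # in [0, 1]; 0 = identical CDFs
```

Low values of both measures across session pairs would indicate the kind of longitudinal stability the study reports.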

  12. Contribution of harmonicity and location to auditory object formation in free field: Evidence from event-related brain potentials

    Science.gov (United States)

    McDonald, Kelly L.; Alain, Claude

    2005-09-01

    The contribution of location and harmonicity cues in sound segregation was investigated using behavioral reports and source waveforms derived from scalp-recorded evoked potentials. Participants were presented with sounds composed of multiple harmonics in a free-field environment. The third harmonic was either tuned or mistuned and could be presented from the same location as, or a different location from, the remaining harmonics. Presenting the third harmonic at a different location than the remaining harmonics increased the likelihood of hearing the tuned or slightly (i.e., 2%) mistuned harmonic as a separate object. Partials mistuned by 16% of their original value "popped out" of the complex and were paralleled by an object-related negativity (ORN) that was superimposed on the N1 and P2 components. For the 2% mistuned stimuli, the ORN was present only when the mistuned harmonic was presented at a different location than the remaining harmonics. Presenting the tuned harmonic at a different location also yielded changes in neural activity between 150 and 250 ms after sound onset. The behavioral and electrophysiological results indicate that listeners can segregate sounds based on harmonicity or location alone. The results also indicate that a conjunction of harmonicity and location cues contributes to sound segregation primarily when harmonicity is ambiguous.
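    Mistuned-harmonic stimuli of this kind are simple to synthesize: a complex of harmonics whose third partial is shifted by a fraction of its nominal frequency. A minimal sketch; the fundamental, duration, and sampling rate are illustrative assumptions, not the study's stimulus parameters.

```python
import math

def complex_tone(f0=220.0, n_harmonics=6, mistuned=3, shift=0.16,
                 dur=0.15, fs=8000):
    """Complex tone whose `mistuned`-th harmonic is shifted upward by
    `shift` (fraction of its nominal frequency, e.g. 0.16 = 16%)."""
    n = int(dur * fs)
    samples = [0.0] * n
    for h in range(1, n_harmonics + 1):
        f = h * f0
        if h == mistuned:
            f *= (1.0 + shift)           # e.g. 3rd harmonic mistuned by 16%
        for i in range(n):
            samples[i] += math.sin(2 * math.pi * f * i / fs)
    peak = max(abs(s) for s in samples)
    return [s / peak for s in samples]   # normalize to +/-1

tone = complex_tone()                    # 16% mistuning: the "pop out" case
slight = complex_tone(shift=0.02)        # 2% mistuning: the ambiguous case
```

Setting `shift=0.0` yields the fully tuned control complex used for comparison.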

  13. An Auditory Go/No-Go Study of Event-Related Potentials in Children with Fetal Alcohol Spectrum Disorders

    DEFF Research Database (Denmark)

    Steinmann, Tobias P.; Andrew, Colin M.; Thomsen, Carsten E.;

    2011-01-01

    and not to press the button when the No Go stimulus was heard. Task performance accuracy did not differ between the two groups; however, differences were observed in the ERP components P2, N2, and P3. The P2 amplitude was larger for Go trials in both groups. The FAS/PFAS group showed slower N2 response to Go...

  14. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals.

    Science.gov (United States)

    Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W

    2014-02-01

    The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development.

  15. Decoding of single-trial auditory mismatch responses for online perceptual monitoring and neurofeedback

    Directory of Open Access Journals (Sweden)

    Alex eBrandmeyer

    2013-12-01

    Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). In the first part of the paper, we illustrate how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.
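    The core idea of binary single-trial classification can be illustrated with a toy decoder: simulated standard and deviant trials, where deviants carry an extra negative deflection standing in for the MMN, classified by a nearest-class-mean rule. The simulated waveform, noise level, and classifier choice are all illustrative assumptions, not the study's actual pipeline.

```python
import math
import random

random.seed(0)

def make_trial(deviant, n_samples=40, noise=0.5):
    """Simulated single-trial ERP: deviant trials carry an extra
    negative deflection (a toy stand-in for the MMN) mid-epoch."""
    trial = []
    for t in range(n_samples):
        mmn = -1.0 * math.exp(-((t - 20) ** 2) / 20.0) if deviant else 0.0
        trial.append(mmn + random.gauss(0.0, noise))
    return trial

def train_nearest_mean(trials, labels):
    """Per-class mean waveforms: the 'template' for each class."""
    means = {}
    for lab in set(labels):
        cls = [t for t, l in zip(trials, labels) if l == lab]
        means[lab] = [sum(col) / len(cls) for col in zip(*cls)]
    return means

def classify(trial, means):
    """Assign the class whose mean waveform is closest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda lab: dist(trial, means[lab]))

# Train on 100 trials (label 1 = deviant), test on 40 held-out trials
train = [make_trial(i % 2 == 1) for i in range(100)]
labels = [i % 2 for i in range(100)]
means = train_nearest_mean(train, labels)
test_trials = [make_trial(i % 2 == 1) for i in range(40)]
correct = sum(classify(t, means) == (i % 2) for i, t in enumerate(test_trials))
accuracy = correct / len(test_trials)
```

In an online setting, the continuous distance-to-template difference (rather than the hard label) would serve as the single-trial index or neurofeedback signal.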

  16. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  17. Event-related potentials to task-irrelevant changes in facial expressions

    Directory of Open Access Journals (Sweden)

    Astikainen Piia

    2009-07-01

    Background Numerous previous experiments have used the oddball paradigm to study change detection. This paradigm is applied here to study change detection of facial expressions in a context which demands abstraction of the emotional expression-related facial features among other changing facial features. Methods Event-related potentials (ERPs) were recorded in adult humans engaged in a demanding auditory task. In an oddball paradigm, repeated pictures of faces with a neutral expression ('standard', p = .9) were rarely replaced by pictures with a fearful ('fearful deviant', p = .05) or happy ('happy deviant', p = .05) expression. Importantly, facial identities changed from picture to picture. Thus, change detection required abstraction of facial expression from changes in several low-level visual features. Results ERPs to both types of deviants differed from those to standards. At occipital electrode sites, ERPs to deviants were more negative than ERPs to standards at 150–180 ms and 280–320 ms post-stimulus. A positive shift to deviants at fronto-central electrode sites in the analysis window of 130–170 ms post-stimulus was also found. Waveform analysis computed as point-wise comparisons between the amplitudes elicited by standards and deviants revealed that the occipital negativity emerged earlier to happy deviants than to fearful deviants (after 140 ms versus 160 ms post-stimulus, respectively). In turn, the anterior positivity was earlier to fearful deviants than to happy deviants (110 ms versus 120 ms post-stimulus, respectively). Conclusion ERP amplitude differences between emotional and neutral expressions indicated pre-attentive change detection of facial expressions among neutral faces. The posterior negative difference at 150–180 ms latency resembled the visual mismatch negativity (vMMN), an index of pre-attentive change detection previously studied only for changes in low-level features in vision. The positive anterior difference in

  18. Event-related potential studies of post-traumatic stress disorder: a critical review and synthesis

    Directory of Open Access Journals (Sweden)

    Javanbakht Arash

    2011-10-01

    Despite the sparseness of the currently available data, there is accumulating evidence of information processing impairment in post-traumatic stress disorder (PTSD). Studies of event-related potentials (ERPs) are the main tool in real-time examination of information processing. In this paper, we sought to critically review the ERP evidence of information processing abnormalities in patients with PTSD. We also examined the evidence supporting the existence of a relationship between ERP abnormalities and symptom profiles or severity in PTSD patients. An extensive Medline search was performed. Keywords included PTSD or post-traumatic stress disorder, electrophysiology or EEG, P50, P100, N100, P2, P200, P3, P300, sensory gating, CNV (contingent negative variation), and MMN (mismatch negativity). We limited the review to ERP adult human studies with control groups which were reported in the English language. After applying our inclusion-exclusion review criteria, 36 studies were included. Subjects exposed to wide ranges of military and civilian traumas were studied in these reports. Presented stimuli were both auditory and visual. The most widely studied components included P300, P50 gating, N100 and P200. Most of the studies reported increased P300 response to trauma-related stimuli in PTSD patients. A smaller group of studies reported dampening of responses or no change in responses to trauma-related and/or unrelated stimuli. P50 studies were strongly suggestive of impaired gating in patients with PTSD. In conclusion, the majority of reports support evidence of information processing abnormalities in patients with PTSD diagnosis. The predominance of evidence suggests the presence of mid-latency and late ERP component differences in PTSD patients in comparison to healthy controls. Heterogeneity of assessment methods used contributes to difficulties in reaching firm conclusions regarding the nature of these differences. We suggest

  19. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    of the mRIR with an acoustic signal. The derivation of the mRIRs takes into account that (i) auditory localization is most sensitive to the location of the direct sound and (ii) that auditory localization performance is rather poor for early reflections and even worse for late reverberation. Throughout...... the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29 loudspeaker setup using both objective and subjective measurement techniques....

  20. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard

    2010-03-01

    A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate to cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing.

  1. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component

    OpenAIRE

    Hideaki Tanaka

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the ...

  2. Multisensory interactions elicited by audiovisual stimuli presented peripherally in a visual attention task: a behavioral and event-related potential study in humans.

    Science.gov (United States)

    Wu, Jinglong; Li, Qi; Bai, Ou; Touge, Tetsuo

    2009-12-01

    We applied behavioral and event-related potential measurements to study human multisensory interactions induced by audiovisual (AV) stimuli presented peripherally in a visual attention task in which an irrelevant auditory stimulus occasionally accompanied the visual stimulus. A stream of visual, auditory, and AV stimuli was randomly presented to the left or right side of the subjects; subjects covertly attended to the visual stimuli on either the left or right side and promptly responded to visual targets on that side. Behavioral results showed that responses to AV stimuli were faster and more accurate than those to visual stimuli only. Three event-related potential components related to AV interactions were identified: (1) over the right temporal area, approximately 200 to 220 milliseconds; (2) over the centromedial area, approximately 290 to 310 milliseconds; and (3) over the left and right ventral temporal area, approximately 290 to 310 milliseconds. We found that these interaction effects occurred slightly later than those reported in previously published AV interaction studies in which AV stimuli were presented centrally. Our results suggest that the retinotopic location of stimuli affects AV interactions occurring at later stages of cognitive processing in response to a visual attention task.
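
    Interaction effects of this kind are commonly isolated with the additive model, comparing the bimodal ERP against the sum of the unimodal ERPs, AV - (A + V); a nonzero residual indicates a multisensory interaction. Below is a minimal sketch on synthetic trial-averaged data (the waveforms, window, and amplitudes are invented for illustration and are not taken from the study):

```python
import numpy as np

def av_interaction(erp_av, erp_a, erp_v):
    """Additive-model residual AV - (A + V), computed per time point.

    Each input is a trial-averaged ERP of shape (n_channels, n_samples).
    A nonzero residual indicates a super- or subadditive interaction.
    """
    return erp_av - (erp_a + erp_v)

fs = 500                              # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)         # 0-500 ms epoch

# Synthetic unimodal averages and a superadditive audiovisual average
erp_a = np.sin(2 * np.pi * 5 * t)[None, :]
erp_v = 0.5 * np.sin(2 * np.pi * 3 * t)[None, :]
bump = np.exp(-((t - 0.21) ** 2) / (2 * 0.01 ** 2))   # interaction ~210 ms
erp_av = erp_a + erp_v + 0.8 * bump

residual = av_interaction(erp_av, erp_a, erp_v)
print(float(residual[0].max()))       # peaks at ~0.8 around 210 ms
```

    In practice the residual is tested statistically per channel and latency window, which is how components such as the 200-220 ms right-temporal effect above would be identified.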

  3. A longitudinal, event-related potential pilot study of adult obsessive-compulsive disorder with 1-year follow-up

    Directory of Open Access Journals (Sweden)

    Yamamuro K

    2016-09-01

    Full Text Available Kazuhiko Yamamuro,1 Koji Okada,2 Naoko Kishimoto,1 Toyosaku Ota,1 Junzo Iida,3 Toshifumi Kishimoto1 1Department of Psychiatry, Nara Medical University School of Medicine, 2Department of Psychiatry, Jingumaecocorono-Clinic, 3Faculty of Nursing, Nara Medical University School of Medicine, Kashihara, Japan Aim: Earlier brain imaging research studies have suggested that brain abnormalities in obsessive-compulsive disorder (OCD) normalize as clinical symptoms improve. However, although many studies have investigated event-related potentials (ERPs) in patients with OCD compared with healthy control subjects, it is currently unknown whether ERP changes reflect pharmacological and psychotherapeutic effects. As such, the current study examined the neurocognitive components of OCD to elucidate the pathophysiological abnormalities involved in the disorder, including the frontal-subcortical circuits. Methods: The Yale-Brown Obsessive-Compulsive Scale was used to evaluate 14 adult patients with OCD. The present study also included ten age-, sex-, and IQ-matched controls. The P300 and mismatch negativity (MMN) components during an auditory oddball task at baseline for both groups and after 1 year of treatment for patients with OCD were measured. Results: Compared with controls, P300 amplitude was attenuated in the OCD group at Cz and C4 at baseline. Pharmacotherapy and psychotherapy treatment for 1 year reduced OCD symptomology. P300 amplitude after 1 year of treatment was significantly increased, indicating normalization compared with baseline at Fz, Cz, C3, and C4. We found no differences in P300 latency, MMN amplitude, or MMN latency between baseline and after 1 year of treatment. Conclusion: ERPs may be a useful tool for evaluating pharmacological and cognitive behavioral therapy in adult patients with OCD. Keywords: obsessive-compulsive disorder, event-related potentials, P300, mismatch negativity, improvement
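
    For an auditory oddball task such as the one used here, MMN is conventionally quantified from the deviant-minus-standard difference wave of the trial-averaged epochs. A toy sketch on synthetic data follows (the waveform shapes, window bounds, and trial counts are illustrative assumptions, not the study's parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250
t = np.arange(-0.1, 0.5, 1 / fs)      # epoch from -100 to 500 ms

def make_trials(n, amp):
    """Synthetic single trials: a deflection near 200 ms plus noise."""
    wave = amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return wave + rng.normal(0, 1.0, size=(n, t.size))

standards = make_trials(400, -1.0)    # frequent standard tones
deviants = make_trials(100, -3.0)     # rare deviant tones

# Average within condition, then take the deviant-minus-standard difference
mmn_wave = deviants.mean(axis=0) - standards.mean(axis=0)

# Quantify MMN as the mean amplitude in a 150-250 ms window
win = (t >= 0.15) & (t <= 0.25)
mmn_amplitude = mmn_wave[win].mean()
print(round(float(mmn_amplitude), 2))  # clearly negative for the deviants
```

    Group or treatment effects, such as the baseline-versus-follow-up comparison in this study, are then tested on amplitudes extracted this way per electrode.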

  4. Using event related potentials to identify a user's behavioural intention aroused by product form design.

    Science.gov (United States)

    Ding, Yi; Guo, Fu; Zhang, Xuefeng; Qu, Qingxing; Liu, Weilin

    2016-07-01

    The capacity of product form to arouse a user's behavioural intention plays a decisive role in subsequent user experience and even in purchase decisions, yet traditional methods rarely give a full understanding of the user experience evoked by product form, especially the anticipated feeling of using the product. Behavioural intention aroused by product form designs had not yet been investigated electrophysiologically. Hence, event-related potentials (ERPs) were used to explore the process of behavioural intention while users browsed different smartphone form designs, with brand and price excluded in order to focus on the brain activity evoked by the variety of product forms. Smartphone pictures associated with different anticipated user experiences were displayed randomly with equal probability. Participants were asked to click the left mouse button whenever a picture gave them a feeling of behavioural intention to interact with it. Each participant's brain signal was recorded with Curry 7.0. The results show that pictures able to arouse participants' behavioural intention for further experience evoked an enhanced N300 and late positive potentials (LPPs) in central-parietal, parietal, and occipital regions. The scalp topography shows that these regions are more strongly activated. The results indicate that the discrepancy in ERPs reflects the neural activity underlying whether behavioural intention is formed or not. Moreover, the amplitude of ERPs in the corresponding brain areas can be used to measure user experience. Exploring the neural correlates of behavioural intention provides an accurate measurement of users' perception, helps marketers identify which products can arouse users' behavioural intention, and may serve as an evaluation indicator for product design.

  5. Applying Differentially Variable Component Analysis (dVCA) to Event-related Potentials

    Science.gov (United States)

    Shah, Ankoor S.; Knuth, Kevin H.; Lakatos, Peter; Schroeder, Charles E.

    2004-04-01

    Event-related potentials (ERPs) generated in response to multiple presentations of the same sensory stimulus vary from trial to trial. Accumulating evidence suggests that this variability relates to a similar trial-to-trial variation in the perception of the stimulus. In order to understand this variability, we previously developed differentially Variable Component Analysis (dVCA) as a method for defining dynamical components that contribute to the ERP. The underlying model asserted that: (i) multiple components comprise the ERP; (ii) these components vary in amplitude and latency from trial to trial; and (iii) these components may co-vary. A Bayesian framework was used to derive maximum a posteriori solutions to estimate these components and their latency and amplitude variability. Our original goal in developing dVCA was to produce a method for automated estimation of components in ERPs. However, we discovered that it is better to apply the algorithm in stages because of the complexity of the ERP and to use the results to define interesting subsets of the data, which are further analyzed independently. This paper describes this method and illustrates its application to actual neural signals recorded in response to a visual stimulus. Interestingly, dVCA of these data suggests two distinct response modes (or states) with differing components and variability. Furthermore, analyses of residual signals obtained by subtracting the estimated components from the actual data illustrate gamma-frequency (circa 40 Hz) oscillations, which may underlie communication between various brain regions. These findings demonstrate the power of dVCA and underscore the necessity to apply this algorithm in a guided rather than a ballistic fashion. Furthermore, they highlight the need to examine the residual signals for those features of the signals that were not anticipated and not modeled in the derivation of the algorithm.
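
    The trial-varying model that dVCA assumes, components whose amplitude and latency change from trial to trial, can be illustrated with a much simpler two-step estimator (cross-correlation for latency, least squares for amplitude) in place of the full Bayesian maximum a posteriori procedure; all waveforms and parameters below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_samples = 200, 300

# One underlying component waveform (the dVCA model allows several)
base = np.exp(-((np.arange(n_samples) - 150) ** 2) / (2 * 15 ** 2))

# Simulate trials whose amplitude and latency vary from trial to trial
true_amp = rng.normal(1.0, 0.2, n_trials)
true_lat = rng.integers(-10, 11, n_trials)
trials = np.array([a * np.roll(base, s)
                   for a, s in zip(true_amp, true_lat)])
trials += rng.normal(0, 0.1, (n_trials, n_samples))

template = trials.mean(axis=0)        # initial component estimate

lats, amps = [], []
for trial in trials:
    # Latency: the shift that maximizes cross-correlation with the template
    shifts = range(-20, 21)
    xcorr = [trial @ np.roll(template, s) for s in shifts]
    s_hat = shifts[int(np.argmax(xcorr))]
    shifted = np.roll(template, s_hat)
    lats.append(s_hat)
    # Amplitude: least-squares scaling of the shifted template
    amps.append((trial @ shifted) / (shifted @ shifted))

print(np.corrcoef(lats, true_lat)[0, 1])   # recovers latency variability
```

    dVCA proper iterates this kind of estimation jointly over several components within a Bayesian framework, which is why the authors recommend applying it in guided stages rather than ballistically.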

  6. How Social Ties Influence Consumer: Evidence from Event-Related Potentials.

    Science.gov (United States)

    Luan, Jing; Yao, Zhong; Bai, Yan

    2017-01-01

    A considerable amount of marketing research has reported that consumers are more saliently influenced by friends (strong social ties) than by acquaintances and strangers (weak social ties). To shed light on the neural and psychological processes underlying this phenomenon, we designed an amended S1-S2 paradigm (product-[reviewer-review]) based on realistic consumer purchase experiences. After receiving all of the given information (product, reviewer, review), participants were required to state their purchase intentions. The neurocognitive and emotional processes related to friend and stranger stimuli were delineated to show how social ties influence consumers during the shopping process. Larger P2 (fronto-central scalp areas) and P3 (central and posterior-parietal scalp areas) components were elicited under the stranger condition. These findings demonstrate that the cognitive and emotional processing of friend and stranger stimuli occurs at distinct stages of neural activity and can be indexed by the P2 and P3 components. The electrophysiological data also support the hypothesis that neural and emotional processing of different magnitude and strength underlies the friend versus stranger effect in the context of consumer purchases. During this process, the perception of the stimuli evoked the P2; subsequently, emotional processing and attention modulation were activated, as indicated by the P2 and P3. The friend-dominated phenomenon can be interpreted as the result of this distinctive neurocognitive and emotional processing, which suggests that psychological and emotional factors can guide consumer decision making. This study confirms that event-related potential (ERP) methodology is likely to be a sensitive method for investigating consumer behavior. From the perspectives of management and marketing, our findings show that the P2 and P3 components can be employed as indicators to probe the factors influencing consumer purchase intentions.

  8. Auditory-Visual Integration for Speech by Children with and without Specific Language Impairment

    Science.gov (United States)

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca; Boliek, Carol A.

    2007-01-01

    Purpose: It has long been known that children with specific language impairment (SLI) can demonstrate difficulty with auditory speech perception. However, speech perception can also involve the integration of both auditory and visual articulatory information. Method: Fifty-six preschool children, half with and half without SLI, were studied in…

  9. 中国精神分裂症患者的事件相关电位研究%Research in China on event-related potentials in patients with schizophrenia

    Institute of Scientific and Technical Information of China (English)

    王继军; 郭茜

    2012-01-01

    Event-related potentials (ERPs) are objective electrophysiological indicators that can be used to assess cognitive processes in the human brain. Psychiatric researchers in China have applied this method to study schizophrenia since the early 1980s. ERP measures used in the study of schizophrenia include contingent negative variation (CNV), P300, mismatch negativity (MMN), error-related negativity (ERN) and auditory P50 inhibition. This review summarizes the main findings of ERP research in patients with schizophrenia reported by Chinese investigators.

  10. Expressive timing facilitates the neural processing of phrase boundaries in music: evidence from event-related potentials.

    Directory of Open Access Journals (Sweden)

    Eva Istók

    Full Text Available The organization of sound into meaningful units is fundamental to the processing of auditory information such as speech and music. In expressive music performance, structural units or phrases may become particularly distinguishable through subtle timing variations highlighting musical phrase boundaries. As such, expressive timing may support the successful parsing of otherwise continuous musical material. By means of the event-related potential technique (ERP), we investigated whether expressive timing modulates the neural processing of musical phrases. Musicians and laymen listened to short atonal scale-like melodies that were presented either isochronously (deadpan) or with expressive timing cues emphasizing the melodies' two-phrase structure. Melodies were presented in an active and a passive condition. Expressive timing facilitated the processing of phrase boundaries as indicated by decreased N2b amplitude and enhanced P3a amplitude for target phrase boundaries and larger P2 amplitude for non-target boundaries. When timing cues were lacking, task demands increased especially for laymen as reflected by reduced P3a amplitude. In line, the N2b occurred earlier for musicians in both conditions indicating general faster target detection compared to laymen. Importantly, the elicitation of a P3a-like response to phrase boundaries marked by a pitch leap during passive exposure suggests that expressive timing information is automatically encoded and may lead to an involuntary allocation of attention towards significant events within a melody. We conclude that subtle timing variations in music performance prepare the listener for musical key events by directing and guiding attention towards their occurrences. That is, expressive timing facilitates the structuring and parsing of continuous musical material even when the auditory input is unattended.

  11. Expressive timing facilitates the neural processing of phrase boundaries in music: evidence from event-related potentials.

    Science.gov (United States)

    Istók, Eva; Friberg, Anders; Huotilainen, Minna; Tervaniemi, Mari

    2013-01-01

    The organization of sound into meaningful units is fundamental to the processing of auditory information such as speech and music. In expressive music performance, structural units or phrases may become particularly distinguishable through subtle timing variations highlighting musical phrase boundaries. As such, expressive timing may support the successful parsing of otherwise continuous musical material. By means of the event-related potential technique (ERP), we investigated whether expressive timing modulates the neural processing of musical phrases. Musicians and laymen listened to short atonal scale-like melodies that were presented either isochronously (deadpan) or with expressive timing cues emphasizing the melodies' two-phrase structure. Melodies were presented in an active and a passive condition. Expressive timing facilitated the processing of phrase boundaries as indicated by decreased N2b amplitude and enhanced P3a amplitude for target phrase boundaries and larger P2 amplitude for non-target boundaries. When timing cues were lacking, task demands increased especially for laymen as reflected by reduced P3a amplitude. In line, the N2b occurred earlier for musicians in both conditions indicating general faster target detection compared to laymen. Importantly, the elicitation of a P3a-like response to phrase boundaries marked by a pitch leap during passive exposure suggests that expressive timing information is automatically encoded and may lead to an involuntary allocation of attention towards significant events within a melody. We conclude that subtle timing variations in music performance prepare the listener for musical key events by directing and guiding attention towards their occurrences. That is, expressive timing facilitates the structuring and parsing of continuous musical material even when the auditory input is unattended.

  12. Assessing attention and cognitive function in completely locked-in state with event-related brain potentials and epidural electrocorticography

    Science.gov (United States)

    Bensch, Michael; Martens, Suzanne; Halder, Sebastian; Hill, Jeremy; Nijboer, Femke; Ramos, Ander; Birbaumer, Niels; Bogdan, Martin; Kotchoubey, Boris; Rosenstiel, Wolfgang; Schölkopf, Bernhard; Gharabaghi, Alireza

    2014-04-01

    Objective. Patients in the completely locked-in state (CLIS), due to, for example, amyotrophic lateral sclerosis (ALS), no longer possess voluntary muscle control. Assessing attention and cognitive function in these patients during the course of the disease is a challenging but essential task for both nursing staff and physicians. Approach. An electrophysiological cognition test battery, including auditory and semantic stimuli, was applied in a late-stage ALS patient at four different time points during a six-month epidural electrocorticography (ECoG) recording period. Event-related cortical potentials (ERP), together with changes in the ECoG signal spectrum, were recorded via 128 channels that partially covered the left frontal, temporal and parietal cortex. Main results. Auditory but not semantic stimuli induced significant and reproducible ERP projecting to specific temporal and parietal cortical areas. N1/P2 responses could be detected throughout the whole study period. The highest P3 ERP was measured immediately after the patient's last communication through voluntary muscle control, which was paralleled by low theta and high gamma spectral power. Three months after the patient's last communication, i.e., in the CLIS, P3 responses could no longer be detected. At the same time, increased activity in low-frequency bands and a sharp drop of gamma spectral power were recorded. Significance. Cortical electrophysiological measures indicate at least partially intact attention and cognitive function during sparse volitional motor control for communication. Although the P3 ERP and frequency-specific changes in the ECoG spectrum may serve as indicators for CLIS, a close-meshed monitoring will be required to define the exact time point of the transition.

  13. Retrieving self-vocalized information: An event-related potential (ERP) study on the effect of retrieval orientation.

    Science.gov (United States)

    Rosburg, Timm; Johansson, Mikael; Sprondel, Volker; Mecklinger, Axel

    2014-11-18

    Retrieval orientation refers to a pre-retrieval process and conceptualizes the specific form of processing that is applied to a retrieval cue. In the current event-related potential (ERP) study, we sought to find evidence for an involvement of the auditory cortex when subjects attempt to retrieve vocalized information, and hypothesized that adopting a retrieval orientation would be beneficial for retrieval accuracy. During study, participants saw object words that they subsequently vocalized or visually imagined. At test, participants had to identify object names of one study condition as targets and to reject object names of the second condition together with new items. The target category switched after half of the test trials. Behaviorally, participants responded less accurately and more slowly to targets of the vocalize condition than to targets of the imagine condition. ERPs to new items varied at a single left electrode (T7) between 500 and 800 ms, indicating a moderate retrieval orientation effect in the subject group as a whole. However, whereas the effect was strongly pronounced in participants with high retrieval accuracy, it was absent in participants with low retrieval accuracy. A current source density (CSD) mapping of the retrieval orientation effect indicated a source over left temporal regions. Independently of retrieval accuracy, the ERP retrieval orientation effect was, surprisingly, also modulated by test order. The findings are suggestive of an involvement of the auditory cortex in retrieval attempts of vocalized information and confirm that adopting a retrieval orientation is potentially beneficial for retrieval accuracy. The effects of test order on retrieval-related processes might reflect a stronger focus on the newness of items in the more difficult test condition when participants started with this condition.

  14. Neural dynamics of audiovisual synchrony and asynchrony perception in 6-month-old infants

    Directory of Open Access Journals (Sweden)

    Franziska eKopp

    2013-01-01

    Full Text Available Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related potentials (ERPs). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants' ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants' predictive capacities for audiovisual temporal synchrony relations.

  15. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component

    Science.gov (United States)

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study subsequently demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude. PMID:27656161

  17. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-related Potential Component

    Directory of Open Access Journals (Sweden)

    Hideaki Tanaka

    2016-09-01

    Full Text Available Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study subsequently demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude.

  18. Perceptual Plasticity for Auditory Object Recognition

    Directory of Open Access Journals (Sweden)

    Shannon L. M. Heald

    2017-05-01

    Full Text Available In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we

  19. Introducing the event related fixed interval area (ERFIA) multilevel technique: a method to analyze the complete epoch of event-related potentials at single trial level

    NARCIS (Netherlands)

    Vossen, C.J.; Vossen, H.G.M.; Marcus, M.A.E.; van Os, J.; Lousberg, R.

    2013-01-01

    In analyzing time-locked event-related potentials (ERPs), many studies have focused on specific peaks and their differences between experimental conditions. In theory, each latency point after a stimulus contains potentially meaningful information, regardless of whether it is peak-related. Based on

  20. Effective Connectivity Hierarchically Links Temporoparietal and Frontal Areas of the Auditory Dorsal Stream with the Motor Cortex Lip Area during Speech Perception

    Science.gov (United States)

    Murakami, Takenobu; Restle, Julia; Ziemann, Ulf

    2012-01-01

    A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…

  1. Perception of acoustically complex phonological features in vowels is reflected in the induced brain-magnetic activity

    Directory of Open Access Journals (Sweden)

    Obleser Jonas

    2007-06-01

    Full Text Available Abstract A central issue in speech recognition is which basic units of speech are extracted by the auditory system and used for lexical access. One suggestion is that complex acoustic-phonetic information is mapped onto abstract phonological representations of speech and that a finite set of phonological features is used to guide speech perception. Previous studies analyzing the N1m component of the auditory evoked field have shown that this holds for the acoustically simple feature place of articulation. Brain magnetic correlates indexing the extraction of acoustically more complex features, such as lip rounding (ROUND) in vowels, have not been unraveled yet. The present study uses magnetoencephalography (MEG) to describe the spatial-temporal neural dynamics underlying the extraction of phonological features. We examined the induced electromagnetic brain response to German vowels and found the event-related desynchronization in the upper beta-band to be prolonged for those vowels that exhibit the lip rounding feature (ROUND). It was the presence of that feature rather than circumscribed single acoustic parameters, such as their formant frequencies, which explained the differences between the experimental conditions. We conclude that the prolonged event-related desynchronization in the upper beta-band correlates with the computational effort for the extraction of acoustically complex phonological features from the speech signal. The results provide an additional biomagnetic parameter to study mechanisms of speech perception.

  2. Cerebral Responses to Vocal Attractiveness and Auditory Hallucinations in Schizophrenia: A Functional MRI Study

    Directory of Open Access Journals (Sweden)

    Michihiko eKoeda

    2013-05-01

    Full Text Available Impaired self-monitoring and abnormalities of cognitive bias have been implicated as cognitive mechanisms of hallucination; regions fundamental to these processes, including the inferior frontal gyrus (IFG) and superior temporal gyrus (STG), are abnormally activated in individuals who hallucinate. A recent study showed activation in IFG-STG to be modulated by auditory attractiveness, but no study has investigated whether these IFG-STG activations are impaired in schizophrenia. We aimed to clarify the cerebral function underlying the perception of auditory attractiveness in schizophrenia patients. Cerebral activation was examined in 18 schizophrenia patients and 18 controls performing a Favourability Judgment Task (FJT) and a Gender Differentiation Task (GDT) for pairs of greetings using event-related functional MRI. A full-factorial analysis revealed that the main effect of task was associated with activation of the left IFG and STG. The main effect of group revealed less activation of the left STG in schizophrenia compared with controls, whereas significantly greater activation in schizophrenia than in controls was revealed at the left middle frontal gyrus (MFG), right temporo-parietal junction (TPJ), right occipital lobe, and right amygdala (p<0.05, FDR-corrected). A significant positive correlation was observed at the right TPJ and right MFG between cerebral activation in the FJT minus GDT contrast and the score for hallucinatory behaviour on the Positive and Negative Syndrome Scale. Findings of hypo-activation in the left STG could designate brain dysfunction in accessing vocal attractiveness in schizophrenia, whereas hyper-activation in the right TPJ and MFG may reflect the process of mentalizing another person’s behaviour during auditory hallucination, driven by an abnormality of cognitive bias.

  3. Alterations in neural processing of emotional faces in adolescent anorexia nervosa patients - an event-related potential study.

    Science.gov (United States)

    Sfärlea, Anca; Greimel, Ellen; Platt, Belinda; Bartling, Jürgen; Schulte-Körne, Gerd; Dieler, Alica C

    2016-09-01

    The present study explored the neurophysiological correlates of perception and recognition of emotional facial expressions in adolescent anorexia nervosa (AN) patients using event-related potentials (ERPs). We included 20 adolescent girls with AN and 24 healthy girls and recorded ERPs during a passive viewing task and three active tasks requiring processing of emotional faces in varying processing depths; one of the tasks also assessed emotion recognition abilities behaviourally. Despite the absence of behavioural differences, we found that across all tasks AN patients exhibited a less pronounced early posterior negativity (EPN) in response to all facial expressions compared to controls. The EPN is an ERP component reflecting an automatic, perceptual processing stage which is modulated by the intrinsic salience of a stimulus. Hence, the less pronounced EPN in anorexic girls suggests that they might perceive other people's faces as less intrinsically relevant, i.e. as less "important" than do healthy girls. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Discrimination of fearful and happy body postures in 8-month-old infants: An event-related potential study

    Directory of Open Access Journals (Sweden)

    Manuela eMissana

    2014-07-01

    Full Text Available Responding to others’ emotional body expressions is an essential social skill in humans. Adults readily detect emotions from body postures, but it is unclear whether infants are sensitive to emotional body postures. We examined 8-month-old infants’ brain responses to emotional body postures by measuring event-related potentials (ERPs) to happy and fearful bodies. Our results revealed two emotion-sensitive ERP components: body postures evoked an early N290 at occipital electrodes and a later Nc at fronto-central electrodes that were enhanced in response to fearful (relative to happy) expressions. These findings demonstrate that (a) 8-month-old infants discriminate between static emotional body postures, and (b) similar to infant emotional face perception, the sensitivity to emotional body postures is reflected in early perceptual (N290) and later attentional (Nc) neural processes. This provides evidence for an early developmental emergence of the neural processes involved in the discrimination of emotional body postures.

  5. Cognitive mechanisms associated with auditory sensory gating.

    Science.gov (United States)

    Jones, L A; Hills, P J; Dick, K M; Jones, S P; Bright, P

    2016-02-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification.
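    As an illustrative aside, the paired-stimulus gating measure this record describes is conventionally summarised as an S2/S1 amplitude ratio, where lower values indicate stronger suppression of the response to the repeated click. The sketch below is a toy computation, not the authors' procedure; the time window, sampling rate, and peak-picking rule are assumptions.

    ```python
    import numpy as np

    def p50_gating_ratio(erp_s1, erp_s2, window, fs):
        """P50 suppression ratio from averaged ERPs to the first (S1) and
        second (S2) stimulus of a paired-click trial.

        erp_s1, erp_s2 : 1-D averaged waveforms (microvolts), time-locked
                         to the respective stimulus onset
        window         : (start, stop) in seconds, e.g. (0.04, 0.08)
        fs             : sampling rate in Hz
        Returns the S2/S1 peak-amplitude ratio; values near 0 indicate
        strong gating, values near 1 indicate little suppression.
        """
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        p50_s1 = erp_s1[i0:i1].max()
        p50_s2 = erp_s2[i0:i1].max()
        return p50_s2 / p50_s1
    ```

    With a synthetic S1 waveform and an S2 waveform at a quarter of its amplitude, the function returns 0.25, i.e. 75% suppression.
    
    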

  6. Dimension reduction: additional benefit of an optimal filter for independent component analysis to extract event-related potentials.

    Science.gov (United States)

    Cong, Fengyu; Leppänen, Paavo H T; Astikainen, Piia; Hämäläinen, Jarmo; Hietanen, Jari K; Ristaniemi, Tapani

    2011-09-30

    The present study addresses benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually considered a denoising tool. In fact, when ERP recordings are filtered by an OF, the ERP's topography should not be changed by the filter, and the filtered output should still conform to the linear transformation model. Moreover, an OF designed for a specific ERP source or component may remove noise, as well as reduce the overlap of sources and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish both denoising and dimension reduction (reducing the number of sources) simultaneously. We demonstrated these effects using two datasets, one containing visual and the other auditory ERPs. The results showed that the method combining OF and ICA extracted much more reliable components than ICA alone (without the OF) did, and that the OF removed some non-targeted sources and brought the underdetermined model of the EEG recordings closer to a determined one. Thus, we suggest designing an OF based on the properties of an ERP to filter recordings before using ICA decomposition to extract the targeted ERP component. Copyright © 2011 Elsevier B.V. All rights reserved.
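    The pipeline this record describes, a linear zero-phase filter applied before ICA, can be sketched on simulated data. This is an illustrative toy example, not the authors' code: the channel count, source frequencies, and the use of scipy/scikit-learn are assumptions. Because the filter is linear and zero-phase, the mixing (topography) is preserved, so the filtered data still fit the linear ICA model while the out-of-band source is suppressed.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)

    # Simulated multi-channel EEG: 8 channels, 2 s at 250 Hz.
    fs = 250
    t = np.arange(0, 2, 1 / fs)
    n_channels = 8

    # A slow "ERP-like" source (~3 Hz) plus a fast nuisance source (~40 Hz),
    # mixed linearly into the channels with additive sensor noise.
    erp_src = np.sin(2 * np.pi * 3 * t)
    noise_src = np.sin(2 * np.pi * 40 * t)
    mixing = rng.normal(size=(n_channels, 2))
    eeg = mixing @ np.vstack([erp_src, noise_src])
    eeg += 0.1 * rng.normal(size=(n_channels, t.size))

    # Filter stage (here a simple zero-phase low-pass stands in for the OF).
    # Linear zero-phase filtering leaves the mixing matrix untouched, so the
    # filtered recording still satisfies the linear ICA model.
    b, a = butter(4, 8, btype="low", fs=fs)
    eeg_filt = filtfilt(b, a, eeg, axis=1)

    # ICA on the filtered data; the 40 Hz source has largely been removed,
    # so fewer components suffice (the dimension-reduction effect).
    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(eeg_filt.T).T  # (components, samples)
    ```

    After filtering, one recovered component should track the slow ERP-like source closely, illustrating why pre-filtering makes the subsequent decomposition better determined.
    
    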

  7. Event-related potentials reflect the efficacy of pharmaceutical treatments in children and adolescents with attention deficit/hyperactivity disorder.

    Science.gov (United States)

    Yamamuro, Kazuhiko; Ota, Toyosaku; Iida, Junzo; Nakanishi, Yoko; Matsuura, Hiroki; Uratani, Mitsuhiro; Okazaki, Kosuke; Kishimoto, Naoko; Tanaka, Shohei; Kishimoto, Toshifumi

    2016-08-30

    Few objective biological measures of pharmacological treatment efficacy exist for attention deficit/hyperactivity disorder (ADHD). Although we have previously demonstrated that event-related potentials (ERPs) reflect the effects of osmotic-release methylphenidate in treatment-naïve pediatric patients with ADHD, whether this is true for the therapeutic effects of atomoxetine (ATX) is unknown. Here, we used the Japanese version of the ADHD Rating Scale-IV to evaluate 14 patients with ADHD, and compared their ERP data with 14 age- and sex-matched controls. We measured P300 and mismatch negativity (MMN) components during an auditory oddball task before treatment (treatment naïve) and after 2 months of ATX treatment. Compared with controls, P300 components at baseline were attenuated and prolonged in the ADHD group at the Fz (fronto-central), Cz (centro-parietal), Pz (parietal), C3, and C4 electrodes. ATX treatment reduced ADHD symptomology, and after 2 months of treatment, P300 latencies at the Fz, Cz, Pz, C3, and C4 electrodes were significantly shorter than those at baseline. Moreover, MMN amplitudes at the Cz and C3 electrodes were significantly greater than those at baseline. Thus, ERPs may be useful for evaluating the pharmacological effects of ATX in pediatric and adolescent patients with ADHD.

  8. Event-related potentials in drug-naïve pediatric patients with obsessive-compulsive disorder.

    Science.gov (United States)

    Yamamuro, Kazuhiko; Ota, Toyosaku; Nakanishi, Yoko; Matsuura, Hiroki; Okazaki, Kosuke; Kishimoto, Naoko; Takahashi, Hiroyuki; Iwasaka, Hidemi; Iida, Junzo; Kishimoto, Toshifumi

    2015-12-15

    Obsessive-compulsive disorder (OCD) is one of the most common mental health disorders, characterized by obsessive thoughts and/or compulsive behaviors, which may involve specific disorders of cognition and/or information processing. Event-related potentials (ERPs) are commonly used as physiological measures of cognitive function because they are easily measured and noninvasive. In the present study, 20 drug-naïve pediatric patients with OCD were compared with 20 age- and sex-matched healthy control participants on ERP measures. Following the guidelines for evoked potential measurement, the P300 and mismatch negativity (MMN) were obtained using auditory oddball tasks. We found that the amplitudes of the P300 components in the Fz, Cz, Pz, C3, and C4 regions were significantly smaller in the OCD group compared with the control group. There were no between-group differences in P300 latency, MMN amplitude, or MMN latency. Moreover, we found significant correlations between scores on the Children's Yale-Brown Obsessive-Compulsive Scale (CY-BOCS) and P300 amplitudes at Cz, Pz, and C3. The present study is the first to report smaller P300s and the associations between P300 abnormalities and CY-BOCS scores.
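    The P300 measures compared across groups in the two preceding records boil down to a peak search in a fixed post-stimulus latency window of the averaged waveform. The sketch below is a minimal illustration under assumed conventions (a 250-500 ms window, simple maximum-picking); actual studies may use adaptive windows or mean-amplitude measures.

    ```python
    import numpy as np

    def p300_peak(erp, fs, window=(0.25, 0.5)):
        """Peak amplitude and latency of the P300 in an averaged ERP.

        erp    : 1-D averaged waveform (microvolts), time-locked to stimulus onset
        fs     : sampling rate in Hz
        window : (start, stop) search window in seconds after stimulus onset
        Returns (amplitude_in_microvolts, latency_in_seconds).
        """
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        segment = erp[i0:i1]
        k = int(np.argmax(segment))          # index of the most positive point
        return float(segment[k]), (i0 + k) / fs
    ```

    A smaller returned amplitude or a longer returned latency is the kind of group difference these records report for patients relative to controls.
    
    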

  9. Recording event-related activity under hostile magnetic resonance environment: Is multimodal EEG/ERP-MRI recording possible?

    Science.gov (United States)

    Karakaş, H M; Karakaş, S; Ozkan Ceylan, A; Tali, E T

    2009-08-01

    Event-related potentials (ERPs) have high temporal resolution, but insufficient spatial resolution; the converse is true for the functional imaging techniques. The purpose of the study was to test the utility of a multimodal EEG/ERP-MRI technique which combines electroencephalography (EEG) and magnetic resonance imaging (MRI) for a simultaneously high temporal and spatial resolution. The sample consisted of 32 healthy young adults of both sexes. Auditory stimuli were delivered according to the active and passive oddball paradigms in the MRI environment (MRI-e) and in the standard conditions of the electrophysiology laboratory environment (Lab-e). Tasks were presented in a fixed order. Participants were exposed to the recording environments in a counterbalanced order. EEG data were preprocessed for MRI-related artifacts. Source localization was made using a current density reconstruction technique. The ERP waveforms for the MRI-e were morphologically similar to those for the Lab-e. The effect of the recording environment, experimental paradigm and electrode location were analyzed using a 2x2x3 analysis of variance for repeated measures. The ERP components in the two environments showed parametric variations and characteristic topographical distributions. The calculated sources were in line with the related literature. The findings indicated effortful cognitive processing in MRI-e. The study provided preliminary data on the feasibility of the multimodal EEG/ERP-MRI technique. It also indicated lines of research that are to be pursued for a decisive testing of this technique and its implementation to clinical practice.
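    The 2x2x3 repeated-measures analysis this record mentions (recording environment x paradigm x electrode, all within-subject) can be set up in a few lines. This is a generic sketch on simulated data using statsmodels' `AnovaRM`; the factor names, levels, and effect sizes are invented for illustration and do not come from the study.

    ```python
    from itertools import product

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(1)

    # Simulated P300 amplitudes: 10 subjects, each measured in every cell of a
    # 2 (environment) x 2 (paradigm) x 3 (electrode) within-subject design.
    rows = []
    for subj, env, paradigm, electrode in product(
            range(10), ["MRI", "Lab"], ["active", "passive"], ["Fz", "Cz", "Pz"]):
        # Invented effect: slightly larger amplitudes in the Lab environment.
        amp = 10.0 + (2.0 if env == "Lab" else 0.0) + rng.normal(scale=1.0)
        rows.append({"subject": subj, "env": env, "paradigm": paradigm,
                     "electrode": electrode, "amplitude": amp})
    df = pd.DataFrame(rows)

    # Repeated-measures ANOVA with three within-subject factors: yields three
    # main effects, three two-way interactions, and one three-way interaction.
    res = AnovaRM(df, depvar="amplitude", subject="subject",
                  within=["env", "paradigm", "electrode"]).fit()
    print(res.anova_table)
    ```

    The resulting table has one row per effect (seven in total), matching the main effects and interactions a 2x2x3 design affords.
    
    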

  10. Differences in Cortical Sources of the Event-Related P3 Potential Between Young and Old Participants Indicate Frontal Compensation.

    Science.gov (United States)

    van Dinteren, R; Huster, R J; Jongsma, M L A; Kessels, R P C; Arns, M

    2017-01-18

    The event-related P3 potential, as elicited in auditory signal detection tasks, originates from neural activity of multiple cortical structures and presumably reflects an overlap of several cognitive processes. The fact that the P3 is affected by aging makes it a potential metric for age-related cognitive change. The P3 in older participants is thought to encompass frontal compensatory activity in addition to task-related processes. The current study investigates this by decomposing the P3 using group independent component analysis (ICA). Independent components (IC) of young and old participants were compared in order to investigate the effects of aging. Exact low-resolution tomography analysis (eLORETA) was used to compare current source densities between young and old participants for the P3-ICs to localize differences in cortical source activity for every IC. One of the P3-related ICs reflected a different constellation of cortical generators in older participants compared to younger participants, suggesting that this P3-IC reflects shifts in neural activations and compensatory processes with aging. This P3-IC was localized to the orbitofrontal/temporal, and the medio-parietal regions. For this IC, older participants showed more frontal activation and less parietal activation as measured on the scalp. The differences in cortical sources were localized in the precentral gyrus and the parahippocampal gyrus. This finding might reflect compensatory activity recruited from these cortical sources during a signal detection task.

  11. Genetic correlates of the development of theta event related oscillations in adolescents and young adults.

    Science.gov (United States)

    Chorlian, David B; Rangaswamy, Madhavi; Manz, Niklas; Meyers, Jacquelyn L; Kang, Sun J; Kamarajan, Chella; Pandey, Ashwini K; Wang, Jen-Chyong; Wetherill, Leah; Edenberg, Howard; Porjesz, Bernice

    2016-11-12

    The developmental trajectories of theta band (4-7Hz) event-related oscillations (EROs), a key neurophysiological constituent of the P3 response, were assessed in 2170 adolescents and young adults ages 12 to 25. The theta EROs occurring in the P3 response, important indicators of neurocognitive function, were elicited during the evaluation of task-relevant target stimuli in visual and auditory oddball tasks. Associations between the theta EROs and genotypic variants of 4 KCNJ6 single nucleotide polymorphisms (SNPs) were found to vary with age, sex, scalp location, and task modality. Three of the four KCNJ6 SNPs studied here were found to be significantly associated with the same theta EROs in adults in a previous family genome-wide association study. Sin