WorldWideScience

Sample records for auditory brain regions

  1. Auditory motion in the sighted and blind: Early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions.

    Science.gov (United States)

    Dormal, Giulia; Rezk, Mohamed; Yakobov, Esther; Lepore, Franco; Collignon, Olivier

    2016-07-01

    How early blindness reorganizes the brain circuitry that supports auditory motion processing remains controversial. We used fMRI to characterize brain responses to in-depth, laterally moving, and static sounds in early blind and sighted individuals. Whole-brain univariate analyses revealed that the right posterior middle temporal gyrus and superior occipital gyrus selectively responded to both in-depth and laterally moving sounds only in the blind. These regions overlapped with regions selective for visual motion (hMT+/V5 and V3A) that were independently localized in the sighted. In the early blind, the right planum temporale showed enhanced functional connectivity with right occipito-temporal regions during auditory motion processing and a concomitant reduced functional connectivity with parietal and frontal regions. Whole-brain searchlight multivariate analyses demonstrated higher auditory motion decoding in the right posterior middle temporal gyrus in the blind compared to the sighted, while decoding accuracy was enhanced in the auditory cortex bilaterally in the sighted compared to the blind. Analyses targeting individually defined visual area hMT+/V5 however indicated that auditory motion information could be reliably decoded within this area even in the sighted group. Taken together, the present findings demonstrate that early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions that typically support the processing of motion information. Copyright © 2016 Elsevier Inc. All rights reserved.
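
    The abstract above describes whole-brain searchlight and ROI-based multivariate decoding of auditory motion. The following is a minimal, generic sketch of the ROI-based decoding idea only (not the authors' pipeline): a linear classifier cross-validated on per-trial voxel patterns. `roi_patterns` and `motion_labels` are hypothetical placeholders.

```python
# Minimal sketch of ROI-based decoding of an auditory motion condition.
# `roi_patterns` and `motion_labels` are hypothetical: one vector of voxel
# responses per trial, and the motion condition (0 = lateral, 1 = in-depth).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 150
roi_patterns = rng.normal(size=(n_trials, n_voxels))   # trials x voxels
motion_labels = rng.integers(0, 2, size=n_trials)      # condition per trial

# Linear classifier with k-fold cross-validation; chance level is 0.5 here.
clf = SVC(kernel="linear")
accuracies = cross_val_score(clf, roi_patterns, motion_labels, cv=5)
print(f"mean decoding accuracy: {accuracies.mean():.2f}")
```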

  2. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom
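
    The study above measures resting-state functional connectivity "between all channel-pairs" over a 60-second baseline. A common way to do this with fNIRS data is to correlate channel time series pairwise; the sketch below illustrates only that generic step, with a hypothetical `hbo` array standing in for preprocessed oxy-hemoglobin signals (this is not the authors' pipeline).

```python
# Generic illustration of channel-pair connectivity: Pearson correlation
# of every pair of fNIRS channel time series over a baseline window.
# `hbo` is hypothetical (channels x time samples).
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 16, 600           # e.g., 60 s of baseline at 10 Hz
hbo = rng.normal(size=(n_channels, n_samples))

connectivity = np.corrcoef(hbo)           # channels x channels correlation matrix
# Keep only the unique channel pairs (upper triangle, excluding the diagonal).
pairs = [(i, j, connectivity[i, j])
         for i in range(n_channels) for j in range(i + 1, n_channels)]
print(len(pairs), "channel pairs")
```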

  3. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Full Text Available Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  4. Brain Region-Specific Activity Patterns after Recent or Remote Memory Retrieval of Auditory Conditioned Fear

    Science.gov (United States)

    Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee

    2012-01-01

    Memory is thought to be sparsely encoded throughout multiple brain regions forming a unique memory trace. Although evidence has established that the amygdala is a key brain site for memory storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or…

  5. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity

    Directory of Open Access Journals (Sweden)

    Mark Laing

    2015-10-01

    Full Text Available The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only or auditory-visual (AV) trials in the scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for auditory-visual integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.

  6. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity.

    Science.gov (United States)

    Laing, Mark; Rees, Adrian; Vuong, Quoc C

    2015-01-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.
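
    The connectivity results above come from psychophysiological interaction (PPI) analyses. As a rough illustration of the regressor structure only (real PPI pipelines deconvolve the seed signal to the neural level first, and this is not the authors' code), a PPI term is the product of a seed time course and a condition contrast; all names below are hypothetical.

```python
# Rough sketch of a psychophysiological interaction (PPI) regressor: the
# element-wise product of a mean-centered seed time course with a condition
# contrast (+1 for AV incongruent blocks, -1 for AV congruent blocks).
import numpy as np

rng = np.random.default_rng(2)
n_scans = 200
seed_ts = rng.normal(size=n_scans)                              # auditory seed time course
condition = np.where(np.arange(n_scans) % 40 < 20, 1.0, -1.0)   # block contrast

ppi = (seed_ts - seed_ts.mean()) * condition                    # interaction regressor
design = np.column_stack([seed_ts, condition, ppi, np.ones(n_scans)])
print(design.shape)                                             # scans x regressors
```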

  7. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Gregory D. Scott

    2014-03-01

    Full Text Available Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl’s gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity) in this case, as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral versus perifoveal visual stimulation (11°-15° vs. 2°-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl’s gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl’s gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral versus perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory and multisensory and/or supramodal regions, such as posterior parietal cortex, frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal and multisensory regions, to altered visual processing in

  8. Fast learning of simple perceptual discriminations reduces brain activation in working memory and in high-level auditory regions.

    Science.gov (United States)

    Daikhin, Luba; Ahissar, Merav

    2015-07-01

    Introducing simple stimulus regularities facilitates learning of both simple and complex tasks. This facilitation may reflect an implicit change in the strategies used to solve the task when successful predictions regarding incoming stimuli can be formed. We studied the modifications in brain activity associated with fast perceptual learning based on regularity detection. We administered a two-tone frequency discrimination task and measured brain activation (fMRI) under two conditions: with and without a repeated reference tone. Although participants could not explicitly tell the difference between these two conditions, the introduced regularity affected both performance and the pattern of brain activation. The "No-Reference" condition induced a larger activation in frontoparietal areas known to be part of the working memory network. However, only the condition with a reference showed fast learning, which was accompanied by a reduction of activity in two regions: the left intraparietal area, involved in stimulus retention, and the posterior superior-temporal area, involved in representing auditory regularities. We propose that this joint reduction reflects a reduction in the need for online storage of the compared tones. We further suggest that this change reflects an implicit strategic shift "backwards" from reliance mainly on working memory networks in the "No-Reference" condition to increased reliance on detected regularities stored in high-level auditory networks.

  9. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    Directory of Open Access Journals (Sweden)

    Jiagui Qu

    Full Text Available Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  10. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    Science.gov (United States)

    Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye

    2014-01-01

    Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.
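
    The measure used above is the N100 auditory evoked potential amplitude and trough latency. The sketch below shows one generic way such values are extracted from epoched EEG (average the epochs, then find the most negative deflection in a window around 100 ms); the data, electrode, and search window are hypothetical and this is not the authors' pipeline.

```python
# Minimal sketch of extracting N100 amplitude and trough latency from
# epoched single-electrode EEG. `epochs`, the sampling rate, and the
# 80-150 ms search window are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(3)
fs = 500                                       # Hz
times = np.arange(-0.1, 0.4, 1 / fs)           # seconds relative to the click
epochs = rng.normal(size=(120, times.size))    # trials x samples

erp = epochs.mean(axis=0)                      # auditory evoked potential
win = (times >= 0.08) & (times <= 0.15)        # window around 100 ms
trough_idx = np.where(win)[0][np.argmin(erp[win])]
print(f"N100 amplitude {erp[trough_idx]:.2f} µV at {times[trough_idx] * 1000:.0f} ms")
```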

  11. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Brain regions for sound processing and song release in a small grasshopper.

    Science.gov (United States)

    Bhavsar, Mit Balvantray; Stumpner, Andreas; Heinrich, Ralf

    2017-05-01

    We investigated brain regions - mostly neuropils - that process auditory information relevant for the initiation of response songs of female grasshoppers Chorthippus biguttulus during bidirectional intraspecific acoustic communication. Male-female acoustic duets in the species Ch. biguttulus require the perception of sounds, their recognition as a species- and gender-specific signal and the initiation of commands that activate thoracic pattern generating circuits to drive the sound-producing stridulatory movements of the hind legs. To study sensory-to-motor processing during acoustic communication we used multielectrodes that allowed simultaneous recordings of acoustically stimulated electrical activity from several ascending auditory interneurons or local brain neurons and subsequent electrical stimulation of the recording site. Auditory activity was detected in the lateral protocerebrum (where most of the described ascending auditory interneurons terminate), in the superior medial protocerebrum and in the central complex, that has previously been implicated in the control of sound production. Neural responses to behaviorally attractive sound stimuli showed no or only poor correlation with behavioral responses. Current injections into the lateral protocerebrum, the central complex and the deuto-/tritocerebrum (close to the cerebro-cervical fascicles), but not into the superior medial protocerebrum, elicited species-typical stridulation with high success rate. Latencies and numbers of phrases produced by electrical stimulation were different between these brain regions. Our results indicate three brain regions (likely neuropils) where auditory activity can be detected with two of these regions being potentially involved in song initiation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Effects of scanner acoustic noise on intrinsic brain activity during auditory stimulation.

    Science.gov (United States)

    Yakunina, Natalia; Kang, Eun Kyoung; Kim, Tae Su; Min, Ji-Hoon; Kim, Sam Soo; Nam, Eui-Cheol

    2015-10-01

    Although the effects of scanner background noise (SBN) during functional magnetic resonance imaging (fMRI) have been extensively investigated for the brain regions involved in auditory processing, its impact on other types of intrinsic brain activity has largely been neglected. The present study evaluated the influence of SBN on a number of intrinsic connectivity networks (ICNs) during auditory stimulation by comparing the results obtained using sparse temporal acquisition (STA) with those using continuous acquisition (CA). Fourteen healthy subjects were presented with classical music pieces in a block paradigm during two sessions of STA and CA. A volume-matched CA dataset (CAm) was generated by subsampling the CA dataset to temporally match it with the STA data. Independent component analysis was performed on the concatenated STA-CAm datasets, and voxel data, time courses, power spectra, and functional connectivity were compared. The ICA revealed 19 ICNs; the auditory, default mode, salience, and frontoparietal networks showed greater activity in the STA. The spectral peaks in 17 networks corresponded to the stimulation cycles in the STA, while only five networks displayed this correspondence in the CA. The dorsal default mode and salience networks exhibited stronger correlations with the stimulus waveform in the STA. SBN appeared to influence not only the areas of auditory response but also the majority of other ICNs, including attention and sensory networks. Therefore, SBN should be regarded as a serious nuisance factor during fMRI studies investigating intrinsic brain activity under external stimulation or task loads.
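
    Two steps described above lend themselves to a short numerical illustration: subsampling the continuous-acquisition (CA) series to form the volume-matched CAm dataset, and correlating a network time course with the block stimulation cycle. The sketch below assumes hypothetical repetition times and data and is not the authors' pipeline.

```python
# Sketch under assumed parameters: (1) subsample a CA time course at the
# sparse (STA) acquisition times to build a volume-matched CAm series, and
# (2) correlate a network time course with the block stimulation waveform.
import numpy as np

rng = np.random.default_rng(4)
tr_ca, tr_sta = 2.0, 8.0                           # assumed repetition times (s)
ca_ts = rng.normal(size=300)                       # one ICN time course, CA session
sta_indices = np.arange(0, ca_ts.size, int(tr_sta / tr_ca))
cam_ts = ca_ts[sta_indices]                        # volume-matched CAm series

block = np.tile(np.r_[np.ones(5), np.zeros(5)],    # e.g., 5 volumes music, 5 silence
                cam_ts.size // 10 + 1)[:cam_ts.size]
r = np.corrcoef(cam_ts, block)[0, 1]               # correlation with stimulation cycle
print(f"correlation with stimulation cycle: {r:.2f}")
```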

  14. Effects of scanner acoustic noise on intrinsic brain activity during auditory stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Yakunina, Natalia [Kangwon National University, Institute of Medical Science, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kang, Eun Kyoung [Kangwon National University Hospital, Department of Rehabilitation Medicine, Chuncheon (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Otolaryngology, Chuncheon (Korea, Republic of); Min, Ji-Hoon [University of Michigan, Department of Biopsychology, Cognition, and Neuroscience, Ann Arbor, MI (United States); Kim, Sam Soo [Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Radiology, Chuncheon (Korea, Republic of); Nam, Eui-Cheol [Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Otolaryngology, Chuncheon (Korea, Republic of)

    2015-10-15

    Although the effects of scanner background noise (SBN) during functional magnetic resonance imaging (fMRI) have been extensively investigated for the brain regions involved in auditory processing, its impact on other types of intrinsic brain activity has largely been neglected. The present study evaluated the influence of SBN on a number of intrinsic connectivity networks (ICNs) during auditory stimulation by comparing the results obtained using sparse temporal acquisition (STA) with those using continuous acquisition (CA). Fourteen healthy subjects were presented with classical music pieces in a block paradigm during two sessions of STA and CA. A volume-matched CA dataset (CAm) was generated by subsampling the CA dataset to temporally match it with the STA data. Independent component analysis was performed on the concatenated STA-CAm datasets, and voxel data, time courses, power spectra, and functional connectivity were compared. The ICA revealed 19 ICNs; the auditory, default mode, salience, and frontoparietal networks showed greater activity in the STA. The spectral peaks in 17 networks corresponded to the stimulation cycles in the STA, while only five networks displayed this correspondence in the CA. The dorsal default mode and salience networks exhibited stronger correlations with the stimulus waveform in the STA. SBN appeared to influence not only the areas of auditory response but also the majority of other ICNs, including attention and sensory networks. Therefore, SBN should be regarded as a serious nuisance factor during fMRI studies investigating intrinsic brain activity under external stimulation or task loads. (orig.)

  15. Effects of scanner acoustic noise on intrinsic brain activity during auditory stimulation

    International Nuclear Information System (INIS)

    Yakunina, Natalia; Kang, Eun Kyoung; Kim, Tae Su; Min, Ji-Hoon; Kim, Sam Soo; Nam, Eui-Cheol

    2015-01-01

    Although the effects of scanner background noise (SBN) during functional magnetic resonance imaging (fMRI) have been extensively investigated for the brain regions involved in auditory processing, its impact on other types of intrinsic brain activity has largely been neglected. The present study evaluated the influence of SBN on a number of intrinsic connectivity networks (ICNs) during auditory stimulation by comparing the results obtained using sparse temporal acquisition (STA) with those using continuous acquisition (CA). Fourteen healthy subjects were presented with classical music pieces in a block paradigm during two sessions of STA and CA. A volume-matched CA dataset (CAm) was generated by subsampling the CA dataset to temporally match it with the STA data. Independent component analysis was performed on the concatenated STA-CAm datasets, and voxel data, time courses, power spectra, and functional connectivity were compared. The ICA revealed 19 ICNs; the auditory, default mode, salience, and frontoparietal networks showed greater activity in the STA. The spectral peaks in 17 networks corresponded to the stimulation cycles in the STA, while only five networks displayed this correspondence in the CA. The dorsal default mode and salience networks exhibited stronger correlations with the stimulus waveform in the STA. SBN appeared to influence not only the areas of auditory response but also the majority of other ICNs, including attention and sensory networks. Therefore, SBN should be regarded as a serious nuisance factor during fMRI studies investigating intrinsic brain activity under external stimulation or task loads. (orig.)

  16. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    OpenAIRE

    Scott, Gregory D.; Karns, Christina M.; Dow, Mark W.; Stevens, Courtney; Neville, Helen J.

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants wer...

  17. Opposite brain laterality in analogous auditory and visual tests.

    Science.gov (United States)

    Oltedal, Leif; Hugdahl, Kenneth

    2017-11-01

    Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.

  18. Alterations in regional homogeneity of resting-state brain activity in internet gaming addicts

    Directory of Open Access Journals (Sweden)

    Dong Guangheng

    2012-08-01

    Full Text Available Abstract Background Internet gaming addiction (IGA), as a subtype of internet addiction disorder, is rapidly becoming a prevalent mental health concern around the world. The neurobiological underpinnings of IGA should be studied to unravel the potential heterogeneity of IGA. This study investigated the brain functions in IGA patients with resting-state fMRI. Methods Fifteen IGA subjects and fourteen healthy controls participated in this study. Regional homogeneity (ReHo) measures were used to detect the abnormal functional integrations. Results Compared to the healthy controls, IGA subjects showed enhanced ReHo in the brainstem, inferior parietal lobule, left posterior cerebellum, and left middle frontal gyrus. All of these regions are thought to be related to sensory-motor coordination. In addition, IGA subjects showed decreased ReHo in temporal, occipital and parietal brain regions. These regions are thought to be responsible for visual and auditory functions. Conclusions Our results suggest that long-term online game playing enhanced brain synchronization in sensory-motor coordination-related brain regions and decreased the excitability in visual and auditory related brain regions.
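
    Regional homogeneity (ReHo), used above, is conventionally computed as Kendall's coefficient of concordance (W) over the time series of a voxel and its immediate neighbors. The sketch below implements that coefficient on a hypothetical 27-voxel neighborhood; a real analysis would apply it at every voxel of the preprocessed resting-state volume, which is not shown.

```python
# Minimal sketch of ReHo as Kendall's coefficient of concordance (W) over
# a voxel neighborhood's time series. The data are hypothetical.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ts):
    """ts: array of shape (K time series, n time points)."""
    k, n = ts.shape
    ranks = np.apply_along_axis(rankdata, 1, ts)   # rank each series over time
    r_sum = ranks.sum(axis=0)                      # summed ranks per time point
    s = ((r_sum - r_sum.mean()) ** 2).sum()
    return 12.0 * s / (k ** 2 * (n ** 3 - n))      # 0 = no concordance, 1 = perfect

rng = np.random.default_rng(5)
neighborhood = rng.normal(size=(27, 200))          # 27 voxels x 200 volumes
print(f"ReHo (Kendall's W): {kendalls_w(neighborhood):.3f}")
```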

  19. Diffusion tractography of the subcortical auditory system in a postmortem human brain

    OpenAIRE

    Sitek, Kevin

    2017-01-01

    The subcortical auditory system is challenging to identify with standard human brain imaging techniques: MRI signal decreases toward the center of the brain as well as at higher resolution, both of which are necessary for imaging small brainstem auditory structures. Using high-resolution diffusion-weighted MRI, we asked: Can we identify auditory structures and connections in high-resolution ex vivo images? Which structures and connections can be mapped in vivo?

  20. Localized brain activation related to the strength of auditory learning in a parrot.

    Directory of Open Access Journals (Sweden)

    Hiroko Eda-Fujiwara

    Full Text Available Parrots and songbirds learn their vocalizations from a conspecific tutor, much like human infants acquire spoken language. Parrots can learn human words and it has been suggested that they can use them to communicate with humans. The caudomedial pallium in the parrot brain is homologous with that of songbirds, and analogous to the human auditory association cortex, involved in speech processing. Here we investigated neuronal activation, measured as expression of the protein product of the immediate early gene ZENK, in relation to auditory learning in the budgerigar (Melopsittacus undulatus), a parrot. Budgerigar males successfully learned to discriminate two Japanese words spoken by another male conspecific. Re-exposure to the two discriminanda led to increased neuronal activation in the caudomedial pallium, but not in the hippocampus, compared to untrained birds that were exposed to the same words, or were not exposed to words. Neuronal activation in the caudomedial pallium of the experimental birds was correlated significantly and positively with the percentage of correct responses in the discrimination task. These results suggest that in a parrot, the caudomedial pallium is involved in auditory learning. Thus, in parrots, songbirds and humans, analogous brain regions may contain the neural substrate for auditory learning and memory.

  1. Auditory motion-specific mechanisms in the primate brain.

    Directory of Open Access Journals (Sweden)

    Colline Poirier

    2017-05-01

    Full Text Available This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.

  2. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  3. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    Science.gov (United States)

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. Copyright © 2014. Published by Elsevier Inc.
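
    The abstract above distinguishes the entropy of an input sequence from its complexity. The entropy side is easy to illustrate: the sketch below estimates the Shannon entropy of a discrete tone sequence from symbol frequencies, using toy sequences rather than the study's stimuli. Complexity measures, which characterize the organization of the generator rather than its output randomness, are a separate matter and are not shown.

```python
# Minimal sketch of Shannon entropy for a discrete tone sequence,
# estimated from symbol frequencies. Sequences are hypothetical toys.
import numpy as np

def shannon_entropy(sequence):
    _, counts = np.unique(sequence, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())          # bits per symbol

low = [0, 0, 0, 0, 0, 0, 0, 1]                                 # nearly deterministic
high = list(np.random.default_rng(6).integers(0, 8, 64))       # near-uniform
print(shannon_entropy(low), shannon_entropy(high))
```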

  4. Thalamic and parietal brain morphology predicts auditory category learning.

    Science.gov (United States)

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.
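
    The abstract above states that listening strategies were assessed by logistic regression. One plausible reading, sketched below under assumed variable names and simulated data (not the authors' analysis), is to regress each listener's binary category responses on the spectral and duration cue values; the fitted coefficients then index reliance on each cue, and a shift toward the duration coefficient after spectral degradation would mark a strategy switch.

```python
# Sketch of estimating cue reliance by logistic regression of binary
# category responses on standardized spectral and duration cues.
# All data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_trials = 200
spectral = rng.normal(size=n_trials)               # standardized spectral cue
duration = rng.normal(size=n_trials)               # standardized duration cue
responses = (duration + 0.2 * rng.normal(size=n_trials) > 0).astype(int)

X = np.column_stack([spectral, duration])
model = LogisticRegression().fit(X, responses)
print("cue weights (spectral, duration):", model.coef_[0])
```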

  5. Auditory Brain Stem Processing in Reptiles and Amphibians: Roles of Coupled Ears

    DEFF Research Database (Denmark)

    Willis, Katie L.; Christensen-Dalsgaard, Jakob; Carr, Catherine

    2014-01-01

    Comparative approaches to the auditory system have yielded great insight into the evolution of sound localization circuits, particularly within the nonmammalian tetrapods. The fossil record demonstrates multiple appearances of tympanic hearing, and examination of the auditory brain stem of various … groups can reveal the organizing effects of the ear across taxa. If the peripheral structures have a strongly organizing influence on the neural structures, then homologous neural structures should be observed only in groups with a homologous tympanic ear. Therefore, the central auditory systems … of anurans (frogs), reptiles (including birds), and mammals should all be more similar within each group than among the groups. Although there is large variation in the peripheral auditory system, there is evidence that auditory brain stem nuclei in tetrapods are homologous and have similar functions among…

  6. Brain networks underlying mental imagery of auditory and visual information.

    Science.gov (United States)

    Zvyagintsev, Mikhail; Clemens, Benjamin; Chechko, Natalya; Mathiak, Krystyna A; Sack, Alexander T; Mathiak, Klaus

    2013-05-01

    Mental imagery is a complex cognitive process that resembles the experience of perceiving an object when this object is not physically present to the senses. It has been shown that, depending on the sensory nature of the object, mental imagery also involves corresponding sensory neural mechanisms. However, it remains unclear which areas of the brain subserve supramodal imagery processes that are independent of the object modality, and which brain areas are involved in modality-specific imagery processes. Here, we conducted a functional magnetic resonance imaging study to reveal supramodal and modality-specific networks of mental imagery for auditory and visual information. A common supramodal brain network independent of imagery modality, two separate modality-specific networks for imagery of auditory and visual information, and a common deactivation network were identified. The supramodal network included brain areas related to attention, memory retrieval, motor preparation and semantic processing, as well as areas considered to be part of the default-mode network and multisensory integration areas. The modality-specific networks comprised brain areas involved in processing of respective modality-specific sensory information. Interestingly, we found that imagery of auditory information led to a relative deactivation within the modality-specific areas for visual imagery, and vice versa. In addition, mental imagery of both auditory and visual information widely suppressed the activity of primary sensory and motor areas, i.e., the deactivation network. These findings have important implications for understanding the mechanisms that are involved in generation of mental imagery. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Using auditory steady state responses to outline the functional connectivity in the tinnitus brain.

    Directory of Open Access Journals (Sweden)

    Winfried Schlee

    Full Text Available BACKGROUND: Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most of the tinnitus research has concentrated on the auditory system. However, it has recently been suggested that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. METHODS AND FINDINGS: Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe and phase couplings between the anterior cingulum and the right parietal lobe showed significant condition x group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. CONCLUSIONS: To the best of our knowledge this is the first study that demonstrates the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and a therapeutic intervention that is able to change this network should result in relief of tinnitus.
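
    The connectivity measure above is phase synchronization between cortical sources. One standard way to quantify this, shown below as a generic sketch on hypothetical band-limited signals (the abstract does not specify the exact estimator, so this is an assumption), is the phase-locking value computed from Hilbert-transform phases.

```python
# Generic sketch of phase synchronization between two band-limited signals:
# instantaneous phases from the Hilbert transform, then the phase-locking
# value (PLV), i.e., the magnitude of the mean phase-difference vector.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(8)
t = np.arange(0, 2, 1 / 600)                        # 2 s at 600 Hz
x = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * 40 * t + 0.3) + 0.5 * rng.normal(size=t.size)

phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))   # 0 = none, 1 = perfect locking
print(f"phase-locking value: {plv:.2f}")
```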

  8. Auditory-vocal mirroring in songbirds.

    Science.gov (United States)

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  9. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel Nelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system to music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  10. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  11. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    Science.gov (United States)

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    causality analysis revealed that, compared to the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/awv197) for a scientific commentary on this article. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Conductive Hearing Loss during Infancy: Effects on Later Auditory Brain Stem Electrophysiology.

    Science.gov (United States)

    Gunnarson, Adele D.; Finitzo, Terese

    1991-01-01

    Long-term effects on auditory electrophysiology from early fluctuating hearing loss were studied in 27 children, aged 5 to 7 years, who had been evaluated originally in infancy. Findings suggested that early fluctuating hearing loss disrupts later auditory brain stem electrophysiology. (Author/DB)

  13. Changes in regional cerebral blood flow during auditory cognitive tasks

    International Nuclear Information System (INIS)

    Ohyama, Masashi; Kitamura, Shin; Terashi, Akiro; Senda, Michio.

    1993-01-01

    In order to investigate the relation between auditory cognitive function and regional brain activation, we measured the changes in the regional cerebral blood flow (CBF) using positron emission tomography (PET) during the 'odd-ball' paradigm in ten normal healthy volunteers. The subjects underwent 3 tasks, twice for each, while the evoked potential was recorded. In these tasks, the auditory stimulus was a series of pure tones delivered every 1.5 sec binaurally at 75 dB from the earphones. Task A: the stimulus was a series of tones with 1000 Hz only, and the subject was instructed only to listen. Task B: the stimulus was a series of tones with 1000 Hz only, and the subject was instructed to push the button on detecting a tone. Task C: the stimulus was a series of pure tones delivered every 1.5 sec binaurally at 75 dB with a frequency of 1000 Hz (non-target) in 80% and 2000 Hz (target) in 20% at random, and the subject was instructed to push the button on detecting a target tone. The event related potential (P300) was observed in task C (Pz: 334.3±19.6 msec). At each task, the CBF was measured using PET with i.v. injection of 1.5 GBq of O-15 water. The changes in CBF associated with auditory cognition were evaluated by the difference between the CBF images in tasks C and B. Localized increase was observed in the anterior cingulate cortex (in all subjects), the bilateral auditory association cortex, the prefrontal cortex and the parietal cortex. The latter three areas had a large individual variation in the location of foci. These results suggested the role of those cortical areas in auditory cognition. The anterior cingulate was most activated (15.0±2.24% of global CBF). This region was not activated in the condition of task B minus task A. The anterior cingulate is a part of Papez's circuit that is related to memory and other higher cortical functions. These results suggested that this area may play an important role in cognition as well as in attention. (author)
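
    The activation measure above is a simple image subtraction (task C minus task B), with regional increases expressed as a percentage of global CBF. The toy sketch below only illustrates that arithmetic; the regional values and global CBF are made up, not the study's data.

```python
# Toy sketch of the subtraction logic: cognition-related change is the
# task C minus task B CBF difference, expressed as a percentage of global
# CBF. All values are hypothetical.
import numpy as np

cbf_task_b = np.array([52.0, 48.0, 55.0, 50.0])    # rCBF in a few regions, task B
cbf_task_c = np.array([60.0, 49.0, 56.0, 51.0])    # same regions, task C
global_cbf = 50.0

diff = cbf_task_c - cbf_task_b                      # activation (subtraction image)
percent_of_global = 100.0 * diff / global_cbf
print(percent_of_global)                            # first region ~16% of global CBF
```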

  14. Early auditory processing in area V5/MT+ of the congenitally blind brain.

    Science.gov (United States)

    Watkins, Kate E; Shakespeare, Timothy J; O'Donoghue, M Clare; Alexander, Iona; Ragge, Nicola; Cowey, Alan; Bridge, Holly

    2013-11-13

    Previous imaging studies of congenital blindness have studied individuals with heterogeneous causes of blindness, which may influence the nature and extent of cross-modal plasticity. Here, we scanned a homogeneous group of blind people with bilateral congenital anophthalmia, a condition in which both eyes fail to develop, and, as a result, the visual pathway is not stimulated by either light or retinal waves. This model of congenital blindness presents an opportunity to investigate the effects of very early visual deafferentation on the functional organization of the brain. In anophthalmic animals, the occipital cortex receives direct subcortical auditory input. We hypothesized that this pattern of subcortical reorganization ought to result in a topographic mapping of auditory frequency information in the occipital cortex of anophthalmic people. Using functional MRI, we examined auditory-evoked activity to pure tones of high, medium, and low frequencies. Activity in the superior temporal cortex was significantly reduced in anophthalmic compared with sighted participants. In the occipital cortex, a region corresponding to the cytoarchitectural area V5/MT+ was activated in the anophthalmic participants but not in sighted controls. Whereas previous studies in the blind indicate that this cortical area is activated to auditory motion, our data show it is also active for trains of pure tone stimuli and in some anophthalmic participants shows a topographic mapping (tonotopy). Therefore, this region appears to be performing early sensory processing, possibly served by direct subcortical input from the pulvinar to V5/MT+.

  15. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10-50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements...

  16. Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment.

    Science.gov (United States)

    Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas

    2010-07-01

    Accurate perception of the temporal order of sensory events is a prerequisite in numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to activity in bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding phases of pairs of auditory spatial stimuli appears critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information--i.e. a temporal 'stamp'--is extracted within the early stages of cortical processes within left PSR but critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  17. Specific Regional and Age-Related Small Noncoding RNA Expression Patterns Within Superior Temporal Gyrus of Typical Human Brains Are Less Distinct in Autism Brains.

    Science.gov (United States)

    Stamova, Boryana; Ander, Bradley P; Barger, Nicole; Sharp, Frank R; Schumann, Cynthia M

    2015-12-01

    Small noncoding RNAs play a critical role in regulating messenger RNA throughout brain development and, when altered, could have profound effects leading to disorders such as autism spectrum disorders (ASD). We assessed small noncoding RNAs, including microRNA and small nucleolar RNA, in superior temporal sulcus association cortex and primary auditory cortex in typical and ASD brains from early childhood to adulthood. Typical small noncoding RNA expression profiles were less distinct in ASD, both between regions and in their changes with age. Typical microRNA coexpression associations were absent in ASD brains. The microRNAs miR-132, miR-103, and miR-320 were dysregulated in ASD and have previously been associated with the disorder. These diminished region- and age-related microRNA expression profiles are in line with previously reported findings of attenuated messenger RNA and long noncoding RNA in ASD brain. This study demonstrates alterations in the superior temporal sulcus in ASD, a region implicated in social impairment, and is the first to demonstrate molecular alterations in the primary auditory cortex. © The Author(s) 2015.

  18. Multielectrode recordings from auditory neurons in the brain of a small grasshopper.

    Science.gov (United States)

    Bhavsar, Mit Balvantray; Heinrich, Ralf; Stumpner, Andreas

    2015-12-30

    Grasshoppers have been used as a model system to study the neuronal basis of insect acoustic behavior. Auditory neurons have been described from intracellular recordings, and the growing interest in studying the population activity of neurons has so far been met by artificially combining data from different individuals. Here we used, for the first time, multielectrode recordings from the brain of a small grasshopper. We used three 12 μm tungsten wires (combined in a multielectrode) to record from local brain neurons and from a population of auditory neurons entering the brain from the thorax. Spikes of the recorded units were separated by sorting algorithms and spike collision analysis. The tungsten wires enabled stable recordings with a high signal-to-noise ratio. Because auditory activity is tightly coupled in time to the stimulus, spike collisions were frequent, and collision analysis retrieved 10-15% of additional spikes. Marking the electrode position was possible using a fluorescent dye or electrocoagulation with high current. Physiological identification of units described from intracellular recordings was hard to achieve. The 12 μm tungsten wires gave a better signal-to-noise ratio than the 15 μm copper wires previously used in recordings from bees' brains. Recording the population activity of auditory neurons in one individual avoids the interindividual and trial-to-trial variability that otherwise reduce the validity of the analysis. Double intracellular recordings have a quite low success rate and are therefore rarely achieved, and their stability is much lower than that of multielectrode recordings, which allow sampling of data for 30 min or more. Copyright © 2015 Elsevier B.V. All rights reserved.
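
    As an illustration of the kind of first step that typically precedes spike sorting in such recordings, the sketch below implements a simple amplitude-threshold spike detector in Python. It is not the authors' pipeline; the threshold criterion, refractory period, and noise estimate are generic choices used here only for demonstration.

        import numpy as np

        def detect_spikes(trace, fs, thresh_sd=4.0, refractory_ms=1.0):
            """Return sample indices of putative spikes in an extracellular voltage trace."""
            # Robust noise estimate based on the median absolute deviation
            noise_sd = np.median(np.abs(trace)) / 0.6745
            threshold = thresh_sd * noise_sd
            # Upward crossings of the rectified signal through the threshold
            above = np.abs(trace) > threshold
            crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
            # Enforce a refractory period so one spike is not counted twice
            refractory = int(refractory_ms * 1e-3 * fs)
            spikes, last = [], -refractory
            for idx in crossings:
                if idx - last >= refractory:
                    spikes.append(idx)
                    last = idx
            return np.asarray(spikes, dtype=int)

        # Toy usage: 1 s of noise at 20 kHz with a few injected spikes
        fs = 20000
        rng = np.random.default_rng(0)
        trace = rng.standard_normal(fs) * 0.1
        trace[[2000, 9000, 15000]] += 2.0
        print(detect_spikes(trace, fs))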

  19. Transcriptional maturation of the mouse auditory forebrain.

    Science.gov (United States)

    Hackett, Troy A; Guo, Yan; Clause, Amanda; Hackett, Nicholas J; Garbett, Krassimira; Zhang, Pan; Polley, Daniel B; Mirnics, Karoly

    2015-08-14

    The maturation of the brain involves the coordinated expression of thousands of genes, proteins and regulatory elements over time. In sensory pathways, gene expression profiles are modified by age and sensory experience in a manner that differs between brain regions and cell types. In the auditory system of altricial animals, neuronal activity increases markedly after the opening of the ear canals, initiating events that culminate in the maturation of auditory circuitry in the brain. This window provides a unique opportunity to study how gene expression patterns are modified by the onset of sensory experience through maturity. As a tool for capturing these features, next-generation sequencing of total RNA (RNAseq) has tremendous utility, because the entire transcriptome can be screened to index expression of any gene. To date, whole transcriptome profiles have not been generated for any central auditory structure in any species at any age. In the present study, RNAseq was used to profile two regions of the mouse auditory forebrain (A1, primary auditory cortex; MG, medial geniculate) at key stages of postnatal development (P7, P14, P21, adult) before and after the onset of hearing (~P12). Hierarchical clustering, differential expression, and functional gene set enrichment analysis (GSEA) were used to profile the expression patterns of all genes. Selected gene sets related to neurotransmission, developmental plasticity, critical periods and brain structure were highlighted. An accessible repository of the entire dataset was also constructed that permits extraction and screening of all data from the global through single-gene levels. To our knowledge, this is the first whole transcriptome sequencing study of the forebrain of any mammalian sensory system. Although the data are most relevant for the auditory system, they are generally applicable to forebrain structures in the visual and somatosensory systems, as well. The main findings were: (1) Global gene expression

  20. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    Science.gov (United States)

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80 Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory, which allows the construction of auditory objects. Recent findings support the idea that gamma oscillations are involved in the partitioning of auditory input into discrete samples to facilitate higher-order processing. We review experiments suggesting that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies to brain-computer interfaces is illustrated, in which presentation rates for auditory input are optimized to correspond with individual endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Incorporating modern neuroscience findings to improve brain-computer interfaces: tracking auditory attention.

    Science.gov (United States)

    Wronkiewicz, Mark; Larson, Eric; Lee, Adrian Kc

    2016-10-01

    Brain-computer interface (BCI) technology allows users to generate actions based solely on their brain signals. However, current non-invasive BCIs generally classify brain activity recorded from surface electroencephalography (EEG) electrodes, which can hinder the application of findings from modern neuroscience research. In this study, we use source imaging, a neuroimaging technique that projects EEG signals onto the surface of the brain, in a BCI classification framework. This allowed us to incorporate prior research from functional neuroimaging to target activity from a cortical region involved in auditory attention. Classifiers trained to detect attention switches performed better with source imaging projections than with EEG sensor signals. Within source imaging, including subject-specific anatomical MRI information (instead of using a generic head model) further improved classification performance. This source-based strategy also reduced accuracy variability across three dimensionality reduction techniques, a major design choice in most BCIs. Our work shows that source imaging provides clear quantitative and qualitative advantages to BCIs and highlights the value of incorporating modern neuroscience knowledge and methods into BCI systems.
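
    A minimal sketch of the general idea, not the authors' implementation: sensor-space trials are projected through a precomputed linear inverse operator into source space, and a classifier is trained on features from a cortical region of interest. The inverse operator, region indices, labels, and data below are placeholders for illustration only.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        n_trials, n_sensors, n_sources = 200, 64, 1000

        X_sensor = rng.standard_normal((n_trials, n_sensors))  # one feature vector per trial (toy data)
        y = rng.integers(0, 2, n_trials)                       # attention-switch labels (toy data)
        K = rng.standard_normal((n_sources, n_sensors))        # placeholder linear inverse operator

        # Project each trial into source space: s = K @ x
        X_source = X_sensor @ K.T

        # Keep only sources inside a hypothetical auditory-attention region of interest
        roi = np.arange(100)
        clf = LinearDiscriminantAnalysis().fit(X_source[:, roi], y)
        print("training accuracy:", clf.score(X_source[:, roi], y))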

  2. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations.

    Science.gov (United States)

    Ćurčić-Blake, Branislava; Ford, Judith M; Hubl, Daniela; Orlov, Natasza D; Sommer, Iris E; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W; David, Olivier; Mulert, Christoph; Woodward, Todd S; Aleman, André

    2017-01-01

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of particular relevance. However, reconciliation of these theories with experimental evidence is missing. We review 50 studies investigating functional (EEG and fMRI) and anatomical (diffusion tensor imaging) connectivity in these networks, and explore the evidence supporting abnormal connectivity in these networks associated with AVH. We distinguish between functional connectivity during an actual hallucination experience (symptom capture) and functional connectivity during either the resting state or a task comparing individuals who hallucinate with those who do not (symptom association studies). Symptom capture studies clearly reveal a pattern of increased coupling among the auditory, language and striatal regions. Anatomical and symptom association functional studies suggest that the interhemispheric connectivity between posterior auditory regions may depend on the phase of illness, with increases in non-psychotic individuals and first-episode patients and decreases in chronic patients. Leading hypotheses involving concepts such as unstable memories, source monitoring, top-down attention, and hybrid models of hallucinations are supported in part by the published connectivity data, although several caveats and inconsistencies remain. Specifically, possible changes in fronto-temporal connectivity are still under debate. Precise hypotheses concerning the directionality of connections deduced from current theoretical approaches should be tested using experimental approaches that allow for discrimination of competing hypotheses. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity to provide an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence suggests that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this, we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power, pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Differentiating functional brain regions using optical coherence tomography (Conference Presentation)

    Science.gov (United States)

    Gil, Daniel A.; Bow, Hansen C.; Shen, Jin-H.; Joos, Karen M.; Skala, Melissa C.

    2017-02-01

    The human brain is made up of functional regions governing movement, sensation, language, and cognition. Unintentional injury during neurosurgery can result in significant neurological deficits and morbidity. The current standard for localizing function to brain tissue during surgery, intraoperative electrical stimulation or recording, significantly increases the risk, time, and cost of the procedure. There is a need for a fast, cost-effective, and high-resolution intraoperative technique that can avoid damage to functional brain regions. We propose that optical coherence tomography (OCT) can fill this niche by imaging differences in the cellular composition and organization of functional brain areas. We hypothesized this would manifest as differences in the attenuation coefficient measured using OCT. Five functional regions (prefrontal, somatosensory, auditory, visual, and cerebellum) were imaged in ex vivo porcine brains (n=3), a model chosen due to a similar white/gray matter ratio as human brains. The attenuation coefficient was calculated using a depth-resolved model and quantitatively validated with Intralipid phantoms across a physiological range of attenuation coefficients (absolute difference …). Nissl-stained histology will be used to validate our results and correlate OCT-measured attenuation coefficients with neuronal density. Additional development and validation of OCT algorithms to discriminate brain regions are planned to improve the safety and efficacy of neurosurgical procedures such as biopsy, electrode placement, and tissue resection.
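
    To make the attenuation-coefficient idea concrete, the sketch below fits a single-scattering exponential decay model, I(z) ≈ I0·exp(-2·mu·z), to a toy A-scan by linear regression on the log intensity. This is a simplification for illustration only; the study used a depth-resolved model, which is a different estimator, and the mu value and noise level here are assumptions.

        import numpy as np

        def fit_attenuation(depth_mm, intensity):
            """Least-squares estimate of mu (mm^-1) from log-intensity versus depth."""
            slope, _ = np.polyfit(depth_mm, np.log(intensity), 1)
            return -slope / 2.0

        # Synthetic A-scan with mu = 1.2 mm^-1 and multiplicative noise
        z = np.linspace(0.1, 1.5, 200)
        rng = np.random.default_rng(1)
        a_scan = np.exp(-2 * 1.2 * z) * rng.lognormal(0.0, 0.05, z.size)
        print("estimated mu (mm^-1):", fit_attenuation(z, a_scan))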

  5. Manipulation of Auditory Inputs as Rehabilitation Therapy for Maladaptive Auditory Cortical Reorganization

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2018-01-01

    Neurophysiological and neuroimaging data suggest that the brains of not only children but also adults are reorganized based on sensory inputs and behaviors. Plastic changes in the brain are generally beneficial; however, maladaptive cortical reorganization in the auditory cortex may lead to hearing disorders such as tinnitus and hyperacusis. Recent studies attempted to noninvasively visualize pathological neural activity in the living human brain and reverse maladaptive cortical reorganization by the suitable manipulation of auditory inputs in order to alleviate detrimental auditory symptoms. The effects of the manipulation of auditory inputs on the maladaptively reorganized brain are reviewed herein. The findings obtained indicate that rehabilitation therapy based on the manipulation of auditory inputs is an effective and safe approach for hearing disorders. The appropriate manipulation of sensory inputs guided by the visualization of pathological brain activities using recent neuroimaging techniques may contribute to the establishment of new clinical applications for affected individuals.

  6. The human brain maintains contradictory and redundant auditory sensory predictions.

    Directory of Open Access Journals (Sweden)

    Marika Pieszek

    Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). Particular error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

  7. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  8. Auditory and visual connectivity gradients in frontoparietal cortex.

    Science.gov (United States)

    Braga, Rodrigo M; Hellyer, Peter J; Wise, Richard J S; Leech, Robert

    2017-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, in which connectivity with visual cortex dominated dorsal frontal and parietal connections, while connectivity with auditory cortex dominated ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as by the cognitive demands of a task. It was concluded that stimulus modality is spatially encoded throughout frontal and parietal cortices, and it was speculated that such an arrangement allows for top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions rather than requiring long-range connections to sensory cortices. Hum Brain Mapp 38:255-270, 2017. © 2016 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.

  9. Training leads to increased auditory brain-computer interface performance of end-users with motor impairments.

    Science.gov (United States)

    Halder, S; Käthner, I; Kübler, A

    2016-02-01

    Auditory brain-computer interfaces are an assistive technology that can restore communication for motor-impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users who may lose or have lost gaze control. We attempted to show that motor-impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom had additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users, the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments have controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training and, specifically, that end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
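
    For reference, bits/min figures for such spellers are commonly computed with the Wolpaw information-transfer-rate formula; the abstract does not state which definition was used, so the sketch below is only an assumed reading, and the 25-symbol count and selection rate in the example are illustrative.

        import math

        def itr_bits_per_min(n_classes, accuracy, selections_per_min):
            """Wolpaw ITR: bits per selection scaled by the selection rate."""
            p, n = accuracy, n_classes
            if p <= 1.0 / n:
                return 0.0
            bits = math.log2(n) + p * math.log2(p)
            if p < 1.0:
                bits += (1.0 - p) * math.log2((1.0 - p) / (n - 1))
            return bits * selections_per_min

        # Example: a 25-symbol speller at 92% accuracy and 1.5 selections per minute
        print(round(itr_bits_per_min(25, 0.92, 1.5), 2))   # ~5.8 bits/min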

  10. [Forensic application of brainstem auditory evoked potential in patients with brain concussion].

    Science.gov (United States)

    Zheng, Xing-Bin; Li, Sheng-Yan; Huang, Si-Xing; Ma, Ke-Xin

    2008-12-01

    To investigate changes in the brainstem auditory evoked potential (BAEP) in patients with brain concussion. Nineteen patients with brain concussion were studied with BAEP examination. The data were compared with those of healthy persons reported in the literature. The abnormal rate of BAEP for patients with brain concussion was 89.5%. There was a statistically significant difference between the abnormal rate of patients and that of healthy persons (P…). …brain concussion was 73.7%, indicating dysfunction of the brainstem in those patients. BAEP might be helpful in the forensic diagnosis of brain concussion.

  11. Grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents.

    Science.gov (United States)

    Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang

    2015-01-01

    Previous studies have shown brain reorganization after early deprivation of auditory input. However, changes in grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes in grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes in grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within the auditory and visual systems, as well as reduced connectivity between the language and visual systems. Notably, significantly increased connectivity was found between the auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of auditory input in prelingually deaf adolescents, especially between the auditory and visual systems. Besides, auditory deprivation and visual deficits might affect the connectivity pattern within the language and visual systems in prelingually deaf adolescents.
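
    The connectivity construction can be sketched with scikit-learn's graphical lasso, one common implementation of sparse inverse covariance estimation; the data, the regularization strength, and the subject count below are placeholders rather than values from the study.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(2)
        n_subjects, n_rois = 40, 14                     # toy dimensions (the study had 16 per group)
        gm = rng.standard_normal((n_subjects, n_rois))  # z-scored grey matter volumes (toy data)

        model = GraphicalLasso(alpha=0.5).fit(gm)
        precision = model.precision_                    # sparse inverse covariance matrix

        # Treat nonzero off-diagonal entries as grey matter "connections"
        edges = (np.abs(precision) > 1e-6) & ~np.eye(n_rois, dtype=bool)
        print("number of connections:", int(edges.sum()) // 2)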

  12. Brain stem auditory evoked responses in chronic alcoholics.

    OpenAIRE

    Chan, Y W; McLeod, J G; Tuck, R R; Feary, P A

    1985-01-01

    Brain stem auditory evoked responses (BAERs) were performed on 25 alcoholic patients with Wernicke-Korsakoff syndrome, 56 alcoholic patients without Wernicke-Korsakoff syndrome, 24 of whom had cerebellar ataxia, and 37 control subjects. Abnormal BAERs were found in 48% of patients with Wernicke-Korsakoff syndrome, in 25% of alcoholic patients without Wernicke-Korsakoff syndrome but with cerebellar ataxia, and in 13% of alcoholic patients without Wernicke-Korsakoff syndrome or ataxia. The mean...

  13. Neural Correlates of Automatic and Controlled Auditory Processing in Schizophrenia

    Science.gov (United States)

    Morey, Rajendra A.; Mitchell, Teresa V.; Inan, Seniha; Lieberman, Jeffrey A.; Belger, Aysenil

    2009-01-01

    Individuals with schizophrenia demonstrate impairments in selective attention and sensory processing. The authors assessed differences in brain function between 26 participants with schizophrenia and 17 comparison subjects engaged in automatic (unattended) and controlled (attended) auditory information processing using event-related functional MRI. Lower regional neural activation during automatic auditory processing in the schizophrenia group was not confined to just the temporal lobe, but also extended to prefrontal regions. Controlled auditory processing was associated with a distributed frontotemporal and subcortical dysfunction. Differences in activation between these two modes of auditory information processing were more pronounced in the comparison group than in the patient group. PMID:19196926

  14. Maturation of the auditory t-complex brain response across adolescence.

    Science.gov (United States)

    Mahajan, Yatin; McArthur, Genevieve

    2013-02-01

    Adolescence is a time of great change in the brain in terms of structure and function. It is possible to track the development of neural function across adolescence using auditory event-related potentials (ERPs). This study tested whether the brain's functional processing of sound changed across adolescence. We measured passive auditory t-complex peaks to pure tones and consonant-vowel (CV) syllables in 90 children and adolescents aged 10-18 years, as well as 10 adults. Across adolescence, Na amplitude increased to tones and speech at the right, but not left, temporal site. Ta amplitude decreased at the right temporal site for tones, and at both sites for speech. The Tb remained constant at both sites. The Na and Ta appeared to mature later in the right than in the left hemisphere. The t-complex peaks Na and Tb exhibited left lateralization and Ta showed right lateralization. Thus, the functional processing of sound continued to develop across adolescence and into adulthood. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  15. Subthalamic deep brain stimulation improves auditory sensory gating deficit in Parkinson's disease.

    Science.gov (United States)

    Gulberti, A; Hamel, W; Buhmann, C; Boelmans, K; Zittel, S; Gerloff, C; Westphal, M; Engel, A K; Schneider, T R; Moll, C K E

    2015-03-01

    While motor effects of dopaminergic medication and subthalamic nucleus deep brain stimulation (STN-DBS) in Parkinson's disease (PD) patients are well explored, their effects on sensory processing are less well understood. Here, we studied the impact of levodopa and STN-DBS on auditory processing. Rhythmic auditory stimulation (RAS) was presented at frequencies between 1 and 6 Hz in a passive listening paradigm. High-density EEG recordings were obtained before (levodopa ON/OFF) and 5 months following STN surgery (ON/OFF STN-DBS). We compared auditory evoked potentials (AEPs) elicited by RAS in 12 PD patients to those in age-matched controls. Tempo-dependent amplitude suppression of the auditory P1/N1 complex was used as an indicator of auditory gating. Parkinsonian patients showed significantly larger AEP amplitudes (P1, N1) and longer AEP latencies (N1) compared to controls. Neither interruption of dopaminergic medication nor of STN-DBS had an immediate effect on these AEPs. However, chronic STN-DBS had a significant effect on the abnormal auditory gating characteristics of parkinsonian patients and restored a physiological P1/N1 amplitude attenuation profile in response to RAS with increasing stimulus rates. This differential treatment effect suggests a divergent mode of action of levodopa and STN-DBS on auditory processing. STN-DBS may improve early attentive filtering processes of redundant auditory stimuli, possibly at the level of the frontal cortex. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  16. Effects of training and motivation on auditory P300 brain-computer interface performance.

    Science.gov (United States)

    Baykara, E; Ruf, C A; Fioravanti, C; Käthner, I; Simon, N; Kleih, S C; Kübler, A; Halder, S

    2016-01-01

    Brain-computer interface (BCI) technology aims at helping end-users with severe motor paralysis to communicate with their environment without using the natural output pathways of the brain. For end-users in complete paralysis, loss of gaze control may necessitate non-visual BCI systems. The present study investigated the effect of training on performance with an auditory P300 multi-class speller paradigm. For half of the participants, spatial cues were added to the auditory stimuli to see whether performance could be further optimized. The influence of motivation, mood and workload on performance and the P300 component was also examined. In five sessions, 16 healthy participants were instructed to spell several words by attending to animal sounds representing the rows and columns of a 5 × 5 letter matrix. Of the participants, 81% achieved an average online accuracy of ⩾ 70%. From the first to the fifth session, information transfer rates increased from 3.72 bits/min to 5.63 bits/min. Motivation significantly influenced P300 amplitude and online ITR. No significant facilitative effect of spatial cues on performance was observed. Training improves performance in an auditory BCI paradigm. Motivation influences performance and P300 amplitude. The described auditory BCI system may help end-users to communicate with their environment independently of gaze control. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  17. Simultaneous recording of fluorescence and electrical signals by photometric patch electrode in deep brain regions in vivo.

    Science.gov (United States)

    Hirai, Yasuharu; Nishino, Eri; Ohmori, Harunori

    2015-06-01

    Despite its widespread use, high-resolution imaging with multiphoton microscopy to record neuronal signals in vivo is limited to the surface of brain tissue because of limited light penetration. Moreover, most imaging studies do not simultaneously record electrical neural activity, which is, however, crucial to understanding brain function. Accordingly, we developed a photometric patch electrode (PME) to overcome the depth limitation of optical measurements and also enable the simultaneous recording of neural electrical responses in deep brain regions. The PME recording system uses a patch electrode as a light guide to excite a fluorescent dye and measure the fluorescence signal, to record the electrical signal, and to apply chemicals locally to the recorded cells. The optical signal was analyzed by either a spectrometer of high light sensitivity or a photomultiplier tube, depending on the kinetics of the responses. We used the PME in Oregon Green BAPTA-1 AM-loaded avian auditory nuclei in vivo to monitor calcium signals and electrical responses. We demonstrated distinct response patterns in three different nuclei of the ascending auditory pathway. On acoustic stimulation, a robust calcium fluorescence response occurred in auditory cortex (field L) neurons that outlasted the electrical response. In the auditory midbrain (inferior colliculus), both responses were transient. In the brain-stem cochlear nucleus magnocellularis, the calcium response seemed to be effectively suppressed by the activity of metabotropic glutamate receptors. In conclusion, the PME provides a powerful tool to study brain function in vivo at a tissue depth inaccessible to conventional imaging devices. Copyright © 2015 the American Physiological Society.

  18. The brain stem function in patients with brain bladder; Clinical evaluation using dynamic CT scan and auditory brainstem response

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Toshihiro (Yokohama City Univ. (Japan). Faculty of Medicine)

    1990-11-01

    A syndrome of detrusor-sphincter dyssynergia (DSD) is occasionally found in patients with brain bladder. To evaluate brain stem function in cases of brain bladder, urodynamic study, dynamic CT scan of the brain stem (DCT) and auditory brainstem response (ABR) were performed. The region of interest of the DCT was the posterolateral portion of the pons. The results were analysed in relation to the presence of DSD on urodynamic study. DCT studies were performed in 13 cases with various brain diseases and 5 control cases without neurological diseases. Abnormal patterns of the time-density curve consisted of a low peak value, prolongation of filling time and a low rapid-washout ratio (low clearance ratio) of the contrast medium. Four of 6 cases with DSD showed at least one of the abnormal patterns of the time-density curve bilaterally. Of the 7 cases without DSD, none showed bilateral abnormality of the curve, and in 2 of the 7 only unilateral abnormality was found. ABR was performed in 8 patients with brain diseases. The interpeak latency of waves I-V (I-V IPL) was considered to be prolonged in 2 cases with DSD compared to that of 4 without DSD. In 2 cases with DSD who had normal DCT findings, measurement of the I-V IPL was impossible due to an abnormal pattern of the ABR wave. These results suggest the presence of a functional disturbance at the posterolateral portion of the pons in cases of brain bladder with DSD. (author).

  19. High-density EEG characterization of brain responses to auditory rhythmic stimuli during wakefulness and NREM sleep.

    Science.gov (United States)

    Lustenberger, Caroline; Patel, Yogi A; Alagapan, Sankaraleengam; Page, Jessica M; Price, Betsy; Boyle, Michael R; Fröhlich, Flavio

    2018-04-01

    Auditory rhythmic sensory stimulation modulates brain oscillations by increasing phase-locking to the temporal structure of the stimuli and by increasing the power of specific frequency bands, resulting in Auditory Steady State Responses (ASSR). The ASSR is altered in different diseases of the central nervous system such as schizophrenia. However, in order to use the ASSR as a biological marker for disease states, it needs to be understood how different vigilance states and the underlying brain activity affect the ASSR. Here, we compared the effects of auditory rhythmic stimuli on EEG brain activity during wakefulness and NREM sleep, investigated the influence of the presence of dominant sleep rhythms on the ASSR, and delineated the topographical distribution of these modulations. Participants (14 healthy males, 20-33 years) completed on the same day a 60 min nap session and two 30 min wakefulness sessions (before and after the nap). During these sessions, amplitude-modulated (AM) white-noise auditory stimuli at different frequencies were applied. High-density EEG was continuously recorded and time-frequency analyses were performed to assess the ASSR during wakefulness and NREM periods. Our analysis revealed that, depending on the electrode location, the stimulation frequency applied and the window/frequencies analysed, the ASSR was significantly modulated by sleep pressure (before and after sleep), vigilance state (wake vs. NREM sleep), and the presence of slow wave activity and sleep spindles. Furthermore, AM stimuli increased spindle activity during NREM sleep but not during wakefulness. Thus, (1) electrode location, sleep history, vigilance state and ongoing brain activity need to be carefully considered when investigating the ASSR, and (2) auditory rhythmic stimuli during sleep might represent a powerful tool to boost sleep spindles. Copyright © 2017 Elsevier Inc. All rights reserved.
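
    As a concrete illustration of how an ASSR is often quantified (a generic approach, not the authors' analysis code), the sketch below computes spectral power and inter-trial phase coherence at the amplitude-modulation rate from a matrix of single-channel epochs; the 40 Hz rate and the toy data are assumptions made only for the example.

        import numpy as np

        def assr_metrics(epochs, fs, am_freq):
            """epochs: (n_trials, n_samples) single-channel EEG, time-locked to the AM stimulus."""
            n_samples = epochs.shape[1]
            freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
            spectrum = np.fft.rfft(epochs, axis=1)
            k = np.argmin(np.abs(freqs - am_freq))        # FFT bin closest to the AM rate
            power = np.mean(np.abs(spectrum[:, k]) ** 2)  # mean power at the AM rate
            itpc = np.abs(np.mean(np.exp(1j * np.angle(spectrum[:, k]))))  # inter-trial phase coherence
            return power, itpc

        # Toy example: 40 trials of a 40 Hz response embedded in noise (2 s epochs at 500 Hz)
        fs, am = 500, 40.0
        t = np.arange(0, 2.0, 1.0 / fs)
        rng = np.random.default_rng(3)
        epochs = np.sin(2 * np.pi * am * t) + rng.standard_normal((40, t.size))
        power, itpc = assr_metrics(epochs, fs, am)
        print(f"power at {am} Hz: {power:.1f}, ITPC: {itpc:.2f}")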

  20. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    Science.gov (United States)

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  1. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    Directory of Open Access Journals (Sweden)

    Mikkel Wallentin

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  2. Correlation of auditory brain stem response and the MRI measurements in neuro-degenerative disorders

    International Nuclear Information System (INIS)

    Kamei, Hidekazu

    1989-01-01

    The purpose of this study is to elucidate the correlations of several MRI measurements of the cranium and brain, functioning as a volume conductor, with the auditory brain stem response (ABR) in neuro-degenerative disorders. The subjects included forty-seven patients with spinocerebellar degeneration (SCD) and sixteen with amyotrophic lateral sclerosis (ALS). Statistically significant positive correlations were found between the I-V and III-V interpeak latencies (IPLs) and the area of the cranium and brain in the longitudinal section of SCD patients, and between the I-III and III-V IPLs and the area in the longitudinal section of those with ALS. There were also statistically significant correlations between the amplitude of the V wave and the area of the brain stem as well as that of the cranium in the longitudinal section of SCD patients, and between the amplitude of the V wave and the area of the cerebrum in the longitudinal section of ALS patients. In conclusion, in the ABR, the IPLs were prolonged and the amplitude of the V wave was decreased as the MRI-measured size of the cranium and brain increased. When the ABR is applied to neuro-degenerative disorders, it might be important to consider not only conduction along the auditory tracts in the brain stem, but also the size of the cranium and brain, which act as a volume conductor. (author)

  3. Correlation of auditory brain stem response and the MRI measurements in neuro-degenerative disorders

    Energy Technology Data Exchange (ETDEWEB)

    Kamei, Hidekazu (Tokyo Women's Medical Coll. (Japan))

    1989-06-01

    The purpose of this study is to elucidate the correlations of several MRI measurements of the cranium and brain, functioning as a volume conductor, with the auditory brain stem response (ABR) in neuro-degenerative disorders. The subjects included forty-seven patients with spinocerebellar degeneration (SCD) and sixteen with amyotrophic lateral sclerosis (ALS). Statistically significant positive correlations were found between the I-V and III-V interpeak latencies (IPLs) and the area of the cranium and brain in the longitudinal section of SCD patients, and between the I-III and III-V IPLs and the area in the longitudinal section of those with ALS. There were also statistically significant correlations between the amplitude of the V wave and the area of the brain stem as well as that of the cranium in the longitudinal section of SCD patients, and between the amplitude of the V wave and the area of the cerebrum in the longitudinal section of ALS patients. In conclusion, in the ABR, the IPLs were prolonged and the amplitude of the V wave was decreased as the MRI-measured size of the cranium and brain increased. When the ABR is applied to neuro-degenerative disorders, it might be important to consider not only conduction along the auditory tracts in the brain stem, but also the size of the cranium and brain, which act as a volume conductor. (author).

  4. Nonverbal auditory agnosia with lesion to Wernicke's area.

    Science.gov (United States)

    Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic

    2010-01-01

    We report the case of patient M, who suffered unilateral damage to left posterior temporal and parietal cortex, brain regions typically associated with language processing. Language function has largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.

  5. Discrimination of timbre in early auditory responses of the human brain.

    Directory of Open Access Journals (Sweden)

    Jaeho Seol

    BACKGROUND: The issue of how differences in timbre are represented in the neural response still has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employ phasing and clipping of tones to produce auditory stimuli that differ in ways reflecting the multidimensional nature of timbre. We investigated the auditory response and sensory gating as well, using magnetoencephalography (MEG). METHODOLOGY/PRINCIPAL FINDINGS: Thirty-five healthy subjects without hearing deficit participated in the experiments. Two tones of the same or different timbre were presented as a pair in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. As a result, the magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result might support the idea that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. The effect of S1 on the second response of a pair was observed in the M100 of the left hemisphere, whereas in the right hemisphere both the M50 and M100 responses to S2 reflected whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, they reveal that auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  6. The function of BDNF in the adult auditory system.

    Science.gov (United States)

    Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies

    2014-01-01

    The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low-frequency sound to central auditory nuclei and higher auditory centers. The role of BDNF in the mature inner ear is less understood. This is mainly because constitutive BDNF mutant mice die postnatally. Only in the last few years has the improved technology of performing conditional, cell-specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature, developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus is placed on the differential mechanisms by which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Children with reading disability show brain differences in effective connectivity for visual, but not auditory word comprehension.

    Directory of Open Access Journals (Sweden)

    Li Liu

    2010-10-01

    Previous literature suggests that those with reading disability (RD) have more pronounced deficits during semantic processing in reading as compared to listening comprehension. This discrepancy has been supported by recent neuroimaging studies showing abnormal activity in RD during semantic processing in the visual but not in the auditory modality. Whether effective connectivity between brain regions in RD could also show this pattern of discrepancy has not been investigated. Children (8- to 14-year-olds) were given a semantic task in the visual and auditory modality that required a judgment as to whether two sequentially presented words were associated. Effective connectivity was investigated using Dynamic Causal Modeling (DCM) on functional magnetic resonance imaging (fMRI) data. Bayesian Model Selection (BMS) was used separately for each modality to find a winning family of DCM models, separately for typically developing (TD) and RD children. BMS yielded the same winning family, with modulatory effects on bottom-up connections from the input regions to middle temporal gyrus (MTG) and inferior frontal gyrus (IFG), and with inconclusive evidence regarding top-down modulations. Bayesian Model Averaging (BMA) was thus conducted across models in this winning family and compared across groups. The bottom-up effect from the fusiform gyrus (FG) to MTG, rather than the top-down effect from IFG to MTG, was stronger in TD compared to RD for the visual modality. The stronger bottom-up influence in TD was only evident for related word pairs and not for unrelated pairs. No group differences were noted in the auditory modality. This study revealed a modality-specific deficit for children with RD in bottom-up effective connectivity from orthographic to semantic processing regions. There were no group differences in connectivity from frontal regions, suggesting that the core deficit in RD is not in top-down modulation.

  8. The musical centers of the brain: Vladimir E. Larionov (1857-1929) and the functional neuroanatomy of auditory perception.

    Science.gov (United States)

    Triarhou, Lazaros C; Verina, Tatyana

    2016-11-01

    In 1899 a landmark paper entitled "On the musical centers of the brain" was published in Pflügers Archiv, based on work carried out in the Anatomo-Physiological Laboratory of the Neuropsychiatric Clinic of Vladimir M. Bekhterev (1857-1927) in St. Petersburg, Imperial Russia. The author of that paper was Vladimir E. Larionov (1857-1929), a military doctor and devoted brain scientist, who pursued the problem of the localization of function in the canine and human auditory cortex. His data detailed the existence of tonotopy in the temporal lobe and further demonstrated centrifugal auditory pathways emanating from the auditory cortex and directed to the opposite hemisphere and lower brain centers. Larionov's discoveries have largely been considered findings of the Bekhterev school. Perhaps this is why there are limited resources on Larionov, especially keeping in mind his military medical career and the fact that after 1917 he appears only to have practiced otorhinolaryngology in Odessa. Larionov died two years after Bekhterev's mysterious death in 1927. The present study highlights the pioneering contributions of Larionov to auditory neuroscience, trusting that the life and work of Vladimir Efimovich will finally, and deservedly, emerge from the shadow of his celebrated master, Vladimir Mikhailovich. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Estimation of Temporary Change of Brain Activities in Auditory Oddball Paradigm

    Science.gov (United States)

    Fukami, Tadanori; Koyanagi, Yusuke; Tanno, Yukinori; Shimada, Takamasa; Akatsuka, Takao; Saito, Yoichi

    In this research, we estimated the temporary change of brain activities in an auditory oddball paradigm by moving an analysis time window. An advantage of this method is that it can capture rough changes in activated areas even with data having low time resolution. Eight normal subjects participated in the study, which consisted of a random series of 30 target and 70 nontarget stimuli. We investigated the activated areas in three analysis time sections: from stimulus onset to 5 seconds after the stimulus (time section A), from 2 to 7 seconds after (B) and from 4 to 9 seconds after (C). In time section A, the representative activated areas included regions of the superior temporal gyrus centered around the inferior frontal gyrus, the left precentral gyrus corresponding to Brodmann area 6 (BA 6), the right fusiform gyrus corresponding to BA 20, the medial frontal gyrus bilaterally and the right inferior temporal gyrus. In B, we observed activations in the cerebellum bilaterally, the inferior frontal gyrus, and a region including the left motor area. In C, the postcentral gyrus bilaterally, the left cingulate gyrus, the right cerebellum and the right insula were activated. Most activations were consistent with previous studies.
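
    The moving-window idea can be made concrete with the short sketch below, which evaluates the same simulated response in the three overlapping post-stimulus sections named in the text (0-5 s, 2-7 s, 4-9 s). The sampling rate and the toy response shape are assumptions for illustration only, not values from the study.

        import numpy as np

        fs = 10                                     # samples per second (toy value)
        t = np.arange(0.0, 12.0, 1.0 / fs)
        response = np.exp(-((t - 4.0) ** 2) / 2.0)  # toy hemodynamic-like response peaking at 4 s

        sections = {"A": (0, 5), "B": (2, 7), "C": (4, 9)}
        for name, (start, stop) in sections.items():
            window = (t >= start) & (t < stop)
            print(f"section {name} ({start}-{stop} s): mean amplitude {response[window].mean():.3f}")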

  10. Can you hear me now? Musical training shapes functional brain networks for selective auditory attention and hearing speech in noise

    Directory of Open Access Journals (Sweden)

    Dana L Strait

    2011-06-01

    Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians: musicians, but not nonmusicians, demonstrated decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians' neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.

  11. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks.

    Science.gov (United States)

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention.

  12. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks

    Science.gov (United States)

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention. PMID:25745395

  13. Multivariate sensitivity to voice during auditory categorization.

    Science.gov (United States)

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.
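
    The analysis framework described above, multivariate pattern classification summarized with a signal-detection measure, can be sketched generically as follows. The feature matrix X (trials x voxels), the binary labels y (voice = 1, non-voice = 0), and the linear classifier are assumptions chosen for illustration; this is not the authors' code.

        import numpy as np
        from scipy.stats import norm
        from sklearn.model_selection import cross_val_predict
        from sklearn.svm import LinearSVC

        def dprime(hits, misses, false_alarms, correct_rejections):
            """Signal-detection d' with a simple correction against extreme rates."""
            hr = (hits + 0.5) / (hits + misses + 1.0)
            far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hr) - norm.ppf(far)

        def voice_sensitivity(X, y):
            """Cross-validated classification of voice (1) vs. non-voice (0) patterns."""
            pred = cross_val_predict(LinearSVC(dual=False), X, y, cv=5)
            hits = np.sum((pred == 1) & (y == 1))
            misses = np.sum((pred == 0) & (y == 1))
            fas = np.sum((pred == 1) & (y == 0))
            crs = np.sum((pred == 0) & (y == 0))
            return dprime(hits, misses, fas, crs)

    Run within a temporal-cortex region or searchlight, a d' reliably above zero would indicate a pattern-level voice representation of the kind reported above.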

  14. Mapping and characterization of positive and negative BOLD responses to visual stimulation in multiple brain regions at 7T.

    Science.gov (United States)

    Jorge, João; Figueiredo, Patrícia; Gruetter, Rolf; van der Zwaag, Wietske

    2018-02-20

    External stimuli and tasks often elicit negative BOLD responses in various brain regions, and growing experimental evidence supports that these phenomena are functionally meaningful. In this work, the high sensitivity available at 7T was explored to map and characterize both positive (PBRs) and negative BOLD responses (NBRs) to visual checkerboard stimulation, occurring in various brain regions within and beyond the visual cortex. Recently-proposed accelerated fMRI techniques were employed for data acquisition, and procedures for exclusion of large draining vein contributions, together with ICA-assisted denoising, were included in the analysis to improve response estimation. Besides the visual cortex, significant PBRs were found in the lateral geniculate nucleus and superior colliculus, as well as the pre-central sulcus; in these regions, response durations increased monotonically with stimulus duration, in tight covariation with the visual PBR duration. Significant NBRs were found in the visual cortex, auditory cortex, default-mode network (DMN) and superior parietal lobule; NBR durations also tended to increase with stimulus duration, but were significantly less sustained than the visual PBR, especially for the DMN and superior parietal lobule. Responses in visual and auditory cortex were further studied for checkerboard contrast dependence, and their amplitudes were found to increase monotonically with contrast, linearly correlated with the visual PBR amplitude. Overall, these findings suggest the presence of dynamic neuronal interactions across multiple brain regions, sensitive to stimulus intensity and duration, and demonstrate the richness of information obtainable when jointly mapping positive and negative BOLD responses at a whole-brain scale, with ultra-high field fMRI. © 2018 Wiley Periodicals, Inc.

  15. Auditory attention enhances processing of positive and negative words in inferior and superior prefrontal cortex.

    Science.gov (United States)

    Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna

    2017-11-01

    Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A Functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Early changes of auditory brain stem evoked response after radiotherapy for nasopharyngeal carcinoma - a prospective study

    Energy Technology Data Exchange (ETDEWEB)

    Lau, S.K.; Wei, W.I.; Sham, J.S.T.; Choy, D.T.K.; Hui, Y. [Queen Mary Hospital, Hong Kong (Hong Kong)]

    1992-10-01

    A prospective study of the effect of radiotherapy for nasopharyngeal carcinoma on hearing was carried out on 49 patients who had pure tone, impedance audiometry and auditory brain stem evoked response (ABR) recordings before, immediately, three, six and 12 months after radiotherapy. Fourteen patients complained of intermittent tinnitus after radiotherapy. We found that 11 initially normal ears of nine patients developed a middle ear effusion, three to six months after radiotherapy. There was mixed sensorineural and conductive hearing impairment after radiotherapy. Persistent impairment of ABR was detected immediately after completion of radiotherapy. The waves I-III and I-V interpeak latency intervals were significantly prolonged one year after radiotherapy. The study shows that radiotherapy for nasopharyngeal carcinoma impairs hearing by acting on the middle ear, the cochlea and the brain stem auditory pathway. (Author).
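
    The latency measures reported above are simple differences of absolute wave latencies; the sketch below shows the arithmetic with hypothetical pre- and post-radiotherapy values (the numbers are illustrative only, not the study's data).

        import numpy as np
        from scipy import stats

        def interpeak_intervals(lat_I, lat_III, lat_V):
            """I-III and I-V interpeak intervals (ms) from absolute wave latencies."""
            return lat_III - lat_I, lat_V - lat_I

        # Hypothetical latencies (ms) per ear; columns = waves I, III, V:
        pre = np.array([[1.6, 3.7, 5.5], [1.7, 3.8, 5.6], [1.6, 3.8, 5.7]])
        post = np.array([[1.6, 3.9, 5.8], [1.7, 4.0, 5.9], [1.6, 4.0, 6.0]])

        pre_i_iii, pre_i_v = interpeak_intervals(pre[:, 0], pre[:, 1], pre[:, 2])
        post_i_iii, post_i_v = interpeak_intervals(post[:, 0], post[:, 1], post[:, 2])

        # With the full set of ears, this paired comparison corresponds to the
        # prolongation of the I-V interval reported one year after radiotherapy.
        t_val, p_val = stats.ttest_rel(post_i_v, pre_i_v)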

  17. Early changes of auditory brain stem evoked response after radiotherapy for nasopharyngeal carcinoma - a prospective study

    International Nuclear Information System (INIS)

    Lau, S.K.; Wei, W.I.; Sham, J.S.T.; Choy, D.T.K.; Hui, Y.

    1992-01-01

    A prospective study of the effect of radiotherapy for nasopharyngeal carcinoma on hearing was carried out on 49 patients who had pure tone, impedance audiometry and auditory brain stem evoked response (ABR) recordings before, immediately, three, six and 12 months after radiotherapy. Fourteen patients complained of intermittent tinnitus after radiotherapy. We found that 11 initially normal ears of nine patients developed a middle ear effusion, three to six months after radiotherapy. There was mixed sensorineural and conductive hearing impairment after radiotherapy. Persistent impairment of ABR was detected immediately after completion of radiotherapy. The waves I-III and I-V interpeak latency intervals were significantly prolonged one year after radiotherapy. The study shows that radiotherapy for nasopharyngeal carcinoma impairs hearing by acting on the middle ear, the cochlea and the brain stem auditory pathway. (Author)

  18. Brain Connectivity Networks and the Aesthetic Experience of Music.

    Science.gov (United States)

    Reybrouck, Mark; Vuust, Peter; Brattico, Elvira

    2018-06-12

    Listening to music is above all a human experience, which becomes an aesthetic experience when an individual immerses himself/herself in the music, dedicating attention to perceptual-cognitive-affective interpretation and evaluation. The study of these processes where the individual perceives, understands, enjoys and evaluates a set of auditory stimuli has mainly been focused on the effect of music on specific brain structures, as measured with neurophysiology and neuroimaging techniques. The very recent application of network science algorithms to brain research allows an insight into the functional connectivity between brain regions. These studies in network neuroscience have identified distinct circuits that function during goal-directed tasks and resting states. We review recent neuroimaging findings which indicate that music listening is traceable in terms of network connectivity and activations of target regions in the brain, in particular between the auditory cortex, the reward brain system and brain regions active during mind wandering.

  19. Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.

    Science.gov (United States)

    Rutkowski, Tomasz M; Mori, Hiromu

    2015-04-15

    The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye-movements) or from the so-called "ear-blocking-syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, multiple novel head positions are used to evoke combined somatosensory and auditory (via the bone conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain computer interface (tbcaBCI). To further remove EEG interferences and to improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG, and it is also computationally more efficient than empirical mode decomposition. The SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illustrated through information transfer rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, against classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm, together with data-driven preprocessing methods, is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.
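
    As a rough sketch of the classification back-end described above, the snippet below band-pass filters epoched EEG, bins it into features and scores a logistic regression classifier for target versus non-target responses. The simple Butterworth filter stands in for the SST-based denoising, and the epoch layout and cutoff frequencies are assumptions; this is not the authors' tbcaBCI pipeline.

        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def bandpass(eeg, fs, lo=0.5, hi=12.0, order=4):
            """Zero-phase band-pass filter (a stand-in for the SST-based filtering)."""
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, eeg, axis=-1)

        def p300_features(epochs, step=10):
            """Average every `step` samples of (trials x channels x samples) epochs."""
            n_tr, n_ch, n_samp = epochs.shape
            trimmed = epochs[:, :, : n_samp - n_samp % step]
            binned = trimmed.reshape(n_tr, n_ch, -1, step).mean(axis=-1)
            return binned.reshape(n_tr, -1)

        def target_accuracy(epochs, labels, fs):
            """Cross-validated LR accuracy for target (1) vs. non-target (0) epochs."""
            X = p300_features(bandpass(epochs, fs))
            return cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()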

  20. Functional sex differences in human primary auditory cortex

    International Nuclear Information System (INIS)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W.J.; Willemsen, Antoon T.M.

    2007-01-01

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  1. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  2. Neurogenesis in the brain auditory pathway of a marsupial, the northern native cat (Dasyurus hallucatus)

    International Nuclear Information System (INIS)

    Aitkin, L.; Nelson, J.; Farrington, M.; Swann, S.

    1991-01-01

    Neurogenesis in the auditory pathway of the marsupial Dasyurus hallucatus was studied. Intraperitoneal injections of tritiated thymidine (20-40 microCi) were made into pouch-young varying from 1 to 56 days of pouch-life. Animals were killed as adults and brain sections were prepared for autoradiography and counterstained with a Nissl stain. Neurons in the ventral cochlear nucleus were generated prior to 3 days pouch-life, in the superior olive at 5-7 days, and in the dorsal cochlear nucleus over a prolonged period. Inferior collicular neurogenesis lagged behind that in the medial geniculate, the latter taking place between days 3 and 9 and the former between days 7 and 22. Neurogenesis began in the auditory cortex on day 9 and was completed by about day 42. Thus neurogenesis was complete in the medullary auditory nuclei before that in the midbrain commenced, and in the medial geniculate before that in the auditory cortex commenced. The time course of neurogenesis in the auditory pathway of the native cat was very similar to that in another marsupial, the brushtail possum. For both, neurogenesis occurred earlier than in eutherian mammals of a similar size but was more protracted.

  3. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  4. Effects of auditory recall experience on regional cerebral blood flow as assessed by 99m-Tc-HMPAO SPECT in 13 Post Traumatic Stress Disorder patients

    International Nuclear Information System (INIS)

    Pagani, M.M.E.; Salmaso, D.; Soares, J.; Aberg-Wistedt, A.; Sundin, O.; Jacobsson, H.; Larsson, S.A.; Haellstroem, T.

    2002-01-01

    Aim: Post Traumatic Stress Disorder (PTSD) is a severe condition affecting about 8% of the population and increasing the risk of depression. PTSD patients, among other symptoms, suffer from intrusive distressing recollections of the traumatic event and avoidance of stimuli related to the trauma. The aim of this study was to investigate the differences in regional cerebral blood flow (rCBF) between two groups of subjects exposed to the same type of traumatic stressor, one developing PTSD and one not. Materials and Methods: Thirteen subway drivers who developed PTSD (PTSD) and 19 who did not (CTR) after being exposed to an earlier person-under-the-train accident were included in the study. The rCBF distribution was compared between the two groups during a situation involving an auditory evoked re-experiencing of their traumatic event. 99mTc-HMPAO SPECT, using a three-headed gamma camera, was performed and the radiopharmaceutical uptake in 7 bilateral regions of the brain was assessed using a standardised digitalised brain atlas. The chosen regions were those supposed to be involved in fear and emotional response and were located in the thalamus, limbic cortex and prefrontal, temporal and parietal lobes. Analysis of variance (ANOVA) was used to test the significance of the differences in flow in these functional regions. Results: In the global analysis, rCBF significantly differed between groups (0.04), hemispheres (p<0.02) and regions (p<0.0001). There was also a significant region x hemisphere interaction (p<0.0001). As compared to CTR, rCBF in PTSD was increased in the primary and associative auditory cortex (p<0.03) and in the temporal poles (p<0.02). Significant hemispheric differences were found in these latter regions (p<0.001 and p<0.0001, respectively), anterior cingulate cortex (p<0001) and multi-medial parietal association cortex (p<0.0001). Conclusions: Higher rCBF values in PTSD patients under recall of their traumatic experience were found as compared to CTR. The

  5. Verbal auditory agnosia in a patient with traumatic brain injury: A case report.

    Science.gov (United States)

    Kim, Jong Min; Woo, Seung Beom; Lee, Zeeihn; Heo, Sung Jae; Park, Donghwi

    2018-03-01

    Verbal auditory agnosia is the selective inability to recognize verbal sounds. Patients with this disorder lose the ability to understand language, write from dictation, and repeat words, with preserved ability to identify nonverbal sounds. However, to the best of our knowledge, there has been no report of verbal auditory agnosia in an adult patient with traumatic brain injury. Our patient was able to clearly distinguish between language and nonverbal sounds, and he did not have any difficulty in identifying environmental sounds. However, he did not follow oral commands and could not repeat or dictate words. On the other hand, he had fluent and comprehensible speech, and was able to read and understand written words and sentences. He was diagnosed with verbal auditory agnosia. He received speech therapy and cognitive rehabilitation during his hospitalization, and practiced understanding verbal language with the support of written sentences provided alongside speech. Two months after hospitalization, he regained the ability to understand some verbal words. Six months after hospitalization, his ability to understand verbal language had improved to an understandable level when spoken to slowly and face to face, but his comprehension of verbal language was still at the word level, not the sentence level. This case teaches that the evaluation of auditory functions, as well as cognition and language functions, is important for accurate diagnosis and appropriate treatment, because verbal auditory agnosia tends to be easily misdiagnosed as hearing impairment, cognitive dysfunction, or sensory aphasia.

  6. Cortical pitch regions in humans respond primarily to resolved harmonics and are located in specific tonotopic regions of anterior auditory cortex.

    Science.gov (United States)

    Norman-Haignere, Sam; Kanwisher, Nancy; McDermott, Josh H

    2013-12-11

    Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.

  7. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise.

    Science.gov (United States)

    Ioannou, Christos I; Pereda, Ernesto; Lindsen, Job P; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as a binaural beat (BB). The effect of brief BB stimulation on scalp EEG has not been conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentations (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, which were analysed in terms of spectral power and functional connectivity, as measured by two phase-synchrony-based measures, the phase locking value and the phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to the alpha band produced the most significant steady-state responses across groups. Further, processing of low-frequency (delta, theta, alpha) binaural beats had a significant impact on cortical network patterns in alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a form of neuronal entrainment bearing both linear and nonlinear relationships to the beating frequencies.
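
    The two phase-synchrony measures named above have standard definitions and can be computed directly from a pair of narrow-band signals, as in the generic sketch below (equal-length, band-filtered inputs are assumed; this is not the authors' analysis code).

        import numpy as np
        from scipy.signal import hilbert

        def phase_synchrony(x, y):
            """Phase locking value (PLV) and phase lag index (PLI) for two band-limited signals."""
            dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))   # instantaneous phase difference
            plv = np.abs(np.mean(np.exp(1j * dphi)))             # 0 = no locking, 1 = perfect locking
            pli = np.abs(np.mean(np.sign(np.sin(dphi))))         # discounts zero-lag (volume-conducted) coupling
            return plv, pli

    Computing these for every channel pair yields the connectivity matrices from which network measures such as centrality, segregation and integration are then derived.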

  8. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise.

    Directory of Open Access Journals (Sweden)

    Christos I Ioannou

    Full Text Available The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG is not conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase synchrony based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had significant impact on cortical network patterns in the alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, and demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a kind of neuronal entrainment of a linear and nonlinear relationship to the beating frequencies.

  9. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Directory of Open Access Journals (Sweden)

    Meytal Wilf

    Full Text Available Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  10. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Science.gov (United States)

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  11. Reality of auditory verbal hallucinations.

    Science.gov (United States)

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the experienced hallucination was depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  12. The auditory cortex hosts network nodes influential for emotion processing: An fMRI study on music-evoked fear and joy.

    Science.gov (United States)

    Koelsch, Stefan; Skouras, Stavros; Lohmann, Gabriele

    2018-01-01

    Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with "small-world" properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex-and sensory systems in general-in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions.
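
    Eigenvector centrality, the influence measure referred to above, scores each node in proportion to the summed scores of its neighbours; for a non-negative connectivity matrix it is the leading eigenvector, obtainable by power iteration as in the generic sketch below (not the specific centrality-mapping implementation used in the study).

        import numpy as np

        def eigenvector_centrality(W, n_iter=1000, tol=1e-10):
            """Leading eigenvector of a non-negative, symmetric connectivity matrix W."""
            n = W.shape[0]
            c = np.full(n, 1.0 / np.sqrt(n))
            for _ in range(n_iter):
                c_new = W @ c
                c_new /= np.linalg.norm(c_new)        # keep the estimate unit length
                if np.linalg.norm(c_new - c) < tol:   # stop once the estimate is stable
                    break
                c = c_new
            return c_new   # one centrality value per node (voxel or region)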

  13. Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.

    Directory of Open Access Journals (Sweden)

    Francisco Jose Alvarez

    Full Text Available Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study was to examine the effect of perinatal asphyxia on the auditory pathway by recording auditory brain responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1.3 day-old piglets by clamping both carotid arteries with vascular occluders for 30 minutes and lowering the fraction of inspired oxygen. We compared the auditory brain responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during the 6 h after the HI injury. Auditory brain responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although the differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.

  14. Differences in neurogenesis differentiate between core and shell regions of auditory nuclei in the turtle (Pelodiscus sinensis): evolutionary implications.

    Science.gov (United States)

    Zeng, Shao-Ju; Xi, Chao; Zhang, Xin-Wen; Zuo, Ming-Xue

    2007-01-01

    There is a clear core-versus-shell distinction in cytoarchitecture, electrophysiological properties and neural connections in the mesencephalic and diencephalic auditory nuclei of amniotes. Determining whether the embryogenesis of auditory nuclei shows a similar organization is helpful for further understanding the constituent organization and evolution of auditory nuclei. Therefore in the present study, we injected [(3)H]-thymidine into turtle embryos (Pelodiscus sinensis) at various stages of development. Upon hatching, [(3)H]-thymidine labeling was examined in both the core and shell auditory regions in the midbrain, diencephalon and dorsal ventricular ridge. Met-enkephalin and substance P immunohistochemistry was used to distinguish the core and shell regions. In the mesencephalic auditory nucleus, the occurrence of heavily labeled neurons in the nucleus centralis of the torus semicircularis reached its peak at embryonic day 9, one day later than the surrounding shell. In the diencephalic auditory nucleus, the production of heavily labeled neurons in the central region of the reuniens (Re) was highest at embryonic day (E) 8, one day later than that in the shell region of reuniens. In the region of the dorsal ventricular ridge that received inputs from the central region of Re, the appearance of heavily labeled neurons also reached a peak one day later than that in the area receiving inputs from the shell region of reuniens. Thus, there is a core-versus-shell organization of neuronal generation in reptilian auditory areas. Copyright (c) 2007 S. Karger AG, Basel.

  15. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations

    NARCIS (Netherlands)

    Curcic-Blake, Branislava; Ford, Judith M.; Hubl, Daniela; Orlov, Natasza D.; Sommer, Iris E.; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W.; David, Olivier; Mulert, Christoph; Woodward, Todd S.; Aleman, Andre

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of

  16. Auditory middle latency responses differ in right- and left-handed subjects: an evaluation through topographic brain mapping.

    Science.gov (United States)

    Mohebbi, Mehrnaz; Mahmoudian, Saeid; Alborzi, Marzieh Sharifian; Najafi-Koopaie, Mojtaba; Farahani, Ehsan Darestani; Farhadi, Mohammad

    2014-09-01

    To investigate the association of handedness with auditory middle latency responses (AMLRs) using topographic brain mapping by comparing amplitudes and latencies in frontocentral and hemispheric regions of interest (ROIs). The study included 44 healthy subjects with normal hearing (22 left handed and 22 right handed). AMLRs were recorded from 29 scalp electrodes in response to binaural 4-kHz tone bursts. Frontocentral ROI comparisons revealed that Pa and Pb amplitudes were significantly larger in the left-handed than the right-handed group. Topographic brain maps showed different distributions in AMLR components between the two groups. In hemispheric comparisons, Pa amplitude differed significantly across groups. A left-hemisphere emphasis of Pa was found in the right-handed group but not in the left-handed group. This study provides evidence that handedness is associated with AMLR components in frontocentral and hemispheric ROI. Handedness should be considered an essential factor in the clinical or experimental use of AMLRs.

  17. Synchronisation signatures in the listening brain: a perspective from non-invasive neuroelectrophysiology.

    Science.gov (United States)

    Weisz, Nathan; Obleser, Jonas

    2014-01-01

    Human magneto- and electroencephalography (M/EEG) are capable of tracking brain activity at millisecond temporal resolution in an entirely non-invasive manner, a feature that offers unique opportunities to uncover the spatiotemporal dynamics of the hearing brain. In general, precise synchronisation of neural activity within as well as across distributed regions is likely to subserve any cognitive process, with auditory cognition being no exception. Brain oscillations, in a range of frequencies, are a putative hallmark of this synchronisation process. Embedded in a larger effort to relate human cognition to brain oscillations, a field of research is emerging on how synchronisation within, as well as between, brain regions may shape auditory cognition. Combined with much improved source localisation and connectivity techniques, it has become possible to study directly the neural activity of auditory cortex with unprecedented spatio-temporal fidelity and to uncover frequency-specific long-range connectivities across the human cerebral cortex. In the present review, we will summarise recent contributions mainly of our laboratories to this emerging domain. We present (1) a more general introduction on how to study local as well as interareal synchronisation in human M/EEG; (2) how these networks may subserve and influence illusory auditory perception (clinical and non-clinical) and (3) auditory selective attention; and (4) how oscillatory networks further reflect and impact on speech comprehension. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Inter-subject synchronization of brain responses during natural music listening

    Science.gov (United States)

    Abrams, Daniel A.; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J.; Menon, Vinod

    2015-01-01

    Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic ‘real-world’ music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition were disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. PMID:23578016
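
    Inter-subject synchronization of this kind is commonly quantified with leave-one-out inter-subject correlation: each subject's regional time course is correlated with the mean time course of the remaining subjects. The sketch below assumes a subjects x timepoints array for a single region and is a generic illustration, not the authors' pipeline.

        import numpy as np

        def inter_subject_correlation(ts):
            """Leave-one-out ISC for a (n_subjects, n_timepoints) array from one region."""
            n = ts.shape[0]
            isc = np.empty(n)
            for s in range(n):
                others = np.delete(ts, s, axis=0).mean(axis=0)   # average of all other subjects
                isc[s] = np.corrcoef(ts[s], others)[0, 1]
            return isc   # one value per left-out subject; average for a group summary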

  19. Hippocampal volume and auditory attention on a verbal memory task with adult survivors of pediatric brain tumor.

    Science.gov (United States)

    Jayakar, Reema; King, Tricia Z; Morris, Robin; Na, Sabrina

    2015-03-01

    We examined the nature of verbal memory deficits and the possible hippocampal underpinnings in long-term adult survivors of childhood brain tumor. 35 survivors (M = 24.10 ± 4.93 years at testing; 54% female), on average 15 years post-diagnosis, and 59 typically developing adults (M = 22.40 ± 4.35 years, 54% female) participated. Automated FMRIB Software Library (FSL) tools were used to measure hippocampal, putamen, and whole brain volumes. The California Verbal Learning Test-Second Edition (CVLT-II) was used to assess verbal memory. Hippocampal, F(1, 91) = 4.06, ηp² = .04; putamen, F(1, 91) = 11.18, ηp² = .11; and whole brain, F(1, 92) = 18.51, ηp² = .17, volumes were significantly lower for survivors than controls (p < .05). Memory indices of auditory attention list span (Trial 1: F(1, 92) = 12.70, η² = .12) and final list learning (Trial 5: F(1, 92) = 6.01, η² = .06) were significantly lower for survivors (p < .05). Hippocampal volume was associated with auditory attention, but none of the other CVLT-II indices. Secondary analyses for the effect of treatment factors are presented. Volumetric differences between survivors and controls exist for the whole brain and for subcortical structures on average 15 years post-diagnosis. Treatment factors seem to have a unique effect on subcortical structures. Memory differences between survivors and controls are largely contingent upon auditory attention list span. Only hippocampal volume is associated with the auditory attention list span component of verbal memory. These findings are particularly robust for survivors treated with radiation. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  20. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps.

    Science.gov (United States)

    Sood, Mariam R; Sereno, Martin I

    2016-08-01

    Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  1. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  2. Neural plasticity expressed in central auditory structures with and without tinnitus

    Directory of Open Access Journals (Sweden)

    Larry E Roberts

    2012-05-01

    Full Text Available Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To address this question, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by EEG are similar to those induced in age and hearing loss matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz 40-Hz amplitude-modulated sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and P2 transient response, known to localize to primary and nonprimary auditory cortex, respectively. P2 amplitude increased with training equally in participants with tinnitus and in control subjects, suggesting normal remodeling of nonprimary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls ASSR phase advanced toward the stimulus waveform by about ten degrees over training, in agreement with previous results obtained in young normal hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not nonprimary auditory cortex. Auditory training did not reduce tinnitus loudness although a small effect on the tinnitus spectrum was detected.
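
    The 40-Hz ASSR amplitude and phase measures discussed above can be read off the Fourier component of the trial-averaged response at the modulation frequency, as in the generic sketch below (single-channel epochs and the sampling parameters are assumptions; this is not the authors' analysis code).

        import numpy as np

        def assr_amplitude_phase(epochs, fs, f_mod=40.0):
            """Amplitude and phase (radians) of the steady-state response at f_mod.

            epochs : (n_trials, n_samples) single-trial EEG from one channel
            fs     : sampling rate in Hz
            """
            evoked = epochs.mean(axis=0)                 # averaging keeps mainly phase-locked activity
            spectrum = np.fft.rfft(evoked)
            freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
            k = np.argmin(np.abs(freqs - f_mod))         # bin closest to the modulation rate
            amp = 2.0 * np.abs(spectrum[k]) / evoked.size
            phase = np.angle(spectrum[k])
            return amp, phase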

  3. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  4. An auditory oddball brain-computer interface for binary choices.

    Science.gov (United States)

    Halder, S; Rea, M; Andreoni, R; Nijboer, F; Hammer, E M; Kleih, S C; Birbaumer, N; Kübler, A

    2010-04-01

    Brain-computer interfaces (BCIs) provide non-muscular communication for individuals diagnosed with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)). In the final stages of the disease, a BCI cannot rely on the visual modality. This study examined a method to achieve high accuracies using auditory stimuli only. We propose an auditory BCI based on a three-stimulus paradigm. This paradigm is similar to the standard oddball but includes an additional target (i.e. two target stimuli, one frequent stimulus). Three versions of the task were evaluated in which the target stimuli differed in loudness, pitch or direction. Twenty healthy participants achieved an average information transfer rate (ITR) of up to 2.46 bits/min and accuracies of 78.5%. Most subjects (14 of 20) achieved their best performance with targets differing in pitch. This study showed the viability of the paradigm for healthy participants; it will next be evaluated with individuals diagnosed with ALS or locked-in syndrome (LIS) after stroke. The BCI presented here offers communication with binary choices (yes/no) independent of vision. As it requires little time per selection, it may constitute a reliable means of communication for patients who have lost all motor function and have a short attention span. 2009 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
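
    The information transfer rate quoted above (bits/min) is commonly computed with the Wolpaw formula from the number of classes, the classification accuracy, and the selection rate. The sketch below shows that calculation for a binary choice; the selections-per-minute figure is an illustrative assumption, as the abstract does not state it.

    ```python
    import math

    def bits_per_selection(accuracy, n_classes=2):
        """Wolpaw information transfer per selection, in bits."""
        p, n = accuracy, n_classes
        if not 0.0 < p < 1.0:
            raise ValueError("accuracy must lie strictly between 0 and 1")
        return (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))

    def itr_bits_per_minute(accuracy, selections_per_minute, n_classes=2):
        return bits_per_selection(accuracy, n_classes) * selections_per_minute

    # 78.5% correct on a binary choice; 10 selections/min is an assumed rate
    print(round(bits_per_selection(0.785), 3), "bits/selection")
    print(round(itr_bits_per_minute(0.785, 10), 2), "bits/min")
    ```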

  5. Prestimulus influences on auditory perception from sensory representations and decision processes.

    Science.gov (United States)

    Kayser, Stephanie J; McNair, Steven W; Kayser, Christoph

    2016-04-26

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task.
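
    Prestimulus phase and power in a frequency band, as analyzed above, can be estimated from the analytic signal of band-passed data ending at stimulus onset. The following sketch is a generic illustration, not the authors' single-trial decoding pipeline; the band, window length, and sampling rate are assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def prestimulus_phase_power(trial, fs, band=(8.0, 12.0), window_s=0.5):
        """Phase (rad) at stimulus onset and mean power in `band` just before it.

        trial : 1-D single-trial, single-channel signal ending at stimulus onset
        """
        b, a = butter(4, [band[0] / (fs / 2.0), band[1] / (fs / 2.0)], btype="band")
        analytic = hilbert(filtfilt(b, a, trial))
        segment = analytic[-int(window_s * fs):]
        phase_at_onset = float(np.angle(segment[-1]))
        power = float(np.mean(np.abs(segment) ** 2))
        return phase_at_onset, power

    # Illustrative call on noise standing in for a 2-s prestimulus recording
    fs = 500.0
    trial = np.random.default_rng(7).normal(size=int(2 * fs))
    print(prestimulus_phase_power(trial, fs))
    ```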

  6. Category-specific responses to faces and objects in primate auditory cortex

    Directory of Open Access Journals (Sweden)

    Kari L Hoffman

    2008-03-01

    Full Text Available Auditory and visual signals often occur together, and the two sensory channels are known to influence each other to facilitate perception. The neural basis of this integration is not well understood, although other forms of multisensory influences have been shown to occur at surprisingly early stages of processing in cortex. Primary visual cortex neurons can show frequency-tuning to auditory stimuli, and auditory cortex responds selectively to certain somatosensory stimuli, supporting the possibility that complex visual signals may modulate early stages of auditory processing. To elucidate which auditory regions, if any, are responsive to complex visual stimuli, we recorded from auditory cortex and the superior temporal sulcus while presenting visual stimuli consisting of various objects, neutral faces, and facial expressions generated during vocalization. Both objects and conspecific faces elicited robust field potential responses in auditory cortex sites, but the responses varied by category: both neutral and vocalizing faces had a highly consistent negative component (N100) followed by a broader positive component (P180), whereas object responses were more variable in time and shape, but could be discriminated consistently from the responses to faces. The face response did not vary within the face category, i.e., for expressive vs. neutral face stimuli. The presence of responses for both objects and neutral faces suggests that auditory cortex receives highly informative visual input that is not restricted to those stimuli associated with auditory components. These results reveal selectivity for complex visual stimuli in a brain region conventionally described as non-visual unisensory cortex.

  7. Music in the recovering brain

    OpenAIRE

    Särkämö, Teppo

    2011-01-01

    Listening to music involves a widely distributed bilateral network of brain regions that controls many auditory perceptual, cognitive, emotional, and motor functions. Exposure to music can also temporarily improve mood, reduce stress, and enhance cognitive performance as well as promote neural plasticity. However, very little is currently known about the relationship between music perception and auditory and cognitive processes or about the potential therapeutic effects of listening to music ...

  8. Stereotactically-guided Ablation of the Rat Auditory Cortex, and Localization of the Lesion in the Brain.

    Science.gov (United States)

    Lamas, Verónica; Estévez, Sheila; Pernía, Marianni; Plaza, Ignacio; Merchán, Miguel A

    2017-10-11

    The rat auditory cortex (AC) is becoming popular among auditory neuroscience investigators who are interested in experience-dependent plasticity, auditory perceptual processes, and cortical control of sound processing in the subcortical auditory nuclei. To address new challenges, a procedure to accurately locate and surgically expose the auditory cortex would expedite this research effort. Stereotactic neurosurgery is routinely used in pre-clinical research in animal models to engraft a needle or electrode at a pre-defined location within the auditory cortex. In the following protocol, we use stereotactic methods in a novel way. We identify four coordinate points over the surface of the temporal bone of the rat to define a window that, once opened, accurately exposes both the primary (A1) and secondary (Dorsal and Ventral) cortices of the AC. Using this method, we then perform a surgical ablation of the AC. After such a manipulation is performed, it is necessary to assess the localization, size, and extension of the lesions made in the cortex. Thus, we also describe a method to easily locate the AC ablation postmortem using a coordinate map constructed by transferring the cytoarchitectural limits of the AC to the surface of the brain. The combination of the stereotactically-guided location and ablation of the AC with the localization of the injured area in a coordinate map postmortem facilitates the validation of information obtained from the animal, and leads to a better analysis and comprehension of the data.

  9. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
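
    With frequency tagging, attentional modulation of the ASSR can be summarized as a contrast of spectral power at the attended versus ignored stream's modulation frequency. The sketch below is a generic sensor-level illustration (the study itself used distributed source modeling of MEG); the tagging frequencies and synthetic signal are assumptions.

    ```python
    import numpy as np
    from scipy.signal import welch

    def attention_modulation_index(signal, fs, f_attended, f_ignored):
        """Contrast of steady-state power at the attended vs. ignored stream's
        tagging frequency; positive values mean the attended stream dominates."""
        freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))   # 0.25-Hz bins
        p_att = psd[np.argmin(np.abs(freqs - f_attended))]
        p_ign = psd[np.argmin(np.abs(freqs - f_ignored))]
        return (p_att - p_ign) / (p_att + p_ign)

    # Synthetic sensor signal: the 35-Hz (attended) tag is stronger than 45 Hz
    fs = 600.0
    t = np.arange(0, 20.0, 1.0 / fs)
    sensor = 1.2 * np.sin(2 * np.pi * 35 * t) + 0.8 * np.sin(2 * np.pi * 45 * t)
    sensor += np.random.default_rng(0).normal(scale=0.5, size=t.size)
    print(round(attention_modulation_index(sensor, fs, 35.0, 45.0), 2))
    ```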

  10. Familiar auditory sensory training in chronic traumatic brain injury: a case study.

    Science.gov (United States)

    Sullivan, Emily Galassi; Guernon, Ann; Blabas, Brett; Herrold, Amy A; Pape, Theresa L-B

    2018-04-01

    The evaluation and treatment of patients with prolonged periods of seriously impaired consciousness following traumatic brain injury (TBI), such as a vegetative or minimally conscious state, poses considerable challenges, particularly in the chronic phases of recovery. This blinded crossover study explored the effects of familiar auditory sensory training (FAST) compared with a sham stimulation in a patient seven years post severe TBI. Baseline data were collected over 4 weeks to account for variability in status with neurobehavioral measures, including the Disorders of Consciousness scale (DOCS), Coma/Near-Coma scale (CNC), and Consciousness Screening Algorithm. Pre-stimulation neurophysiological assessments were completed as well, namely Brainstem Auditory Evoked Potentials (BAEP) and Somatosensory Evoked Potentials (SSEP). Results revealed a significant improvement in the DOCS neurobehavioral findings after FAST, which was not maintained during the sham. BAEP findings also improved, with maintenance of these improvements following sham stimulation as evidenced by repeat testing. The results emphasize the importance of continued evaluation and treatment of individuals in chronic states of seriously impaired consciousness with a variety of tools. Further study of auditory stimulation as a passive treatment paradigm for this population is warranted. Implications for Rehabilitation: Clinicians should be equipped with treatment options to enhance neurobehavioral improvements when traditional treatment methods fail to deliver or maintain functional behavioral changes. Routine assessment is crucial to detect subtle changes in neurobehavioral function even in chronic states of disordered consciousness and to determine potential preserved cognitive abilities that may not be evident due to unreliable motor responses given motoric impairments. Familiar Auditory Stimulation Training (FAST) is an ideal passive stimulation that can be supplied by families, allied health

  11. Human-like brain hemispheric dominance in birdsong learning.

    Science.gov (United States)

    Moorman, Sanne; Gobes, Sharon M H; Kuijpers, Maaike; Kerkhofs, Amber; Zandbergen, Matthijs A; Bolhuis, Johan J

    2012-07-31

    Unlike nonhuman primates, songbirds learn to vocalize very much like human infants acquire spoken language. In humans, Broca's area in the frontal lobe and Wernicke's area in the temporal lobe are crucially involved in speech production and perception, respectively. Songbirds have analogous brain regions that show a similar neural dissociation between vocal production and auditory perception and memory. In both humans and songbirds, there is evidence for lateralization of neural responsiveness in these brain regions. Human infants already show left-sided dominance in their brain activation when exposed to speech. Moreover, a memory-specific left-sided dominance in Wernicke's area for speech perception has been demonstrated in 2.5-mo-old babies. It is possible that auditory-vocal learning is associated with hemispheric dominance and that this association arose in songbirds and humans through convergent evolution. Therefore, we investigated whether there is similar song memory-related lateralization in the songbird brain. We exposed male zebra finches to tutor or unfamiliar song. We found left-sided dominance of neuronal activation in a Broca-like brain region (HVC, a letter-based name) of juvenile and adult zebra finch males, independent of the song stimulus presented. In addition, juvenile males showed left-sided dominance for tutor song but not for unfamiliar song in a Wernicke-like brain region (the caudomedial nidopallium). Thus, left-sided dominance in the caudomedial nidopallium was specific for the song-learning phase and was memory-related. These findings demonstrate a remarkable neural parallel between birdsong and human spoken language, and they have important consequences for our understanding of the evolution of auditory-vocal learning and its neural mechanisms.
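
    Hemispheric dominance of the kind described above is often quantified with a lateralization index computed from left- and right-hemisphere activation measures. A minimal sketch, with illustrative numbers that are not from the study:

    ```python
    def lateralization_index(left, right):
        """(L - R) / (L + R): positive values indicate left-sided dominance."""
        if left + right == 0:
            raise ValueError("left and right measures cannot both be zero")
        return (left - right) / (left + right)

    # Illustrative counts of activated cells per hemisphere (not study data)
    print(round(lateralization_index(130, 90), 2))   # 0.18 -> left dominance
    ```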

  12. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks

    OpenAIRE

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnove; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dua...

  13. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory

    Science.gov (United States)

    Kraus, Nina; Strait, Dana; Parbery-Clark, Alexandra

    2012-01-01

    Musicians benefit from real-life advantages such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians’ auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. PMID:22524346

  14. Seeing the sound after visual loss: functional MRI in acquired auditory-visual synesthesia.

    Science.gov (United States)

    Yong, Zixin; Hsieh, Po-Jang; Milea, Dan

    2017-02-01

    Acquired auditory-visual synesthesia (AVS) is a rare neurological sign, in which specific auditory stimulation triggers visual experience. In this study, we used event-related fMRI to explore the brain regions correlated with acquired monocular sound-induced phosphenes, which occurred 2 months after unilateral visual loss due to an ischemic optic neuropathy. During the fMRI session, 1-s pure tones at various pitches were presented to the patient, who was asked to report occurrence of sound-induced phosphenes by pressing one of the two buttons (yes/no). The brain activation during phosphene-experienced trials was contrasted with non-phosphene trials and compared to results obtained in one healthy control subject who underwent the same fMRI protocol. Our results suggest, for the first time, that acquired AVS occurring after visual impairment is associated with bilateral activation of primary and secondary visual cortex, possibly due to cross-wiring between auditory and visual sensory modalities.

  15. Evolution of brain region volumes during artificial selection for relative brain size.

    Science.gov (United States)

    Kotrschal, Alexander; Zeng, Hong-Li; van der Bijl, Wouter; Öhman-Mägi, Caroline; Kotrschal, Kurt; Pelckmans, Kristiaan; Kolm, Niclas

    2017-12-01

    The vertebrate brain shows an extremely conserved layout across taxa. Still, the relative sizes of separate brain regions vary markedly between species. One interesting pattern is that larger brains seem associated with increased relative sizes only of certain brain regions, for instance telencephalon and cerebellum. Until now, the evolutionary association between separate brain regions and overall brain size has been based on comparative evidence and remains experimentally untested. Here, we test the evolutionary response of brain regions to directional selection on brain size in guppies (Poecilia reticulata) selected for large and small relative brain size. In these animals, artificial selection led to a fast response in relative brain size, while body size remained unchanged. We use microcomputer tomography to investigate how the volumes of 11 main brain regions respond to selection for larger versus smaller brains. We found no differences in relative brain region volumes between large- and small-brained animals and only minor sex-specific variation. Also, selection did not change allometric scaling between brain and brain region sizes. Our results suggest that brain regions respond similarly to strong directional selection on relative brain size, which indicates that brain anatomy variation in contemporary species most likely stems from direct selection on key regions. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  16. The Brain on Music

    Indian Academy of Sciences (India)

    effects of music training on auditory ... amygdala but is distributed over a network of regions that also include the ... In addition to the emotional impact of music on the brain, these ... social cognition, contact, copathy, and social cohesion in a group.

  17. Sex-Specific Brain Deficits in Auditory Processing in an Animal Model of Cocaine-Related Schizophrenic Disorders

    Directory of Open Access Journals (Sweden)

    Patricia A. Broderick

    2013-04-01

    Full Text Available Cocaine is a psychostimulant in the pharmacological class of drugs called Local Anesthetics. Interestingly, cocaine is the only drug in this class that has a chemical formula comprised of a tropane ring and is, moreover, addictive. The correlation between tropane and addiction is well-studied. Another well-studied correlation is that between psychosis induced by cocaine and that psychosis endogenously present in the schizophrenic patient. Indeed, both of these psychoses exhibit much the same behavioral as well as neurochemical properties across species. Therefore, in order to study the link between schizophrenia and cocaine addiction, we used a behavioral paradigm called Acoustic Startle. We used this acoustic startle paradigm in female versus male Sprague-Dawley animals to discriminate possible sex differences in responses to startle. The startle method operates through auditory pathways in brain via a network of sensorimotor gating processes within auditory cortex, cochlear nuclei, inferior and superior colliculi, pontine reticular nuclei, in addition to mesocorticolimbic brain reward and nigrostriatal motor circuitries. This paper is the first to report sex differences to acoustic stimuli in Sprague-Dawley animals (Rattus norvegicus), although such gender responses to acoustic startle have been reported in humans (Swerdlow et al. 1997) [1]. The startle method monitors pre-pulse inhibition (PPI) as a measure of the loss of sensorimotor gating in the brain's neuronal auditory network; auditory deficiencies can lead to sensory overload and subsequently cognitive dysfunction. Cocaine addicts and schizophrenic patients as well as cocaine treated animals are reported to exhibit symptoms of defective PPI (Geyer et al., 2001) [2]. Key findings are: (a) Cocaine significantly reduced PPI in both sexes. (b) Females were significantly more sensitive than males; reduced PPI was greater in females than in males. (c) Physiological saline had no effect on startle in
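
    Percent prepulse inhibition, the startle measure used above, is conventionally the relative reduction of startle magnitude on prepulse+pulse trials compared with pulse-alone trials. A minimal sketch with made-up values (not data from the study):

    ```python
    import numpy as np

    def percent_ppi(pulse_alone, prepulse_pulse):
        """Percent prepulse inhibition: relative reduction of mean startle
        magnitude on prepulse+pulse trials versus pulse-alone trials."""
        mean_alone = np.mean(pulse_alone)
        mean_prepulse = np.mean(prepulse_pulse)
        return 100.0 * (mean_alone - mean_prepulse) / mean_alone

    # Made-up startle magnitudes (arbitrary units), not data from the study
    saline = percent_ppi([120.0, 110.0, 130.0], [40.0, 50.0, 45.0])
    cocaine = percent_ppi([118.0, 115.0, 125.0], [90.0, 95.0, 85.0])
    print(f"saline PPI {saline:.1f}%, cocaine PPI {cocaine:.1f}% (reduced)")
    ```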

  18. Neuropsychopharmacology of auditory hallucinations: insights from pharmacological functional MRI and perspectives for future research.

    Science.gov (United States)

    Johnsen, Erik; Hugdahl, Kenneth; Fusar-Poli, Paolo; Kroken, Rune A; Kompus, Kristiina

    2013-01-01

    Experiencing auditory verbal hallucinations is a prominent symptom in schizophrenia that also occurs in subjects at enhanced risk for psychosis and in the general population. Drug treatment of auditory hallucinations is challenging, because the current understanding is limited with respect to the neural mechanisms involved, as well as how CNS drugs, such as antipsychotics, influence the subjective experience and neurophysiology of hallucinations. In this article, the authors review studies of the effect of antipsychotic medication on brain activation as measured with functional MRI in patients with auditory verbal hallucinations. First, the authors examine the neural correlates of ongoing auditory hallucinations. Then, the authors critically discuss studies addressing the antipsychotic effect on the neural correlates of complex cognitive tasks. Current evidence suggests that blood oxygen level-dependent effects of antipsychotic drugs reflect specific, regional effects, but studies on the neuropharmacology of auditory hallucinations are scarce. Future directions for pharmacological neuroimaging of auditory hallucinations are discussed.

  19. Communication and control by listening: towards optimal design of a two-class auditory streaming brain-computer interface

    Directory of Open Access Journals (Sweden)

    N. Jeremy Hill

    2012-12-01

    Full Text Available Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two dichotically presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously-published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80% and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one’s eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely

  20. Communication and control by listening: toward optimal design of a two-class auditory streaming brain-computer interface.

    Science.gov (United States)

    Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin

    2012-01-01

    Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
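
    The fixed-phase and drifting-phase stream designs compared in the two records above differ only in whether the two auditory streams share a period. The sketch below generates illustrative onset times for both designs; the periods, offsets, and durations are assumptions, not the study's actual stimulus parameters.

    ```python
    def stream_onsets(duration_s, period_left, period_right, phase_offset=0.5):
        """Onset times (s) for two periodic auditory streams.

        Fixed-phase (FP): equal periods, the right stream shifted by half a period.
        Drifting-phase (DP): unequal periods, so the phase relation drifts.
        """
        def onsets(start, period):
            times, t = [], start
            while t < duration_s:
                times.append(round(t, 3))
                t += period
            return times

        left = onsets(0.0, period_left)
        right = onsets(phase_offset * period_right, period_right)
        return left, right

    fp_left, fp_right = stream_onsets(5.0, 1.0, 1.0)     # fixed phase
    dp_left, dp_right = stream_onsets(5.0, 1.0, 1.25)    # drifting phase
    print(fp_right[:4], dp_right[:4])
    ```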

  1. Cortical Auditory Disorders: A Case of Non-Verbal Disturbances Assessed with Event-Related Brain Potentials

    Directory of Open Access Journals (Sweden)

    Sönke Johannes

    1998-01-01

    Full Text Available In the auditory modality, there has been a considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits whereas pure deficits are rare. Absolute pitch and musicians’ musical abilities have been associated with left hemispheric functions. We report the case of a right handed sound engineer with the absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19–30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings that contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing are discussed and related to the literature concerning cortical auditory disorders.

  2. Cortical auditory disorders: a case of non-verbal disturbances assessed with event-related brain potentials.

    Science.gov (United States)

    Johannes, Sönke; Jöbges, Michael E.; Dengler, Reinhard; Münte, Thomas F.

    1998-01-01

    In the auditory modality, there has been a considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right handed sound engineer with the absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings that contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing are discussed and related to the literature concerning cortical auditory disorders.

  3. Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm

    Science.gov (United States)

    Höhne, Johannes; Tangermann, Michael

    2014-01-01

    Realizing the decoding of brain signals into control commands, brain-computer interfaces (BCI) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches which use event-related potentials (ERP) of the electroencephalogram, auditory BCI systems are challenged with ERP responses, which are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978

  4. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and rates of sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), and greater suppression at 20 ms pairing-intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience

  5. Auditory and Cognitive Factors Associated with Speech-in-Noise Complaints following Mild Traumatic Brain Injury.

    Science.gov (United States)

    Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J

    2017-04-01

    Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine-structure tasks and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and auditory processing factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was

  6. Differential Recruitment of Auditory Cortices in the Consolidation of Recent Auditory Fearful Memories.

    Science.gov (United States)

    Cambiaghi, Marco; Grosso, Anna; Renna, Annamaria; Sacchetti, Benedetto

    2016-08-17

    Memories of frightening events require a protracted consolidation process. Sensory cortex, such as the auditory cortex, is involved in the formation of fearful memories with a more complex sensory stimulus pattern. It remains controversial, however, whether the auditory cortex is also required for fearful memories related to simple sensory stimuli. In the present study, we found that, 1 d after training, the temporary inactivation of either the most anterior region of the auditory cortex, including the primary (Te1) cortex, or the most posterior region, which included the secondary (Te2) component, did not affect the retention of recent memories, which is consistent with the current literature. However, at this time point, the inactivation of the entire auditory cortices completely prevented the formation of new memories. Amnesia was site specific and was not due to auditory stimuli perception or processing and strictly related to the interference with memory consolidation processes. Strikingly, at a late time interval 4 d after training, blocking the posterior part (encompassing the Te2) alone impaired memory retention, whereas the inactivation of the anterior part (encompassing the Te1) left memory unaffected. Together, these data show that the auditory cortex is necessary for the consolidation of auditory fearful memories related to simple tones in rats. Moreover, these results suggest that, at early time intervals, memory information is processed in a distributed network composed of both the anterior and the posterior auditory cortical regions, whereas, at late time intervals, memory processing is concentrated in the most posterior part containing the Te2 region. Memories of threatening experiences undergo a prolonged process of "consolidation" to be maintained for a long time. The dynamic of fearful memory consolidation is poorly understood. Here, we show that 1 d after learning, memory is processed in a distributed network composed of both primary Te1 and

  7. Brain activity is related to individual differences in the number of items stored in auditory short-term memory for pitch: evidence from magnetoencephalography.

    Science.gov (United States)

    Grimault, Stephan; Nolden, Sophie; Lefebvre, Christine; Vachon, François; Hyde, Krista; Peretz, Isabelle; Zatorre, Robert; Robitaille, Nicolas; Jolicoeur, Pierre

    2014-07-01

    We used magnetoencephalography (MEG) to examine brain activity related to the maintenance of non-verbal pitch information in auditory short-term memory (ASTM). We focused on brain activity that increased with the number of items effectively held in memory by the participants during the retention interval of an auditory memory task. We used very simple acoustic materials (i.e., pure tones that varied in pitch) that minimized activation from non-ASTM related systems. MEG revealed neural activity in frontal, temporal, and parietal cortices that increased with a greater number of items effectively held in memory by the participants during the maintenance of pitch representations in ASTM. The present results reinforce the functional role of frontal and temporal cortices in the retention of pitch information in ASTM. This is the first MEG study to provide both fine spatial localization and temporal resolution on the neural mechanisms of non-verbal ASTM for pitch in relation to individual differences in the capacity of ASTM. This research contributes to a comprehensive understanding of the mechanisms mediating the representation and maintenance of basic non-verbal auditory features in the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Brain regions with mirror properties: a meta-analysis of 125 human fMRI studies.

    Science.gov (United States)

    Molenberghs, Pascal; Cunnington, Ross; Mattingley, Jason B

    2012-01-01

    Mirror neurons in macaque area F5 fire when an animal performs an action, such as a mouth or limb movement, and also when the animal passively observes an identical or similar action performed by another individual. Brain-imaging studies in humans conducted over the last 20 years have repeatedly attempted to reveal analogous brain regions with mirror properties in humans, with broad and often speculative claims about their functional significance across a range of cognitive domains, from language to social cognition. Despite such concerted efforts, the likely neural substrates of these mirror regions have remained controversial, and indeed the very existence of a distinct subcategory of human neurons with mirroring properties has been questioned. Here we used activation likelihood estimation (ALE) to provide a quantitative index of the consistency of patterns of fMRI activity measured in human studies of action observation and action execution. From an initial sample of more than 300 published works, data from 125 papers met our strict inclusion and exclusion criteria. The analysis revealed 14 separate clusters in which activation has been consistently attributed to brain regions with mirror properties, encompassing 9 different Brodmann areas. These clusters were located in areas purported to show mirroring properties in the macaque, such as the inferior parietal lobule, inferior frontal gyrus and the adjacent ventral premotor cortex, but surprisingly also in regions such as the primary visual cortex, cerebellum and parts of the limbic system. Our findings suggest a core network of human brain regions that possess mirror properties associated with action observation and execution, with additional areas recruited during tasks that engage non-motor functions, such as auditory, somatosensory and affective components. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
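
    Activation likelihood estimation, used in the meta-analysis above, models each reported activation focus as a 3-D Gaussian and combines the resulting maps voxelwise. The sketch below is a deliberately simplified toy version (real ALE adds sample-size-dependent kernels and permutation-based thresholding); the grid, kernel width, and foci are illustrative assumptions.

    ```python
    import numpy as np

    def toy_ale_map(foci_mm, shape=(20, 20, 20), voxel_mm=4.0, fwhm_mm=10.0):
        """Toy activation likelihood estimate: each focus contributes a Gaussian
        'modeled activation' map; maps are combined voxelwise as 1 - prod(1 - MA)."""
        sigma_vox = fwhm_mm / (2.3548 * voxel_mm)            # FWHM -> SD in voxels
        grid = np.indices(shape).astype(float)
        union_complement = np.ones(shape)
        for focus in foci_mm:
            center = np.asarray(focus, dtype=float) / voxel_mm
            d2 = sum((grid[i] - center[i]) ** 2 for i in range(3))
            modeled_activation = np.exp(-d2 / (2.0 * sigma_vox ** 2))
            union_complement *= (1.0 - modeled_activation)
        return 1.0 - union_complement

    foci = [(40.0, 40.0, 40.0), (44.0, 40.0, 36.0)]          # illustrative foci (mm)
    print(float(toy_ale_map(foci).max()))
    ```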

  9. The overlapping community structure of structural brain network in young healthy individuals.

    Directory of Open Access Journals (Sweden)

    Kai Wu

    2011-05-01

    Full Text Available Community structure is a universal and significant feature of many complex networks in biology, society, and economics. Community structure has also been revealed in human brain structural and functional networks in previous studies. However, communities overlap and share many edges and nodes. Uncovering the overlapping community structure of complex networks remains largely unknown in human brain networks. Here, using regional gray matter volume, we investigated the structural brain network among 90 brain regions (according to a predefined anatomical atlas) in 462 young, healthy individuals. Overlapped nodes between communities were defined by assuming that nodes (brain regions) can belong to more than one community. We demonstrated that 90 brain regions were organized into 5 overlapping communities associated with several well-known brain systems, such as the auditory/language, visuospatial, emotion, decision-making, social, control of action, memory/learning, and visual systems. The overlapped nodes were mostly involved in an inferior-posterior pattern and were primarily related to auditory and visual perception. The overlapped nodes were mainly attributed to brain regions with higher node degrees and nodal efficiency and played a pivotal role in the flow of information through the structural brain network. Our results revealed fuzzy boundaries between communities by identifying overlapped nodes and provided new insights into the understanding of the relationship between the structure and function of the human brain. This study provides the first report of the overlapping community structure of the structural network of the human brain.
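
    Overlapping community structure of the kind reported above can be illustrated with clique percolation, in which nodes belonging to more than one community are the overlapped nodes. The sketch below uses networkx on a toy graph; it is a generic illustration, not the algorithm used in the cited study.

    ```python
    import networkx as nx
    from networkx.algorithms.community import k_clique_communities

    def overlapping_nodes(graph, k=3):
        """Communities by clique percolation; nodes appearing in more than one
        community are counted as overlapped nodes."""
        communities = [set(c) for c in k_clique_communities(graph, k)]
        counts = {}
        for community in communities:
            for node in community:
                counts[node] = counts.get(node, 0) + 1
        overlapped = {node for node, n in counts.items() if n > 1}
        return communities, overlapped

    # Toy graph standing in for a 90-region structural brain network
    G = nx.karate_club_graph()
    comms, shared = overlapping_nodes(G, k=4)
    print(len(comms), "communities;", len(shared), "overlapped nodes")
    ```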

  10. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  11. Subthalamic nucleus deep brain stimulation affects distractor interference in auditory working memory.

    Science.gov (United States)

    Camalier, Corrie R; Wang, Alice Y; McIntosh, Lindsey G; Park, Sohee; Neimat, Joseph S

    2017-03-01

    Computational and theoretical accounts hypothesize the basal ganglia play a supramodal "gating" role in the maintenance of working memory representations, especially in preservation from distractor interference. There are currently two major limitations to this account. The first is that supporting experiments have focused exclusively on the visuospatial domain, leaving questions as to whether such "gating" is domain-specific. The second is that current evidence relies on correlational measures, as it is extremely difficult to causally and reversibly manipulate subcortical structures in humans. To address these shortcomings, we examined non-spatial, auditory working memory performance during reversible modulation of the basal ganglia, an approach afforded by deep brain stimulation of the subthalamic nucleus. We found that subthalamic nucleus stimulation impaired auditory working memory performance, specifically in the group tested in the presence of distractors, even though the distractors were predictable and completely irrelevant to the encoding of the task stimuli. This study provides key causal evidence that the basal ganglia act as a supramodal filter in working memory processes, further adding to our growing understanding of their role in cognition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. From sensation to percept: the neural signature of auditory event-related potentials.

    Science.gov (United States)

    Joos, Kathleen; Gilles, Annick; Van de Heyning, Paul; De Ridder, Dirk; Vanneste, Sven

    2014-05-01

    An external auditory stimulus induces an auditory sensation which may lead to a conscious auditory perception. Although the sensory aspect is well known, it is still a question how an auditory stimulus results in an individual's conscious percept. To unravel the uncertainties concerning the neural correlates of a conscious auditory percept, event-related potentials may serve as a useful tool. In the current review we mainly wanted to shed light on the perceptual aspects of auditory processing and therefore we mainly focused on the auditory late-latency responses. Moreover, there is increasing evidence that perception is an active process in which the brain searches for the information it expects to be present, suggesting that auditory perception requires the presence of both bottom-up, i.e. sensory and top-down, i.e. prediction-driven processing. Therefore, the auditory evoked potentials will be interpreted in the context of the Bayesian brain model, in which the brain predicts which information it expects and when this will happen. The internal representation of the auditory environment will be verified by sensation samples of the environment (P50, N100). When this incoming information violates the expectation, it will induce the emission of a prediction error signal (Mismatch Negativity), activating higher-order neural networks and inducing the update of prior internal representations of the environment (P300). Copyright © 2014 Elsevier Ltd. All rights reserved.
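
    The Bayesian-brain reading of the ERP components above can be caricatured as a precision-weighted update of an internal prediction, with the mismatch response scaling with the prediction error. The sketch below is a toy Gaussian update under assumed precisions, not the authors' model.

    ```python
    def update_prediction(prior_mean, observation, prior_precision, sensory_precision):
        """One Bayesian-style update: the prediction error, weighted by relative
        precision, shifts the internal model toward the new observation."""
        prediction_error = observation - prior_mean
        gain = sensory_precision / (sensory_precision + prior_precision)
        posterior_mean = prior_mean + gain * prediction_error
        posterior_precision = prior_precision + sensory_precision
        return posterior_mean, posterior_precision, prediction_error

    # A deviant tone (1200 Hz) after a run of 1000-Hz standards
    mean, precision = 1000.0, 1.0
    for tone in [1000.0, 1000.0, 1000.0, 1200.0]:
        mean, precision, error = update_prediction(mean, tone, precision, 1.0)
    print(round(mean, 1), round(error, 1))  # large error for the deviant (MMN-like)
    ```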

  13. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

    Science.gov (United States)

    Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory

    2017-01-01

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

  14. Affective Stimuli for an Auditory P300 Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Akinari Onishi

    2017-09-01

    Full Text Available Gaze-independent brain computer interfaces (BCIs) are a potential communication tool for persons with paralysis. This study applies affective auditory stimuli to investigate their effects using a P300 BCI. Fifteen able-bodied participants operated the P300 BCI, with positive and negative affective sounds (PA: a meowing cat sound, NA: a screaming cat sound). Permuted stimuli of the positive and negative affective sounds (permuted-PA, permuted-NA) were also used for comparison. Electroencephalography data was collected, and offline classification accuracies were compared. We used a visual analog scale (VAS) to measure positive and negative affective feelings in the participants. The mean classification accuracies were 84.7% for PA and 67.3% for permuted-PA, while the VAS scores were 58.5 for PA and −12.1 for permuted-PA. The positive affective stimulus showed significantly higher accuracy and VAS scores than the negative affective stimulus. In contrast, mean classification accuracies were 77.3% for NA and 76.0% for permuted-NA, while the VAS scores were −50.0 for NA and −39.2 for permuted-NA, which are not significantly different. We determined that a positive affective stimulus with accompanying positive affective feelings significantly improved BCI accuracy. Additionally, an ALS patient achieved 90% online classification accuracy. These results suggest that affective stimuli may be useful for preparing a practical auditory BCI system for patients with disabilities.
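
    Offline classification accuracies like those reported above are typically obtained by cross-validating a classifier on single-trial ERP epochs labeled as target versus non-target. The sketch below shows a generic shrinkage-LDA approach with random placeholder data; it is not the authors' pipeline, and all parameters are illustrative.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def offline_p300_accuracy(epochs, labels, folds=5):
        """Cross-validated target vs. non-target classification accuracy.

        epochs : (n_trials, n_channels, n_samples) single-trial EEG
        labels : (n_trials,) 1 = target (attended) stimulus, 0 = non-target
        """
        X = epochs.reshape(len(epochs), -1)            # flatten channel x time
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        return cross_val_score(clf, X, labels, cv=folds).mean()

    # Random data standing in for real recordings; expect ~chance accuracy
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 8, 100))
    y = rng.integers(0, 2, size=120)
    print(f"demo accuracy: {offline_p300_accuracy(X, y):.2f}")
    ```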

  15. Synchrony of auditory brain responses predicts behavioral ability to keep still in children with autism spectrum disorder

    Directory of Open Access Journals (Sweden)

    Yuko Yoshimura

    2016-01-01

    Full Text Available The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on a previous report of an asynchrony of P1m in children with ADHD, to clarify whether the P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD. In addition to synchronization, we investigated the right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule-Generic (ADOS-G) subscale. However, there was a significant correlation between the degrees of hemispheric synchronization and the ability to keep still during 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral brain auditory processing system is associated with ADHD-like symptoms in children with ASD.

  16. The impact of auditory working memory training on the fronto-parietal working memory network.

    Science.gov (United States)

    Schneiders, Julia A; Opitz, Bertram; Tang, Huijun; Deng, Yuan; Xie, Chaoxiang; Li, Hong; Mecklinger, Axel

    2012-01-01

    Working memory training has been widely used to investigate working memory processes. We have shown previously that visual working memory benefits only from intra-modal visual but not from across-modal auditory working memory training. In the present functional magnetic resonance imaging study we examined whether auditory working memory processes can also be trained specifically and which training-induced activation changes accompany these effects. It was investigated whether working memory training with strongly distinct auditory materials transfers exclusively to an auditory (intra-modal) working memory task or whether it generalizes to an (across-modal) visual working memory task. We used adaptive n-back training with tonal sequences and a passive control condition. The memory training led to a reliable training gain. Transfer effects were found for the (intra-modal) auditory but not for the (across-modal) visual transfer task. Training-induced activation decreases in the auditory transfer task were found in two regions in the right inferior frontal gyrus. These effects confirm our previous findings in the visual modality and extend intra-modal effects in the prefrontal cortex to the auditory modality. As the right inferior frontal gyrus is frequently found in maintaining modality-specific auditory information, these results might reflect increased neural efficiency in auditory working memory processes. Furthermore, task-unspecific (amodal) activation decreases in the visual and auditory transfer task were found in the right inferior parietal lobule and the superior portion of the right middle frontal gyrus, reflecting less demand on general attentional control processes. These data are in good agreement with amodal activation decreases within the same brain regions on a visual transfer task reported previously.

  17. The impact of auditory working memory training on the fronto-parietal working memory network

    Science.gov (United States)

    Schneiders, Julia A.; Opitz, Bertram; Tang, Huijun; Deng, Yuan; Xie, Chaoxiang; Li, Hong; Mecklinger, Axel

    2012-01-01

    Working memory training has been widely used to investigate working memory processes. We have shown previously that visual working memory benefits only from intra-modal visual but not from across-modal auditory working memory training. In the present functional magnetic resonance imaging study we examined whether auditory working memory processes can also be trained specifically and which training-induced activation changes accompany these effects. It was investigated whether working memory training with strongly distinct auditory materials transfers exclusively to an auditory (intra-modal) working memory task or whether it generalizes to an (across-modal) visual working memory task. We used adaptive n-back training with tonal sequences and a passive control condition. The memory training led to a reliable training gain. Transfer effects were found for the (intra-modal) auditory but not for the (across-modal) visual transfer task. Training-induced activation decreases in the auditory transfer task were found in two regions in the right inferior frontal gyrus. These effects confirm our previous findings in the visual modality and extend intra-modal effects in the prefrontal cortex to the auditory modality. As the right inferior frontal gyrus is frequently implicated in maintaining modality-specific auditory information, these results might reflect increased neural efficiency in auditory working memory processes. Furthermore, task-unspecific (amodal) activation decreases in the visual and auditory transfer task were found in the right inferior parietal lobule and the superior portion of the right middle frontal gyrus, reflecting less demand on general attentional control processes. These data are in good agreement with amodal activation decreases within the same brain regions on a visual transfer task reported previously. PMID:22701418

  18. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    Science.gov (United States)

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, either bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a unilateral subcortical lesion involving the acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  19. Functional MRI of the vocalization-processing network in the macaque brain

    Directory of Open Access Journals (Sweden)

    Michael eOrtiz-Rios

    2015-04-01

    Using functional magnetic resonance imaging in awake behaving monkeys we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (scrambled calls) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys preferentially activate the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt.

  20. Propofol disrupts functional interactions between sensory and high-order processing of auditory verbal memory.

    Science.gov (United States)

    Liu, Xiaolin; Lauer, Kathryn K; Ward, Barney D; Rao, Stephen M; Li, Shi-Jiang; Hudetz, Anthony G

    2012-10-01

    Current theories suggest that disrupting cortical information integration may account for the mechanism of general anesthesia in suppressing consciousness. Human cognitive operations take place in hierarchically structured neural organizations in the brain. The process by which low-order neural representations of sensory stimuli become integrated in high-order cortices is also known as cognitive binding. Combining neuroimaging, cognitive neuroscience, and anesthetic manipulation, we examined how cognitive networks involved in auditory verbal memory are maintained in wakefulness, disrupted in propofol-induced deep sedation, and re-established in recovery. Inspired by the notion of cognitive binding, a functional magnetic resonance imaging-guided connectivity analysis was utilized to assess the integrity of functional interactions within and between different levels of the task-defined brain regions. Task-related responses persisted in the primary auditory cortex (PAC), but vanished in the inferior frontal gyrus (IFG) and premotor areas in deep sedation. For connectivity analysis, seed regions representing sensory and high-order processing of the memory task were identified in the PAC and IFG. Propofol disrupted connections from the PAC seed to the frontal regions and thalamus, but not the connections from the IFG seed to a set of widely distributed brain regions in the temporal, frontal, and parietal lobes (with the exception of the PAC). These latter regions have been implicated in mediating verbal comprehension and memory. These results suggest that propofol disrupts cognition by blocking the projection of sensory information to high-order processing networks and thus preventing information integration. Such findings contribute to our understanding of anesthetic mechanisms as related to information integration in the brain. Copyright © 2011 Wiley Periodicals, Inc.

  1. Dynamic reconfiguration of human brain functional networks through neurofeedback.

    Science.gov (United States)

    Haller, Sven; Kopel, Rotem; Jhooti, Permi; Haas, Tanja; Scharnowski, Frank; Lovblad, Karl-Olof; Scheffler, Klaus; Van De Ville, Dimitri

    2013-11-01

    Recent fMRI studies demonstrated that functional connectivity is altered following cognitive tasks (e.g., learning) or due to various neurological disorders. We tested whether real-time fMRI-based neurofeedback can be a tool to voluntarily reconfigure brain network interactions. To disentangle learning-related from regulation-related effects, we first trained participants to voluntarily regulate activity in the auditory cortex (training phase) and subsequently asked participants to exert learned voluntary self-regulation in the absence of feedback (transfer phase without learning). Using independent component analysis (ICA), we found network reconfigurations (increases in functional network connectivity) during the neurofeedback training phase between the auditory target region and (1) the auditory pathway; (2) visual regions related to visual feedback processing; (3) the insula, related to introspection and self-regulation; and (4) working memory and high-level visual attention areas related to cognitive effort. Interestingly, the auditory target region was identified as the hub of the reconfigured functional networks without a priori assumptions. During the transfer phase, we again found specific functional connectivity reconfiguration between auditory and attention networks, confirming the specific effect of self-regulation on functional connectivity. Functional connectivity to working-memory-related networks was no longer altered, consistent with the absent demand on working memory. We demonstrate that neurofeedback learning is mediated by widespread changes in functional connectivity. In contrast, applying learned self-regulation involves more limited and specific network changes in an auditory setup intended as a model for tinnitus. Hence, neurofeedback training might be used to promote recovery from neurological disorders that are linked to abnormal patterns of brain connectivity. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    Science.gov (United States)

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

    The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. A magnetometer system is expected to be more sensitive to brain stem sources than a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strengths, even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted to a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.
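
    To make the forward/inverse logic of the phantom experiment concrete, here is a minimal sketch of dipole localization by least squares over a grid of candidate positions. It deliberately uses the free-space field of a point current dipole (Biot-Savart, ignoring the volume currents and spherical-conductor model that a real MEG analysis requires), and the sensor positions, dipole moment and grid are invented for illustration.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability (T*m/A)

def dipole_field(q, r_dip, sensors):
    """Magnetic field of a point current dipole q (A*m) at position r_dip,
    evaluated at an array of sensor positions (N x 3), assuming an
    unbounded homogeneous medium (volume currents neglected)."""
    r = sensors - r_dip                              # vectors dipole -> sensors
    dist = np.linalg.norm(r, axis=1, keepdims=True)
    return MU0 / (4 * np.pi) * np.cross(q, r) / dist**3

def fit_dipole_position(measured, q, sensors, grid):
    """Brute-force inverse solution: return the grid point whose
    forward field best matches the measurement (least squares)."""
    errors = [np.sum((dipole_field(q, g, sensors) - measured) ** 2) for g in grid]
    return grid[int(np.argmin(errors))]

# toy example: three sensors 12 cm above the origin, dipole at 8 cm depth
sensors = np.array([[0.0, 0.0, 0.12], [0.03, 0.0, 0.12], [0.0, 0.03, 0.12]])
q_true = np.array([0.0, 20e-9, 0.0])                 # 20 nAm, a typical cortical strength
measured = dipole_field(q_true, np.array([0.0, 0.0, 0.08]), sensors)
grid = [np.array([0.0, 0.0, z]) for z in np.linspace(0.02, 0.11, 10)]
print(fit_dipole_position(measured, q_true, sensors, grid))   # recovers z = 0.08
```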

  3. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding.

    Science.gov (United States)

    Atilgan, Huriye; Town, Stephen M; Wood, Katherine C; Jones, Gareth P; Maddox, Ross K; Lee, Adrian K C; Bizley, Jennifer K

    2018-02-07

    How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  4. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap, whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error) and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources, even though the predictions originate from different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Primary Generators of Visually Evoked Field Potentials Recorded in the Macaque Auditory Cortex

    Science.gov (United States)

    Smiley, John F.; Schroeder, Charles E.

    2017-01-01

    Prior studies have reported “local” field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode they often also reflect synaptic activity occurring at distant sites. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be “contaminated” by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of the visual FP with cortical depth, suggesting that its generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such “far-field” activity, in addition to, or in the absence of, local synaptic responses. SIGNIFICANCE STATEMENT Field potentials (FPs) can index neuronal population activity that is not evident in action potentials. However, due to volume conduction, FPs may reflect activity in distant neurons superimposed upon that of neurons close to the recording electrode. This is
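
    The current source density analysis referred to above can be approximated, in its simplest one-dimensional form, by the negative second spatial derivative of the laminar LFP profile. The sketch below uses the standard second-difference estimator; the contact spacing, conductivity value and random data are illustrative assumptions, not the parameters of the study.

```python
import numpy as np

def csd_second_difference(lfp, spacing_mm=0.1, sigma=0.3):
    """One-dimensional current source density from a laminar LFP profile.

    lfp: array of shape (n_contacts, n_timepoints), ordered superficial
         to deep. CSD is estimated as -sigma * d2(phi)/dz2 using the
         standard second spatial difference, so one contact is lost at
         each end of the probe.
    sigma: tissue conductivity in S/m (illustrative value).
    """
    h = spacing_mm * 1e-3  # contact spacing in metres
    d2 = lfp[2:, :] - 2 * lfp[1:-1, :] + lfp[:-2, :]
    return -sigma * d2 / h**2

# example: random "LFP" from a 24-contact probe sampled at 1 kHz for 0.5 s
lfp = np.random.randn(24, 500) * 1e-4
csd = csd_second_difference(lfp)
print(csd.shape)   # (22, 500)
```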

  6. Age and Gender Effects On Auditory Brain Stem Response (ABR)

    Directory of Open Access Journals (Sweden)

    Yones Lotfi

    2012-10-01

    Objectives: The Auditory Brain Stem Response (ABR) results from stimulation of the eighth nerve and brain stem nuclei. Several factors, especially sex and age, may affect ABR latencies, interpeak latencies (IPLs) and amplitudes. In this study, the influence of age and sex on the ABR was examined. Methods: This study was performed on 120 cases (60 males and 60 females) at the Akhavan rehabilitation center of the University of Welfare and Rehabilitation Sciences, Tehran, Iran. Cases were divided into three age groups: 18-30, 31-50 and 51-70 years old. Each age group consisted of 20 males and 20 females. The influence of age and sex on the absolute latencies of waves I and V and on the I-V IPL was examined. Results: An independent t test showed that females had significantly shorter wave I and wave V latencies and shorter I-V IPLs than males (P<0.001). A two-way ANOVA showed that the latencies of waves I and V and the I-V IPL in the 51-70 years old group were significantly longer than in the 18-30 and 31-50 years old groups (P<0.001). Discussion: According to the results of the present study and similar studies, different clinical norms should be established for older adults and for each gender.
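
    A minimal sketch of the statistics reported in this record: an independent t test for the sex difference and a two-way ANOVA over age group and sex, applied to hypothetical wave V latencies. The column names and simulated values are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# hypothetical table: one row per subject with wave V latency in ms
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sex": np.repeat(["male", "female"], 60),
    "age_group": np.tile(np.repeat(["18-30", "31-50", "51-70"], 20), 2),
    "wave_v_latency": rng.normal(5.6, 0.2, 120),
})

# independent-samples t test for the sex difference
male = df.loc[df.sex == "male", "wave_v_latency"]
female = df.loc[df.sex == "female", "wave_v_latency"]
print(stats.ttest_ind(male, female))

# two-way ANOVA with age group and sex as factors
model = smf.ols("wave_v_latency ~ C(age_group) * C(sex)", data=df).fit()
print(anova_lm(model, typ=2))
```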

  7. Entrainment to an auditory signal: Is attention involved?

    NARCIS (Netherlands)

    Kunert, R.; Jongman, S.R.

    2017-01-01

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain, however, is unclear. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of

  8. Functional associations at global brain level during perception of an auditory illusion by applying maximal information coefficient

    Science.gov (United States)

    Bhattacharya, Joydeep; Pereda, Ernesto; Ioannou, Christos

    2018-02-01

    Maximal information coefficient (MIC) is a recently introduced information-theoretic measure of functional association with promising potential for application to high-dimensional complex data sets. Here, we applied MIC to reveal the nature of the functional associations between different brain regions during the perception of binaural beats (BB); a BB is an auditory illusion occurring when two sinusoidal tones of slightly different frequency are presented separately to each ear and an illusory beat at the difference frequency is perceived. We recorded sixty-four-channel EEG from two groups of participants, musicians and non-musicians, during the presentation of BB, and systematically varied the frequency difference from 1 Hz to 48 Hz. Participants were also presented with non-binaural beat (NBB) stimuli, in which the same frequency was presented to both ears. Across groups, as compared to NBB, (i) BB conditions produced the most robust changes in the MIC values at the whole-brain level when the frequency differences were in the classical alpha range (8-12 Hz), and (ii) the number of electrode pairs showing nonlinear associations decreased gradually with increasing frequency difference. Between groups, significant effects were found for BBs in the broad gamma frequency range (34-48 Hz), but such effects were not observed between groups during NBB. Altogether, these results reveal the nature of functional associations at the whole-brain level during binaural beat perception and demonstrate the usefulness of MIC in characterizing interregional neural dependencies.
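
    A minimal sketch of computing MIC between all electrode pairs, assuming the third-party minepy package (which implements the MINE statistics) is installed; the channel count, MINE parameters and random data below are placeholders rather than the settings used in the study.

```python
import numpy as np
from minepy import MINE  # third-party package implementing MIC (assumed installed)

def mic_matrix(eeg, alpha=0.6, c=15):
    """Pairwise maximal information coefficient between EEG channels.

    eeg: array of shape (n_channels, n_samples). Returns a symmetric
    (n_channels x n_channels) matrix of MIC values.
    """
    n_ch = eeg.shape[0]
    mic = np.zeros((n_ch, n_ch))
    estimator = MINE(alpha=alpha, c=c)
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            estimator.compute_score(eeg[i], eeg[j])
            mic[i, j] = mic[j, i] = estimator.mic()
    return mic

# placeholder data: 64 channels, 2 s at 500 Hz
eeg = np.random.randn(64, 1000)
print(mic_matrix(eeg).shape)   # (64, 64)
```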

  9. Anatomy, Physiology and Function of the Auditory System

    Science.gov (United States)

    Kollmeier, Birger

    The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles malleus, incus and stapes) and the inner ear (the cochlea, which is connected to the three semicircular canals by the vestibule, which provides the sense of balance). The cochlea is connected to the brain stem via the eighth cranial nerve, i.e. the vestibulocochlear nerve or nervus statoacusticus. Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview of the anatomy of the auditory system is provided in Figure 1.

  10. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
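
    Discrimination ability indexed by d', as used in this record, can be computed from hit and false-alarm counts as the difference of their z-transforms. The sketch below applies a log-linear correction so extreme proportions stay finite; the correction choice and the example counts are assumptions, not values from the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for old/new recognition.

    Uses the log-linear correction (add 0.5 to each cell) so that
    hit or false-alarm rates of exactly 0 or 1 remain finite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. 40 old sounds (32 hits) and 40 new sounds (6 false alarms)
print(round(d_prime(32, 8, 6, 34), 2))   # roughly 1.82
```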

  11. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    Science.gov (United States)

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

    The formation of echoic memory traces has traditionally been inferred from enhanced responses to deviations from them. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250 ms after sound deviation, is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory-trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250 ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little interest. This study investigates the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, in which tone trains of different frequencies and different lengths are presented randomly. Source generators of repetition-enhanced (RE) and repetition-suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95-150 ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230-270 ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, such as the medial parietal cortex and frontal areas. The different timing and location of the neural generators involved in RS and RE point to the existence of functionally separate mechanisms devoted to acoustic memory-trace formation at different auditory processing stages of the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Decreased auditory GABA+ concentrations in presbycusis demonstrated by edited magnetic resonance spectroscopy.

    Science.gov (United States)

    Gao, Fei; Wang, Guangbin; Ma, Wen; Ren, Fuxin; Li, Muwei; Dong, Yuling; Liu, Cheng; Liu, Bo; Bai, Xue; Zhao, Bin; Edden, Richard A E

    2015-02-01

    Gamma-aminobutyric acid (GABA) is the main inhibitory neurotransmitter in the central auditory system. Altered GABAergic neurotransmission has been found in both the inferior colliculus and the auditory cortex in animal models of presbycusis. Edited magnetic resonance spectroscopy (MRS), using the MEGA-PRESS sequence, is the most widely used technique for detecting GABA in the human brain. However, to date there has been a paucity of studies exploring changes to the GABA concentrations in the auditory region of patients with presbycusis. In this study, sixteen patients with presbycusis (5 males/11 females, mean age 63.1 ± 2.6 years) and twenty healthy controls (6 males/14 females, mean age 62.5 ± 2.3 years) underwent audiological and MRS examinations. Pure tone audiometry from 0.125 to 8 kHz and tympanometry were used to assess the hearing abilities of all subjects. The pure tone average (PTA; the average of hearing thresholds at 0.5, 1, 2 and 4 kHz) was calculated. The MEGA-PRESS sequence was used to measure GABA+ concentrations in 4 × 3 × 3 cm³ volumes centered on the left and right Heschl's gyri. GABA+ concentrations were significantly lower in the presbycusis group compared to the control group (left auditory regions: p = 0.002, right auditory regions: p = 0.008). Significant negative correlations were observed between PTA and GABA+ concentrations in the presbycusis group (r = -0.57, p = 0.02), while a similar trend was found in the control group (r = -0.40, p = 0.08). These results are consistent with a hypothesis of dysfunctional GABAergic neurotransmission in the central auditory system in presbycusis and suggest a potential treatment target for presbycusis. Copyright © 2014 Elsevier Inc. All rights reserved.
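
    The pure tone average defined in this record (the mean threshold at 0.5, 1, 2 and 4 kHz) and its correlation with GABA+ levels can be sketched as follows; the audiogram and the group values are invented for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

def pure_tone_average(thresholds_db):
    """PTA as defined in the study: mean threshold at 0.5, 1, 2 and 4 kHz."""
    return np.mean([thresholds_db[f] for f in (500, 1000, 2000, 4000)])

# hypothetical audiogram of one participant (frequency in Hz -> dB HL)
audiogram = {125: 15, 250: 15, 500: 20, 1000: 25, 2000: 35, 4000: 55, 8000: 70}
print(pure_tone_average(audiogram))   # 33.75 dB HL

# Pearson correlation between PTA and GABA+ across a made-up group
pta = np.array([34, 41, 28, 45, 39, 52, 31, 48])
gaba = np.array([1.9, 1.6, 2.1, 1.5, 1.7, 1.3, 2.0, 1.4])
r, p = pearsonr(pta, gaba)
print(r, p)
```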

  13. The Impact of Auditory Working Memory Training on the Fronto-Parietal Working Memory Network

    Directory of Open Access Journals (Sweden)

    Julia eSchneiders

    2012-06-01

    Working memory training has been widely used to investigate working memory processes. We have shown previously that visual working memory benefits only from intra-modal visual but not from across-modal auditory working memory training. In the present functional magnetic resonance imaging study we examined whether auditory working memory processes can also be trained specifically and which training-induced activation changes accompany these effects. It was investigated whether working memory training with strongly distinct auditory materials transfers exclusively to an auditory (intra-modal) working memory task or whether it generalizes to an (across-modal) visual working memory task. We used an adaptive n-back training with tonal sequences and a passive control condition. The memory training led to a reliable training gain. Transfer effects were found for the (intra-modal) auditory but not for the (across-modal) visual 2-back task. Training-induced activation changes in the auditory 2-back task were found in two regions in the right inferior frontal gyrus. These effects confirm our previous findings in the visual modality and extend intra-modal effects to the auditory modality. These results might reflect increased neural efficiency in auditory working memory processes, as the right inferior frontal gyrus is frequently implicated in maintaining modality-specific auditory information. By this, these effects are analogous to the activation decreases in the right middle frontal gyrus for the visual modality in our previous study. Furthermore, task-unspecific (across-modal) activation decreases in the visual and auditory 2-back tasks were found in the right inferior parietal lobule and the superior portion of the right middle frontal gyrus, reflecting reduced demands on general attentional control processes. These data are in good agreement with across-modal activation decreases within the same brain regions on a visual 2-back task reported previously.

  14. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Maojin Liang

    2017-10-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited. Ten children were good CI performers (GCP) and ten were poor (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity was in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We suggest that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential for cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  15. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants.

    Science.gov (United States)

    Liang, Maojin; Zhang, Junpeng; Liu, Jiahao; Chen, Yuebo; Cai, Yuexin; Wang, Xianjun; Wang, Junbo; Zhang, Xueyuan; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing

    2017-01-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited. Ten children were good CI performers (GCP) and ten were poor (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity was in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We suggest that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential for cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  16. An evaluation of training with an auditory P300 brain-computer interface for the Japanese Hiragana syllabary

    Directory of Open Access Journals (Sweden)

    Sebastian Halder

    2016-09-01

    Gaze-independent brain-computer interfaces (BCIs) are a possible communication channel for persons with paralysis. We investigated whether it is possible to use auditory stimuli to create a BCI for the Japanese Hiragana syllabary, which has 46 Hiragana characters. Additionally, we investigated whether training has an effect on accuracy despite the high number of different stimuli involved. Able-bodied participants (N=6) were asked to select 25 syllables (out of fifty possible choices) using a two-step procedure: first the consonant (ten choices) and then the vowel (five choices). This was repeated on three separate days. Additionally, a person with spinal cord injury (SCI) participated in the experiment. Four out of six healthy participants reached Hiragana syllable accuracies above 70% and the information transfer rate increased from 1.7 bits/min in the first session to 3.2 bits/min in the third session. The accuracy of the participant with SCI increased from 12% (0.2 bits/min) to 56% (2 bits/min) in session three. Reliable selections from a 10×5 matrix using auditory stimuli were possible and performance increased with training. We were able to show that auditory P300 BCIs can be used for communication with up to fifty symbols. This enables the use of auditory P300 BCI technology with a variety of applications.

  17. An Evaluation of Training with an Auditory P300 Brain-Computer Interface for the Japanese Hiragana Syllabary.

    Science.gov (United States)

    Halder, Sebastian; Takano, Kouji; Ora, Hiroki; Onishi, Akinari; Utsumi, Kota; Kansaku, Kenji

    2016-01-01

    Gaze-independent brain-computer interfaces (BCIs) are a possible communication channel for persons with paralysis. We investigated whether it is possible to use auditory stimuli to create a BCI for the Japanese Hiragana syllabary, which has 46 Hiragana characters. Additionally, we investigated whether training has an effect on accuracy despite the high number of different stimuli involved. Able-bodied participants (N = 6) were asked to select 25 syllables (out of fifty possible choices) using a two-step procedure: first the consonant (ten choices) and then the vowel (five choices). This was repeated on 3 separate days. Additionally, a person with spinal cord injury (SCI) participated in the experiment. Four out of six healthy participants reached Hiragana syllable accuracies above 70% and the information transfer rate increased from 1.7 bits/min in the first session to 3.2 bits/min in the third session. The accuracy of the participant with SCI increased from 12% (0.2 bits/min) to 56% (2 bits/min) in session three. Reliable selections from a 10 × 5 matrix using auditory stimuli were possible and performance increased with training. We were able to show that auditory P300 BCIs can be used for communication with up to fifty symbols. This enables the use of auditory P300 BCI technology with a variety of applications.
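
    The bits/min figures quoted in these records are conventionally computed with Wolpaw's information transfer rate formula, sketched below for a 50-symbol alphabet; the selection time assumed here is illustrative and not taken from the study.

```python
import math

def wolpaw_itr(n_classes, accuracy, seconds_per_selection):
    """Information transfer rate in bits/min using Wolpaw's formula."""
    if accuracy <= 0 or accuracy >= 1:
        # guard against log(0); perfect accuracy yields log2(N) bits per selection
        bits = math.log2(n_classes) if accuracy == 1 else 0.0
    else:
        bits = (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
    return bits * 60 / seconds_per_selection

# 50-symbol alphabet at 70% accuracy, assuming 60 s per Hiragana selection
print(round(wolpaw_itr(50, 0.70, 60.0), 2))   # about 3.1 bits/min under these assumptions
```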

  18. Brain bases for auditory stimulus-driven figure-ground segregation.

    Science.gov (United States)

    Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D

    2011-01-05

    Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.
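
    A figure-ground stimulus of the kind described here can be sketched as a sequence of random tone chords (the background) with a fixed set of frequency components repeating across consecutive chords (the figure). All parameters below (chord duration, component counts, frequency range) are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def sfg_stimulus(fs=44100, n_chords=40, chord_dur=0.05,
                 n_background=10, n_figure=4, figure_onset=20, figure_len=10,
                 seed=0):
    """Stochastic figure-ground tone cloud (illustrative parameters).

    Each chord contains random pure-tone components (the background);
    during the figure interval an additional fixed set of components
    repeats from chord to chord, which is what listeners segregate.
    """
    rng = np.random.default_rng(seed)
    freqs = np.geomspace(200, 7000, 120)            # candidate components
    figure_freqs = rng.choice(freqs, n_figure, replace=False)
    n_samp = int(fs * chord_dur)
    t = np.arange(n_samp) / fs
    ramp = np.hanning(n_samp)                       # avoid clicks at chord edges
    chords = []
    for k in range(n_chords):
        components = list(rng.choice(freqs, n_background, replace=False))
        if figure_onset <= k < figure_onset + figure_len:
            components += list(figure_freqs)
        chord = sum(np.sin(2 * np.pi * f * t) for f in components)
        chords.append(ramp * chord / len(components))
    return np.concatenate(chords)

stim = sfg_stimulus()
print(stim.shape)   # (88200,) = 40 chords x 50 ms at 44.1 kHz
```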

  19. Auditory attention activates peripheral visual cortex.

    Directory of Open Access Journals (Sweden)

    Anthony D Cate

    BACKGROUND: Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency. CONCLUSIONS/SIGNIFICANCE: Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

  20. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  1. Reduced neuronal activity in language-related regions after transcranial magnetic stimulation therapy for auditory verbal hallucinations.

    Science.gov (United States)

    Kindler, Jochen; Homan, Philipp; Jann, Kay; Federspiel, Andrea; Flury, Richard; Hauf, Martinus; Strik, Werner; Dierks, Thomas; Hubl, Daniela

    2013-03-15

    Transcranial magnetic stimulation (TMS) is a novel therapeutic approach, used in patients with pharmacoresistant auditory verbal hallucinations (AVH). To investigate the neurobiological effects of TMS on AVH, we measured cerebral blood flow with pseudo-continuous magnetic resonance-arterial spin labeling 20 ± 6 hours before and after TMS treatment. Thirty patients with schizophrenia or schizoaffective disorder were investigated. Fifteen patients received a 10-day TMS treatment to the left temporoparietal cortex, and 15 received the standard treatment. The stimulation location was chosen according to an individually determined language region, identified with a functional magnetic resonance imaging language paradigm, which localized the sensorimotor language area, area Spt (sylvian parietotemporal), as the target region. TMS-treated patients showed positive clinical effects, which were indicated by a reduction in AVH scores (p ≤ .001). Cerebral blood flow was significantly decreased in the primary auditory cortex (p ≤ .001), left Broca's area (p ≤ .001), and cingulate gyrus (p ≤ .001). In control subjects, neither positive clinical effects nor cerebral blood flow decreases were detected. The decrease in cerebral blood flow in the primary auditory cortex correlated with the decrease in AVH scores (p ≤ .001). TMS reverses hyperactivity of language regions involved in the emergence of AVH. Area Spt acts as a gateway to the hallucination-generating cerebral network. Successful therapy corresponded to decreased cerebral blood flow in the primary auditory cortex, supporting its crucial role in triggering AVH and contributing to the physical quality of the false perceptions. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  2. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates.

    Science.gov (United States)

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-07-20

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys.

  3. Music training for the development of auditory skills.

    Science.gov (United States)

    Kraus, Nina; Chandrasekaran, Bharath

    2010-08-01

    The effects of music training in relation to brain plasticity have caused excitement, evident from the popularity of books on this topic among scientists and the general public. Neuroscience research has shown that music training leads to changes throughout the auditory system that prime musicians for listening challenges beyond music processing. This effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness. Therefore, the role of music in shaping individual development deserves consideration.

  4. State-dependent changes in auditory sensory gating in different cortical areas in rats.

    Directory of Open Access Journals (Sweden)

    Renli Qi

    Sensory gating is a process in which the brain's response to a repetitive stimulus is attenuated; it is thought to contribute to information processing by enabling organisms to filter extraneous sensory inputs from the environment. To date, sensory gating has typically been used to determine whether brain function is impaired, such as in individuals with schizophrenia or addiction. In healthy subjects, sensory gating is sensitive to a subject's behavioral state, such as acute stress and attention. The cortical response to sensory stimulation decreases significantly during sleep; however, information processing continues throughout sleep, and an auditory evoked potential (AEP) can be elicited by sound. It is not known whether sensory gating changes during sleep. Sleep is a non-uniform process across the whole brain, with regional differences in neural activities. Thus, another question arises concerning whether sensory gating changes are uniform across different brain areas from waking to sleep. To address these questions, we used the sound stimuli of a conditioning-testing paradigm to examine sensory gating during waking, rapid eye movement (REM) sleep and non-REM (NREM) sleep in different cortical areas in rats. We demonstrated the following: 1. Auditory sensory gating was affected by vigilance state in the frontal and parietal areas but not in the occipital areas. 2. Auditory sensory gating decreased from waking to NREM sleep, but not to REM sleep, in the frontal and parietal areas. 3. The decreased sensory gating in the frontal and parietal areas during NREM sleep was the result of a significant increase in the test sound amplitude.
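
    Sensory gating in a conditioning-testing (paired-stimulus) paradigm is commonly quantified as the ratio of the response to the second (test) sound over the first (conditioning) sound. The sketch below computes such a T/C ratio from epoched data; the analysis windows, stimulus interval and peak measure are assumptions, since the record does not specify them.

```python
import numpy as np

def gating_ratio(epochs, fs, s1_window=(0.04, 0.08), s2_window=(0.54, 0.58)):
    """Test/conditioning (T/C) amplitude ratio for paired-stimulus gating.

    epochs: (n_trials, n_samples) array time-locked to the first sound,
            with the second sound assumed 500 ms later. Amplitude is the
            absolute peak of the averaged response in each window; smaller
            ratios indicate stronger gating. Windows are illustrative.
    """
    erp = epochs.mean(axis=0)
    def peak(window):
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        return np.abs(erp[i0:i1]).max()
    return peak(s2_window) / peak(s1_window)

# placeholder data: 100 trials, 1-s epochs sampled at 1000 Hz
epochs = np.random.randn(100, 1000)
print(gating_ratio(epochs, fs=1000))
```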

  5. Task-specific reorganization of the auditory cortex in deaf humans.

    Science.gov (United States)

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  6. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  7. Multivariate evaluation of brain function by measuring regional cerebral blood flow and event-related potentials

    Energy Technology Data Exchange (ETDEWEB)

    Koga, Yoshihiko; Mochida, Masahiko; Shutara, Yoshikazu; Nakagawa, Kazumi [Kyorin Univ., Mitaka, Tokyo (Japan). School of Medicine; Nagata, Ken

    1998-07-01

    To measure the effect of events on human cognitive function, the effects of odors on regional cerebral blood flow (rCBF) and P300 were evaluated during an auditory odd-ball task. PET showed an increase in rCBF in the right hemisphere of the brain in response to coffee aroma. rCBF was measured by PET in 9 right-handed healthy adult men, and P300 was measured by event-related potentials (ERP) in 20 right-handed healthy adults of each sex. The ERP data showed a difference in P300 amplitude between men and women, and a tendency, for all odors except lavender oil, for women to have higher P300 amplitudes than men. These results suggest the presence of effects on cognitive function mediated through emotional processes. Next, the relationship between rCBF and ERP was evaluated. The subjects were 9 right-handed healthy adults (average age: 25.6±3.4 years). rCBF measured by PET and P300 amplitude measured by ERP were recorded simultaneously during an auditory odd-ball task using the tone-burst method (2 kHz low-probability target stimuli and 1 kHz high-probability non-target stimuli). The rCBF value was highest in the transverse gyrus of Heschl and lowest in the piriform cortex among 24 regions of interest (ROIs) from both sides. P300 peak latencies were almost the same across ROIs. The brain waves from Cz and Pz were similar, and the average amplitude was highest at Pz. We found high correlations between P300 amplitude and rCBF in the right piriform cortex (Fz) and in the right (Fz, Cz) and left (Cz, Pz) transverse gyri of Heschl. (K.H.)

  8. Multivariate evaluation of brain function by measuring regional cerebral blood flow and event-related potentials

    International Nuclear Information System (INIS)

    Koga, Yoshihiko; Mochida, Masahiko; Shutara, Yoshikazu; Nakagawa, Kazumi; Nagata, Ken

    1998-01-01

    To measure the effect of events on human cognitive function, the effects of odors on regional cerebral blood flow (rCBF) and P300 were evaluated during an auditory odd-ball task. PET showed an increase in rCBF in the right hemisphere of the brain in response to coffee aroma. rCBF was measured by PET in 9 right-handed healthy adult men, and P300 was measured by event-related potentials (ERP) in 20 right-handed healthy adults of each sex. The ERP data showed a difference in P300 amplitude between men and women, and a tendency, for all odors except lavender oil, for women to have higher P300 amplitudes than men. These results suggest the presence of effects on cognitive function mediated through emotional processes. Next, the relationship between rCBF and ERP was evaluated. The subjects were 9 right-handed healthy adults (average age: 25.6±3.4 years). rCBF measured by PET and P300 amplitude measured by ERP were recorded simultaneously during an auditory odd-ball task using the tone-burst method (2 kHz low-probability target stimuli and 1 kHz high-probability non-target stimuli). The rCBF value was highest in the transverse gyrus of Heschl and lowest in the piriform cortex among 24 regions of interest (ROIs) from both sides. P300 peak latencies were almost the same across ROIs. The brain waves from Cz and Pz were similar, and the average amplitude was highest at Pz. We found high correlations between P300 amplitude and rCBF in the right piriform cortex (Fz) and in the right (Fz, Cz) and left (Cz, Pz) transverse gyri of Heschl. (K.H.)

  9. Dynamic links between theta executive functions and alpha storage buffers in auditory and visual working memory.

    Science.gov (United States)

    Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko

    2010-05-01

    Working memory (WM) tasks require not only distinct functions such as a storage buffer and central executive functions, but also coordination among these functions. Neuroimaging studies have revealed the contributions of different brain regions to different functional roles in WM tasks; however, little is known about the neural mechanism governing their coordination. Electroencephalographic (EEG) rhythms, especially theta and alpha, are known to appear over distributed brain regions during WM tasks, but the rhythms associated with task-relevant regional coupling have not been obtained thus far. In this study, we conducted time-frequency analyses for EEG data in WM tasks that include manipulation periods and memory storage buffer periods. We used both auditory WM tasks and visual WM tasks. The results successfully demonstrated function-specific EEG activities. The frontal theta amplitudes increased during the manipulation periods of both tasks. The alpha amplitudes increased during not only the manipulation but also the maintenance periods in the temporal area for the auditory WM and the parietal area for the visual WM. The phase synchronization analyses indicated that, under the relevant task conditions, the temporal and parietal regions show enhanced phase synchronization in the theta bands with the frontal region, whereas phase synchronization between theta and alpha is significantly enhanced only within the individual areas. Our results suggest that WM task-relevant brain regions are coordinated by distant theta synchronization for central executive functions, by local alpha synchronization for the memory storage buffer, and by theta-alpha coupling for inter-functional integration.
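
    The phase synchronization analyses described here can be illustrated with a band-limited phase-locking value (PLV) between two channels, obtained from the Hilbert transform of band-pass-filtered signals. Filter order, band edges and the simulated data below are illustrative assumptions, not the settings of the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band):
    """Phase-locking value between two signals in a given frequency band.

    x, y: 1-D arrays of equal length; band: (low, high) in Hz.
    Returns a value in [0, 1]; 1 means a perfectly constant phase lag.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# placeholder: theta-band (4-8 Hz) synchronization between two simulated channels
fs = 500
t = np.arange(0, 10, 1 / fs)
frontal = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
parietal = np.sin(2 * np.pi * 6 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(plv(frontal, parietal, fs, (4, 8)))
```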

  10. Hallucination- and speech-specific hypercoupling in frontotemporal auditory and language networks in schizophrenia using combined task-based fMRI data: An fBIRN study.

    Science.gov (United States)

    Lavigne, Katie M; Woodward, Todd S

    2018-04-01

    Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivity for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.
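
    Group fMRI-CPCA is a published method; the sketch below captures only its core idea under simplifying assumptions: the BOLD data are first constrained, by least squares, to the portion predictable from a task-timing design matrix, and components are then extracted from that predicted portion by singular value decomposition. Matrix sizes and data are placeholders, and the published implementation involves additional steps (e.g., subject-wise finite impulse response bases and predictor-weight analysis) not shown here.

```python
import numpy as np

def constrained_pca(Z, G, n_components=3):
    """Core idea of constrained PCA for task fMRI (simplified sketch).

    Z: (timepoints x voxels) BOLD matrix; G: (timepoints x predictors)
    design matrix built from task timing. BOLD variance is first
    constrained to what the design can predict, then components are
    extracted from that predicted part.
    """
    # least-squares prediction of Z from the task design
    beta, *_ = np.linalg.lstsq(G, Z, rcond=None)
    GC = G @ beta                                      # task-predictable portion of Z
    U, s, Vt = np.linalg.svd(GC, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]    # component time courses
    loadings = Vt[:n_components]                       # voxel loadings
    return scores, loadings

# toy dimensions: 200 scans, 500 voxels, 8 task regressors
Z = np.random.randn(200, 500)
G = np.random.randn(200, 8)
scores, loadings = constrained_pca(Z, G)
print(scores.shape, loadings.shape)   # (200, 3) (3, 500)
```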

  11. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS.

  12. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey.

    Science.gov (United States)

    Scott, Brian H; Leccese, Paul A; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Mullarkey, Matthew P; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C

    2017-01-01

    In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  13. Blast-Induced Tinnitus and Elevated Central Auditory and Limbic Activity in Rats: A Manganese-Enhanced MRI and Behavioral Study.

    Science.gov (United States)

    Ouyang, Jessica; Pace, Edward; Lepczyk, Laura; Kaufman, Michael; Zhang, Jessica; Perrine, Shane A; Zhang, Jinsheng

    2017-07-07

    Blast-induced tinnitus is the number one service-connected disability that currently affects military personnel and veterans. To elucidate its underlying mechanisms, we subjected 13 Sprague Dawley adult rats to unilateral 14 psi blast exposure to induce tinnitus and measured auditory and limbic brain activity using manganese-enhanced MRI (MEMRI). Tinnitus was evaluated with a gap detection acoustic startle reflex paradigm, while hearing status was assessed with prepulse inhibition (PPI) and auditory brainstem responses (ABRs). Both anxiety and cognitive functioning were assessed using elevated plus maze and Morris water maze, respectively. Five weeks after blast exposure, 8 of the 13 blasted rats exhibited chronic tinnitus. While acoustic PPI remained intact and ABR thresholds recovered, the ABR wave P1-N1 amplitude reduction persisted in all blast-exposed rats. No differences in spatial cognition were observed, but blasted rats as a whole exhibited increased anxiety. MEMRI data revealed a bilateral increase in activity along the auditory pathway and in certain limbic regions of rats with tinnitus compared to age-matched controls. Taken together, our data suggest that while blast-induced tinnitus may play a role in auditory and limbic hyperactivity, the non-auditory effects of blast and potential traumatic brain injury may also exert an effect.

  14. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany ePlakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  15. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  16. Fluoxetine pretreatment promotes neuronal survival and maturation after auditory fear conditioning in the rat amygdala.

    Directory of Open Access Journals (Sweden)

    Lizhu Jiang

    Full Text Available The amygdala is a critical brain region for auditory fear conditioning, which is a stressful condition for experimental rats. Adult neurogenesis in the dentate gyrus (DG) of the hippocampus, known to be sensitive to behavioral stress and treatment of the antidepressant fluoxetine (FLX), is involved in the formation of hippocampus-dependent memories. Here, we investigated whether neurogenesis also occurs in the amygdala and contributes to auditory fear memory. In rats showing persistent auditory fear memory following fear conditioning, we found that the survival of new-born cells and the number of new-born cells that differentiated into mature neurons labeled by BrdU and NeuN decreased in the amygdala, but the number of cells that developed into astrocytes labeled by BrdU and GFAP increased. Chronic pretreatment with FLX partially rescued the reduction in neurogenesis in the amygdala and slightly suppressed the maintenance of the long-lasting auditory fear memory 30 days after the fear conditioning. The present results suggest that adult neurogenesis in the amygdala is sensitive to antidepressant treatment and may weaken long-lasting auditory fear memory.

  17. Insult-induced adaptive plasticity of the auditory system

    Directory of Open Access Journals (Sweden)

    Joshua R Gold

    2014-05-01

    Full Text Available The brain displays a remarkable capacity for both widespread and region-specific modifications in response to environmental challenges, with adaptive processes bringing about the reweighting of connections in neural networks putatively required for optimising performance and behaviour. As an avenue for investigation, studies centred around changes in the mammalian auditory system, extending from the brainstem to the cortex, have revealed a plethora of mechanisms that operate in the context of sensory disruption after insult, be it lesion-, noise trauma, drug-, or age-related. Of particular interest in recent work are those aspects of auditory processing which, after sensory disruption, change at multiple – if not all – levels of the auditory hierarchy. These include changes in excitatory, inhibitory and neuromodulatory networks, consistent with theories of homeostatic plasticity; functional alterations in gene expression and in protein levels; as well as broader network processing effects with cognitive and behavioural implications. Nevertheless, there abounds substantial debate regarding which of these processes may only be sequelae of the original insult, and which may, in fact, be maladaptively compelling further degradation of the organism’s competence to cope with its disrupted sensory context. In this review, we aim to examine how the mammalian auditory system responds in the wake of particular insults, and to disambiguate how the changes that develop might underlie a correlated class of phantom disorders, including tinnitus and hyperacusis, which putatively are brought about through maladaptive neuroplastic disruptions to auditory networks governing the spatial and temporal processing of acoustic sensory information.

  18. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during...
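
    The neural-pattern-similarity analysis mentioned above rests on correlating multivoxel activity patterns across conditions. The sketch below shows that core computation in isolation; the array names, shapes, and simulated data are assumptions, and the published analysis was a whole-brain multivoxel framework rather than this single-region illustration.

```python
# Sketch: mean-pattern correlation between two conditions within one region.
# Assumed inputs: trial-by-voxel response matrices for each condition.
import numpy as np

def pattern_similarity(patterns_a, patterns_b):
    """Pearson correlation between condition-average patterns (trials x voxels)."""
    a = patterns_a.mean(axis=0)   # average pattern for condition A
    b = patterns_b.mean(axis=0)   # average pattern for condition B
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Hypothetical usage: 20 trials x 500 voxels per condition
rng = np.random.default_rng(0)
distorted = rng.standard_normal((20, 500))
unaltered = distorted + rng.standard_normal((20, 500))  # partly shared pattern
print(f"pattern similarity: {pattern_similarity(distorted, unaltered):.2f}")
```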

  19. Abnormal neural activities of directional brain networks in patients with long-term bilateral hearing loss.

    Science.gov (United States)

    Xu, Long-Chun; Zhang, Gang; Zou, Yue; Zhang, Min-Feng; Zhang, Dong-Sheng; Ma, Hua; Zhao, Wen-Bo; Zhang, Guang-Yu

    2017-10-13

    The objective of this study was to inform rehabilitation of hearing impairment by investigating changes in the neural activity of directional brain networks in patients with long-term bilateral hearing loss. First, we administered neuropsychological tests to 21 subjects (11 patients with long-term bilateral hearing loss and 10 subjects with normal hearing); these tests revealed significant differences between the deaf group and the controls. We then constructed an individual-specific virtual brain from each participant's functional magnetic resonance data using effective connectivity and multivariate regression methods. We applied a stimulating signal to the primary auditory cortices of the virtual brain and observed the resulting brain region activations. Patients with long-term bilateral hearing loss showed weaker activations in the auditory and language networks but enhanced neural activity in the default mode network compared with normally hearing subjects; the right cerebral hemisphere showed more changes than the left. Additionally, weaker neural activity in the primary auditory cortices was strongly associated with poorer cognitive performance. Finally, causal analysis revealed several interactional circuits among activated brain regions, and these interregional causal interactions implied that abnormal neural activity in the directional brain networks of deaf patients impacts cognitive function.

  20. Neural biomarkers for dyslexia, ADHD and ADD in the auditory cortex of children

    Directory of Open Access Journals (Sweden)

    Bettina Serrallach

    2016-07-01

    Full Text Available Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N=147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method allowed not only a clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, after three and a half years of training the observed interhemispheric asynchronies were reduced by about 2/3, thus suggesting a strong beneficial influence of music experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.
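
    The reported 89-98% accuracies come from classifying disorder subgroups on auditory-cortex measures. Below is a minimal, hypothetical sketch of that kind of multi-feature classification using cross-validated logistic regression; the feature names, group means, and simulated data are assumptions and do not reproduce the study's classifier or feature set.

```python
# Sketch: cross-validated classification of three hypothetical subgroups from
# two auditory-cortex-derived features. All numbers below are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_per_group = 40
features, groups = [], []
# Hypothetical features: P1 interhemispheric asynchrony (ms), right AC morphology (a.u.)
for label, (asym_mean, morph_mean) in enumerate([(12, 1.0), (25, 0.9), (35, 1.1)]):
    asym = rng.normal(asym_mean, 6, n_per_group)
    morph = rng.normal(morph_mean, 0.1, n_per_group)
    features.append(np.column_stack([asym, morph]))
    groups.append(np.full(n_per_group, label))
X = np.vstack(features)
y = np.concatenate(groups)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```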

  1. Right prefrontal rTMS treatment for refractory auditory command hallucinations - a neuroSPECT assisted case study.

    Science.gov (United States)

    Schreiber, Shaul; Dannon, Pinhas N; Goshen, Elinor; Amiaz, Revital; Zwas, Tzila S; Grunhaus, Leon

    2002-11-30

    Auditory command hallucinations probably arise from the patient's failure to monitor his/her own 'inner speech', which is connected to activation of speech perception areas of the left cerebral cortex and to various degrees of dysfunction of cortical circuits involved in schizophrenia as supported by functional brain imaging. We hypothesized that rapid transcranial magnetic stimulation (rTMS), by increasing cortical activation of the right prefrontal brain region, would bring about a reduction of the hallucinations. We report our first schizophrenic patient affected with refractory command hallucinations treated with 10 Hz rTMS. Treatment was performed over the right dorsolateral prefrontal cortex, with 1200 magnetic stimulations administered daily for 20 days at 90% motor threshold. Regional cerebral blood flow changes were monitored with neuroSPECT. Clinical evaluation and scores on the Positive and Negative Symptoms Scale and the Brief Psychiatric Rating Scale demonstrated a global improvement in the patient's condition, with no change in the intensity and frequency of the hallucinations. NeuroSPECT performed at intervals during and after treatment indicated a general improvement in cerebral perfusion. We conclude that right prefrontal rTMS may induce a general clinical improvement of schizophrenic brain function, without directly influencing the mechanism involved in auditory command hallucinations.

  2. Effects of an NMDA antagonist on the auditory mismatch negativity response to transcranial direct current stimulation.

    Science.gov (United States)

    Impey, Danielle; de la Salle, Sara; Baddeley, Ashley; Knott, Verner

    2017-05-01

    Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a weak constant current to alter cortical excitability and activity temporarily. tDCS-induced increases in neuronal excitability and performance improvements have been observed following anodal stimulation of brain regions associated with visual and motor functions, but relatively little research has been conducted with respect to auditory processing. Recently, pilot study results indicate that anodal tDCS can increase auditory deviance detection, whereas cathodal tDCS decreases auditory processing, as measured by a brain-based event-related potential (ERP), mismatch negativity (MMN). As evidence has shown that tDCS lasting effects may be dependent on N-methyl-D-aspartate (NMDA) receptor activity, the current study investigated the use of dextromethorphan (DMO), an NMDA antagonist, to assess possible modulation of tDCS's effects on both MMN and working memory performance. The study, conducted in 12 healthy volunteers, involved four laboratory test sessions within a randomised, placebo and sham-controlled crossover design that compared pre- and post-anodal tDCS over the auditory cortex (2 mA for 20 minutes to excite cortical activity temporarily and locally) and sham stimulation (i.e. device is turned off) during both DMO (50 mL) and placebo administration. Anodal tDCS increased MMN amplitudes with placebo administration. Significant increases were not seen with sham stimulation or with anodal stimulation during DMO administration. With sham stimulation (i.e. no stimulation), DMO decreased MMN amplitudes. Findings from this study contribute to the understanding of underlying neurobiological mechanisms mediating tDCS sensory and memory improvements.
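
    MMN amplitude is conventionally taken from the deviant-minus-standard difference wave. A minimal sketch of that computation follows; the epoch arrays, sampling rate, and 100-200 ms window are assumptions, not the parameters used in this study.

```python
# Sketch: MMN amplitude as the mean of the deviant-minus-standard difference
# wave within a latency window. Epochs are (n_trials, n_samples) arrays.
import numpy as np

def mmn_amplitude(standard_epochs, deviant_epochs, times, win=(0.100, 0.200)):
    """Mean difference-wave amplitude (deviant - standard) in a time window (s)."""
    diff_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (times >= win[0]) & (times <= win[1])
    return float(diff_wave[mask].mean())

# Hypothetical usage with simulated epochs sampled at 500 Hz
fs = 500
times = np.arange(-0.1, 0.4, 1 / fs)
rng = np.random.default_rng(1)
standards = rng.normal(0, 1, (200, times.size))
deviants = rng.normal(0, 1, (60, times.size)) - 2.0 * np.exp(
    -((times - 0.15) ** 2) / (2 * 0.02 ** 2))  # simulated negativity near 150 ms
print(f"MMN amplitude: {mmn_amplitude(standards, deviants, times):.2f} (a.u.)")
```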

  3. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller. We investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. In addition, shorter latencies, lower amplitudes, and a shift in the temporal and spatial distribution of discriminative information were observed for the Combined-Speller; the reasons for these differences warrant investigation in future studies. For the more innovative and demanding Parallel-Speller, in which the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel, and it offers new insight into truly multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.
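
    Speller performance figures like those above (87.7% accuracy at 1.65 symbols per minute) are often converted into an information transfer rate using the Wolpaw formula. The sketch below applies it under the assumption of a 36-symbol alphabet, which is consistent with the reported chance level of <3% but is not stated in the abstract.

```python
# Sketch: Wolpaw information transfer rate (ITR) from accuracy and speed.
import math

def bits_per_selection(accuracy, n_symbols):
    """Wolpaw bits per selection for a given accuracy and alphabet size."""
    p, n = accuracy, n_symbols
    if p <= 0 or p >= 1:
        return math.log2(n) if p >= 1 else 0.0
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

accuracy = 0.877           # reported online accuracy
symbols_per_minute = 1.65  # reported spelling speed
bits = bits_per_selection(accuracy, n_symbols=36)  # 36-symbol matrix assumed
print(f"{bits:.2f} bits/selection, ~{bits * symbols_per_minute:.2f} bits/min")
```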

  4. Delta, theta, beta, and gamma brain oscillations index levels of auditory sentence processing.

    Science.gov (United States)

    Mai, Guangting; Minett, James W; Wang, William S-Y

    2016-06-01

    A growing number of studies indicate that multiple ranges of brain oscillations, especially the delta (δ), theta (θ), beta (β), and gamma (γ) bands, are involved in auditory sentence processing. It is not clear, however, how these oscillations relate to functional processing at different linguistic hierarchical levels. Using scalp electroencephalography (EEG), the current study tested the hypothesis that phonological and the higher-level linguistic (semantic/syntactic) organizations during auditory sentence processing are indexed by distinct EEG signatures derived from the δ, θ, β, and γ oscillations. We analyzed specific EEG signatures while subjects listened to Mandarin speech stimuli in three different conditions in order to dissociate phonological and semantic/syntactic processing: (1) sentences comprising valid disyllabic words assembled in a valid syntactic structure (real-word condition); (2) utterances with morphologically valid syllables, but not constituting valid disyllabic words (pseudo-word condition); and (3) backward versions of the real-word and pseudo-word conditions. We tested four signatures: band power, EEG-acoustic entrainment (EAE), cross-frequency coupling (CFC), and inter-electrode renormalized partial directed coherence (rPDC). The results show significant effects of band power and EAE of δ and θ oscillations for phonological, rather than semantic/syntactic processing, indicating the importance of tracking δ- and θ-rate phonetic patterns during phonological analysis. We also found significant β-related effects, suggesting tracking of EEG to the acoustic stimulus (high-β EAE), memory processing (θ-low-β CFC), and auditory-motor interactions (20-Hz rPDC) during phonological analysis. For semantic/syntactic processing, we obtained a significant effect of γ power, suggesting lexical memory retrieval or processing grammatical word categories. Based on these findings, we confirm that scalp EEG signatures relevant to δ, θ, β, and γ oscillations can index phonological and semantic/syntactic organizations during auditory sentence processing.
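
    Of the four signatures tested, cross-frequency coupling is perhaps the least standard to compute. The sketch below illustrates one common phase-amplitude coupling measure, a mean-vector-length modulation index between a low-frequency phase and a higher-frequency amplitude envelope; the band edges, the normalization by mean amplitude, and the simulated signal are assumptions and not the study's exact method.

```python
# Sketch: phase-amplitude coupling (mean-vector-length modulation index)
# between a theta-band phase and a low-beta-band amplitude in one signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def phase_amplitude_coupling(x, fs, phase_band=(4, 8), amp_band=(13, 20)):
    """Mean-vector-length coupling index, normalized by the mean amplitude."""
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
    return float(np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean())

# Hypothetical usage: 10 s of simulated EEG at 250 Hz with built-in coupling
fs = 250
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
beta = (1 + 0.5 * theta) * np.sin(2 * np.pi * 16 * t)  # beta amplitude rides on theta phase
eeg = theta + beta + 0.2 * np.random.randn(t.size)
print(f"theta-beta coupling index: {phase_amplitude_coupling(eeg, fs):.2f}")
```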

  5. Regional brain morphometry predicts memory rehabilitation outcome after traumatic brain injury

    Directory of Open Access Journals (Sweden)

    Gary E Strangman

    2010-10-01

    Full Text Available Cognitive deficits following traumatic brain injury (TBI) commonly include difficulties with memory, attention, and executive dysfunction. These deficits are amenable to cognitive rehabilitation, but optimally selecting rehabilitation programs for individual patients remains a challenge. Recent methods for quantifying regional brain morphometry allow for automated quantification of tissue volumes in numerous distinct brain structures. We hypothesized that such quantitative structural information could help identify individuals more or less likely to benefit from memory rehabilitation. Fifty individuals with TBI of all severities who reported having memory difficulties first underwent structural MRI scanning. They then participated in a 12-session memory rehabilitation program emphasizing internal memory strategies (I-MEMS). Primary outcome measures (HVLT, RBMT) were collected at the time of the MRI scan, immediately following therapy, and again at one month post-therapy. Regional brain volumes were used to predict outcome, adjusting for standard predictors (e.g., injury severity, age, education, pretest scores). We identified several brain regions that provided significant predictions of rehabilitation outcome, including the volume of the hippocampus, the lateral prefrontal cortex, the thalamus, and several subregions of the cingulate cortex. The prediction ranges of regional brain volumes were in some cases nearly equal in magnitude to prediction ranges provided by pretest scores on the outcome variable. We conclude that specific cerebral networks including these regions may contribute to learning during I-MEMS rehabilitation, and suggest that morphometric measures may provide substantial predictive value for rehabilitation outcome in other cognitive interventions as well.
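
    The prediction analysis described above amounts to regressing the post-therapy outcome on a regional volume while adjusting for standard predictors. A minimal sketch, with hypothetical variable names and simulated data, is shown below; the study's actual model specification may differ.

```python
# Sketch: outcome regressed on a regional volume plus standard covariates.
import numpy as np

rng = np.random.default_rng(42)
n = 50
age = rng.normal(40, 12, n)
education = rng.normal(14, 2, n)
pretest = rng.normal(20, 5, n)
hippocampal_vol = rng.normal(7.5, 0.8, n)               # hypothetical volume (cm^3)
outcome = 0.6 * pretest + 1.5 * hippocampal_vol + rng.normal(0, 3, n)

# Design matrix: intercept + covariates + regional volume of interest
X = np.column_stack([np.ones(n), age, education, pretest, hippocampal_vol])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"adjusted effect of hippocampal volume: {beta[-1]:.2f} points per cm^3")
```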

  6. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen eKaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  7. An anatomical and functional topography of human auditory cortical areas

    Directory of Open Access Journals (Sweden)

    Michelle eMoerel

    2014-07-01

    Full Text Available While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that - whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis - the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e. myelination) as well as of functional properties (e.g. broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  8. Auditory sensory ("echoic") memory dysfunction in schizophrenia.

    Science.gov (United States)

    Strous, R D; Cowan, N; Ritter, W; Javitt, D C

    1995-10-01

    Studies of working memory dysfunction in schizophrenia have focused largely on prefrontal components. This study investigated the integrity of auditory sensory ("echoic") memory, a component that shows little dependence on prefrontal functioning. Echoic memory was investigated in 20 schizophrenic subjects and 20 age- and IQ-matched normal comparison subjects with the use of nondelayed and delayed tone matching. Schizophrenic subjects were markedly impaired in their ability to match two tones after an extremely brief delay between them (300 msec) but were unimpaired when there was no delay between tones. Working memory dysfunction in schizophrenia affects brain regions outside the prefrontal cortex as well as within.

  9. Potential use of MEG to understand abnormalities in auditory function in clinical populations

    Directory of Open Access Journals (Sweden)

    Eric eLarson

    2014-03-01

    Full Text Available Magnetoencephalography (MEG) provides a direct, non-invasive view of neural activity with millisecond temporal precision. Recent developments in MEG analysis allow for improved source localization and mapping of connectivity between brain regions, expanding the possibilities for using MEG as a diagnostic tool. In this paper, we first describe inverse imaging methods (e.g., minimum-norm estimation) and functional connectivity measures, and how they can provide insights into cortical processing. We then offer a perspective on how these techniques could be used to understand and evaluate auditory pathologies that often manifest during development. Here we focus specifically on how MEG inverse imaging, by providing anatomically-based interpretation of neural activity, may allow us to test which aspects of cortical processing play a role in (central) auditory processing disorder ([C]APD). Appropriately combining auditory paradigms with MEG analysis could eventually prove useful for a hypothesis-driven understanding and diagnosis of (C)APD or other disorders, as well as the evaluation of the effectiveness of intervention strategies.

  10. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    gain modulation is mediated primarily through direct projections and they point to future investigations of the differential roles of the direct and indirect projections in corticofugal modulation. In summary, our imaging findings demonstrate the large-scale descending influences, from both the auditory and visual cortices, on sound processing in different IC subdivisions. They can guide future studies on the coordinated activity across multiple regions of the auditory network, and its dysfunctions. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

    Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems, and numerous studies have examined them; the auditory system is affected as well. The aim of this article was to review research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library and SID search engines with the "auditory" and "omega-3" keywords and read textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can have harmful effects on fetal and infant growth and on the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.

  12. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    Science.gov (United States)

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.

  13. Normalized regional brain atrophy measurements in multiple sclerosis

    International Nuclear Information System (INIS)

    Zivadinov, Robert; Locatelli, Laura; Stival, Barbara; Bratina, Alessio; Nasuelli, Davide; Zorzon, Marino; Grop, Attilio; Brnabic-Razmilic, Ozana

    2003-01-01

    There is still a controversy regarding the best regional brain atrophy measurements in multiple sclerosis (MS) studies. The aim of this study was to establish whether, in a cross-sectional study, the normalized measurements of regional brain atrophy correlate better with the MRI-defined regional brain lesions than the absolute measurements of regional brain atrophy. We assessed 45 patients with clinically definite relapsing-remitting (RR) MS (median disease duration 12 years), and measured T1-lesion load (LL) and T2-LL of frontal lobes and pons, using a reproducible semi-automated technique. The regional brain parenchymal volume (RBPV) of frontal lobes and pons was obtained by use of a computerized interactive program, which incorporates semi-automated and automated segmentation processes. A normalized measurement, the regional brain parenchymal fraction (RBPF), was calculated as the ratio of RBPV to the total volume of the parenchyma and the cerebrospinal fluid (CSF) in the frontal lobes and in the region of the pons. The total regional brain volume fraction (TRBVF) was obtained after we had corrected for the total volume of the parenchyma and the CSF in the frontal lobes and in the region of the pons for the total intracranial volume. The mean coefficient of variation (CV) for RBPF of the pons was 1% for intra-observer reproducibility and 1.4% for inter-observer reproducibility. Generally, the normalized measurements of regional brain atrophy correlated with regional brain volumes and disability better than did the absolute measurements. RBPF and TRBVF correlated with T2-LL of the pons (r=-0.37, P=0.011, and r= -0.40, P=0.0005 respectively) and with T1-LL of the pons (r=-0.27, P=0.046, and r=-0.31, P=0.04, respectively), whereas RBPV did not (r=-0.18, P = NS). T1-LL of the frontal lobes was related to RBPF (r=-0.32, P=0.033) and TRBVF (r=-0.29, P=0.05), but not to RBPV (R=-0.27, P= NS). There was only a trend of correlation between T2-LL of the frontal lobes and
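
    Under one reading of the definitions given in the abstract, RBPF and TRBVF can be computed as simple volume ratios. The sketch below encodes that reading; the example volumes are hypothetical and the segmentation pipeline that produces them is not reproduced.

```python
# Sketch: normalized regional atrophy measures as volume ratios (one reading
# of the abstract's definitions; all example volumes are hypothetical, in mL).
def rbpf(regional_parenchyma_ml, regional_csf_ml):
    """RBPF = regional parenchymal volume / (regional parenchyma + regional CSF)."""
    return regional_parenchyma_ml / (regional_parenchyma_ml + regional_csf_ml)

def trbvf(regional_parenchyma_ml, regional_csf_ml, intracranial_volume_ml):
    """TRBVF = (regional parenchyma + regional CSF) / total intracranial volume."""
    return (regional_parenchyma_ml + regional_csf_ml) / intracranial_volume_ml

frontal_parenchyma, frontal_csf, icv = 330.0, 45.0, 1450.0   # hypothetical values
print(f"frontal RBPF:  {rbpf(frontal_parenchyma, frontal_csf):.3f}")
print(f"frontal TRBVF: {trbvf(frontal_parenchyma, frontal_csf, icv):.3f}")
```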

  14. Cross-modal processing in auditory and visual working memory.

    Science.gov (United States)

    Suchan, Boris; Linnewerth, Britta; Köster, Odo; Daum, Irene; Schmid, Gebhard

    2006-02-01

    This study aimed to further explore processing of auditory and visual stimuli in working memory. Smith and Jonides (1997) [Smith, E.E., Jonides, J., 1997. Working memory: A view from neuroimaging. Cogn. Psychol. 33, 5-42] described a modified working memory model in which visual input is automatically transformed into a phonological code. To study this process, auditory and the corresponding visual stimuli were presented in a variant of the 2-back task which involved changes from the auditory to the visual modality and vice versa. Brain activation patterns underlying visual and auditory processing as well as transformation mechanisms were analyzed. Results yielded a significant activation in the left primary auditory cortex associated with transformation of visual into auditory information which reflects the matching and recoding of a stored item and its modality. This finding yields empirical evidence for a transformation of visual input into a phonological code, with the auditory cortex as the neural correlate of the recoding process in working memory.

  15. Resting-state brain networks revealed by granger causal connectivity in frogs.

    Science.gov (United States)

    Xue, Fei; Fang, Guangzhan; Yue, Xizi; Zhao, Ermi; Brauth, Steven E; Tang, Yezhong

    2016-10-15

    Resting-state networks (RSNs) refer to the spontaneous brain activity generated under resting conditions, which maintain the dynamic connectivity of functional brain networks for automatic perception or higher order cognitive functions. Here, Granger causal connectivity analysis (GCCA) was used to explore brain RSNs in the music frog (Babina daunchina) during different behavioral activity phases. The results reveal that a causal network in the frog brain can be identified during the resting state which reflects both brain lateralization and sexual dimorphism. Specifically (1) ascending causal connections from the left mesencephalon to both sides of the telencephalon are significantly higher than those from the right mesencephalon, while the right telencephalon gives rise to the strongest efferent projections among all brain regions; (2) causal connections from the left mesencephalon in females are significantly higher than those in males and (3) these connections are similar during both the high and low behavioral activity phases in this species although almost all electroencephalograph (EEG) spectral bands showed higher power in the high activity phase for all nodes. The functional features of this network match important characteristics of auditory perception in this species. Thus we propose that this causal network maintains auditory perception during the resting state for unexpected auditory inputs as resting-state networks do in other species. These results are also consistent with the idea that females are more sensitive to auditory stimuli than males during the reproductive season. In addition, these results imply that even when not behaviorally active, the frogs remain vigilant for detecting external stimuli. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
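
    A minimal sketch of a single pairwise Granger-causality test of the kind underlying the GCCA described above, applied to two hypothetical node time series (e.g., left mesencephalon driving left telencephalon). Real GCCA analyses cover many node pairs, select the model order, and correct for multiple comparisons; none of that is reproduced here.

```python
# Sketch: pairwise Granger-causality test on simulated node time series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
n = 1000
source = rng.standard_normal(n)                       # e.g., left mesencephalon
target = np.zeros(n)                                  # e.g., left telencephalon
for t in range(2, n):                                 # target driven by lagged source
    target[t] = 0.5 * target[t - 1] + 0.4 * source[t - 2] + 0.5 * rng.standard_normal()

# Column order matters: the test asks whether column 2 Granger-causes column 1
data = np.column_stack([target, source])
results = grangercausalitytests(data, maxlag=3)
for lag, (tests, _) in results.items():
    f_stat, p_value, _, _ = tests["ssr_ftest"]
    print(f"lag {lag}: F = {f_stat:.1f}, p = {p_value:.3g}")
```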

  16. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena that can be directly caused by acute stroke; they have mostly been described after lesions of the brain stem and only very rarely after cortical strokes. The purpose of this study was to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have post-cortical-stroke auditory hallucinations, all occurring after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients, and the phenomenon may be completely reversible after a couple of months.

  17. Age- and Brain Region-Specific Differences in Mitochondrial ...

    Science.gov (United States)

    Mitochondria are central regulators of energy homeostasis and play a pivotal role in mechanisms of cellular senescence. The objective of the present study was to evaluate mitochondrial bioenergetic parameters in five brain regions [brainstem (BS), frontal cortex (FC), cerebellum (CER), striatum (STR), hippocampus (HIP)] of four diverse age groups [1 Month (young), 4 Month (adult), 12 Month (middle-aged), 24 Month (old age)] to understand age-related differences in selected brain regions and their contribution to age-related chemical sensitivity. Mitochondrial bioenergetic parameters and enzyme activity were measured under identical conditions across multiple age groups and brain regions in Brown Norway rats (n = 5). The results indicate age- and brain region-specific patterns in mitochondrial functional endpoints. For example, an age-specific decline in ATP synthesis (State III respiration) was observed in BS and HIP. Similarly, the maximal respiratory capacities (State V1 and V2) showed age-specific declines in all brain regions examined (young > adult > middle-aged > old age). Amongst all regions, HIP had the greatest change in mitochondrial bioenergetics, showing declines in the 4, 12 and 24 Month age groups. Activities of mitochondrial pyruvate dehydrogenase complex (PDHC) and electron transport chain (ETC) complexes I, II, and IV enzymes were also age- and brain-region specific. In general, changes associated with age were more pronounced, with

  18. Self vs. other: neural correlates underlying agent identification based on unimodal auditory information as revealed by electrotomography (sLORETA).

    Science.gov (United States)

    Justen, C; Herbert, C; Werner, K; Raab, M

    2014-02-14

    Recent neuroscientific studies have identified activity changes in an extensive cerebral network consisting of medial prefrontal cortex, precuneus, temporo-parietal junction, and temporal pole during the perception and identification of self- and other-generated stimuli. Because this network is supposed to be engaged in tasks which require agent identification, it has been labeled the evaluation network (e-network). The present study used self- versus other-generated movement sounds (long jumps) and electroencephalography (EEG) in order to unravel the neural dynamics of agent identification for complex auditory information. Participants (N=14) performed an auditory self-other identification task with EEG. Data were then subjected to a subsequent standardized low-resolution brain electromagnetic tomography (sLORETA) analysis (source localization analysis). Differences between conditions were assessed using t-statistics (corrected for multiple testing) on the normalized and log-transformed current density values of the sLORETA images. Three-dimensional sLORETA source localization analysis revealed cortical activations in brain regions mostly associated with the e-network, especially in the medial prefrontal cortex (bilaterally in the alpha-1-band and right-lateralized in the gamma-band) and the temporo-parietal junction (right hemisphere in the alpha-1-band). Taken together, the findings are partly consistent with previous functional neuroimaging studies investigating unimodal visual or multimodal agent identification tasks (cf. e-network) and extend them to the auditory domain. Cortical activations in brain regions of the e-network seem to have functional relevance, especially the significantly higher cortical activation in the right medial prefrontal cortex. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. Latency of modality-specific reactivation of auditory and visual information during episodic memory retrieval.

    Science.gov (United States)

    Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao

    2015-04-15

    This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to choose whether each recognition word was not presented or was presented with which information during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipoles analysis of MEG data indicated that higher equivalent current dipole amplitudes in the right fusiform gyrus occurred during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
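
    The confidence-of-recognition score d' mentioned above is the standard signal-detection measure: the difference between z-transformed hit and false-alarm rates. A minimal sketch follows; the hit/false-alarm rates and item counts are hypothetical, and a simple 1/(2N) correction is applied to avoid infinite z-scores.

```python
# Sketch: signal-detection d' from hit and false-alarm rates.
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate, n_targets, n_lures):
    """d' with a 1/(2N) correction for rates of exactly 0 or 1."""
    hr = min(max(hit_rate, 1 / (2 * n_targets)), 1 - 1 / (2 * n_targets))
    far = min(max(false_alarm_rate, 1 / (2 * n_lures)), 1 - 1 / (2 * n_lures))
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical word-recognition performance (40 old items, 40 new items)
print(f"auditory condition d':  {d_prime(0.85, 0.20, 40, 40):.2f}")
print(f"word-only condition d': {d_prime(0.70, 0.25, 40, 40):.2f}")
```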

  20. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. Results of the activation difference of the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation of the ILL stimulus from the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Reverse Engineering Tone-Deafness: Disrupting Pitch-Matching by Creating Temporary Dysfunctions in the Auditory-Motor Network

    Directory of Open Access Journals (Sweden)

    Anja Hohmann

    2018-01-01

    Full Text Available Perceiving and producing vocal sounds are important functions of the auditory-motor system and are fundamental to communication. Prior studies have identified a network of brain regions involved in pitch production, specifically pitch matching. Here we reverse engineer the function of the auditory perception-production network by targeting specific cortical regions (e.g., the right and left posterior superior temporal (pSTG) and posterior inferior frontal gyri (pIFG)) with cathodal transcranial direct current stimulation (tDCS), which is commonly found to decrease excitability in the underlying cortical region, allowing us to causally test the role of particular nodes in this network. Performance on a pitch-matching task was determined before and after 20 min of cathodal stimulation. Acoustic analyses of pitch productions showed impaired accuracy after cathodal stimulation to the left pIFG and the right pSTG in comparison to sham stimulation. Both regions share particular roles in the feedback and feedforward motor control of pitched vocal production with a differential hemispheric dominance.

  2. Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.

    Science.gov (United States)

    Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd

    2014-11-01

    In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and

  3. Auditory conflict and congruence in frontotemporal dementia.

    Science.gov (United States)

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias, however the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  4. Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.

    Science.gov (United States)

    Poremba, Amy; Mishkin, Mortimer

    2007-07-01

    Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys.

  5. Hearing in action; auditory properties of neurones in the red nucleus of alert primates

    Directory of Open Access Journals (Sweden)

    Jonathan Murray Lovell

    2014-05-01

    Full Text Available The responses of neurones in the Red Nucleus pars magnocellularis (RNm) to both tone bursts and electrical stimulation were observed in three cynomolgus monkeys (Macaca fascicularis), in a series of studies primarily designed to characterise the influence of the dopaminergic ventral midbrain on auditory processing. Compared with its role in motor behaviour, little is known about the sensory response properties of neurons in the red nucleus, particularly those concerning the auditory modality. Sites in the RN were recognised by observing electrically evoked body movements characteristic for this deep brain structure. In this study we applied brief monopolar electrical stimulation to 118 deep brain sites at a maximum intensity of 200 µA, thus evoking minimal body movements. Auditory sensitivity of RN neurons was analysed more thoroughly at 15 sites, with the majority exhibiting broad tuning curves and phase locking up to 1.03 kHz. Since the RN appears to receive inputs from a very early stage of the ascending auditory system, our results suggest that sounds can modify the motor control exerted by this brain nucleus. At selected locations, we also tested for the presence of functional connections between the RN and the auditory cortex by inserting additional microelectrodes into the auditory cortex and investigating how action potentials and local field potentials were affected by electrical stimulation of the RN.

  6. Fused cerebral organoids model interactions between brain regions.

    Science.gov (United States)

    Bagley, Joshua A; Reumann, Daniel; Bian, Shan; Lévi-Strauss, Julie; Knoblich, Juergen A

    2017-07-01

    Human brain development involves complex interactions between different regions, including long-distance neuronal migration or formation of major axonal tracts. Different brain regions can be cultured in vitro within 3D cerebral organoids, but the random arrangement of regional identities limits the reliable analysis of complex phenotypes. Here, we describe a coculture method combining brain regions of choice within one organoid tissue. By fusing organoids of dorsal and ventral forebrain identities, we generate a dorsal-ventral axis. Using fluorescent reporters, we demonstrate CXCR4-dependent GABAergic interneuron migration from ventral to dorsal forebrain and describe methodology for time-lapse imaging of human interneuron migration. Our results demonstrate that cerebral organoid fusion cultures can model complex interactions between different brain regions. Combined with reprogramming technology, fusions should offer researchers the possibility to analyze complex neurodevelopmental defects using cells from neurological disease patients and to test potential therapeutic compounds.

  7. Music training relates to the development of neural mechanisms of selective auditory attention.

    Science.gov (United States)

    Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina

    2015-04-01

    Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in both school-aged and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Auditory cortex involvement in emotional learning and memory.

    Science.gov (United States)

    Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B

    2015-07-23

    Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long-lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. Brain-wide maps of Fos expression during fear learning and recall.

    Science.gov (United States)

    Cho, Jin-Hyung; Rendall, Sam D; Gray, Jesse M

    2017-04-01

    Fos induction during learning labels neuronal ensembles in the hippocampus that encode a specific physical environment, revealing a memory trace. In the cortex and other regions, the extent to which Fos induction during learning reveals specific sensory representations is unknown. Here we generate high-quality brain-wide maps of Fos mRNA expression during auditory fear conditioning and recall in the setting of the home cage. These maps reveal a brain-wide pattern of Fos induction that is remarkably similar among fear conditioning, shock-only, tone-only, and fear recall conditions, casting doubt on the idea that Fos reveals auditory-specific sensory representations. Indeed, novel auditory tones lead to as much gene induction in visual as in auditory cortex, while familiar (nonconditioned) tones do not appreciably induce Fos anywhere in the brain. Fos expression levels do not correlate with physical activity, suggesting that they are not determined by behavioral activity-driven alterations in sensory experience. In the thalamus, Fos is induced more prominently in limbic than in sensory relay nuclei, suggesting that Fos may be most sensitive to emotional state. Thus, our data suggest that Fos expression during simple associative learning labels ensembles activated generally by arousal rather than specifically by a particular sensory cue. © 2017 Cho et al.; Published by Cold Spring Harbor Laboratory Press.

  10. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    Science.gov (United States)

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors 0270-6474/16/361620-11$15.00/0.

  11. Brain metabolism in autism. Resting cerebral glucose utilization rates as measured with positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Rumsey, J.M.; Duara, R.; Grady, C.; Rapoport, J.L.; Margolin, R.A.; Rapoport, S.I.; Cutler, N.R.

    1985-05-01

    The cerebral metabolic rate for glucose was studied in ten men (mean age = 26 years) with well-documented histories of infantile autism and in 15 age-matched normal male controls using positron emission tomography and (F-18) 2-fluoro-2-deoxy-D-glucose. Positron emission tomography was completed during rest, with reduced visual and auditory stimulation. While the autistic group as a whole showed significantly elevated glucose utilization in widespread regions of the brain, there was considerable overlap between the two groups. No brain region showed a reduced metabolic rate in the autistic group. Significantly more autistic, as compared with control, subjects showed extreme relative metabolic rates (ratios of regional metabolic rates to whole brain rates and asymmetries) in one or more brain regions.

  12. Brain metabolism in autism. Resting cerebral glucose utilization rates as measured with positron emission tomography

    International Nuclear Information System (INIS)

    Rumsey, J.M.; Duara, R.; Grady, C.; Rapoport, J.L.; Margolin, R.A.; Rapoport, S.I.; Cutler, N.R.

    1985-01-01

    The cerebral metabolic rate for glucose was studied in ten men (mean age = 26 years) with well-documented histories of infantile autism and in 15 age-matched normal male controls using positron emission tomography and (F-18) 2-fluoro-2-deoxy-D-glucose. Positron emission tomography was completed during rest, with reduced visual and auditory stimulation. While the autistic group as a whole showed significantly elevated glucose utilization in widespread regions of the brain, there was considerable overlap between the two groups. No brain region showed a reduced metabolic rate in the autistic group. Significantly more autistic, as compared with control, subjects showed extreme relative metabolic rates (ratios of regional metabolic rates to whole brain rates and asymmetries) in one or more brain regions

  13. Regional cerebral blood flow during the auditory oddball task measured by positron emission tomography

    International Nuclear Information System (INIS)

    Mochida, Masahiko

    1997-01-01

    Regional cerebral blood flow (rCBF) was measured with PET in nine healthy right-handed male subjects while they performed an auditory oddball task using tone bursts. Results showed that the rCBF value was highest in the transverse gyrus of Heschl in both right and left hemispheres. When comparing the rCBF values between the right and left hemispheres, four areas had higher rCBF values in the left hemisphere and eight areas had higher rCBF values in the right hemisphere. Of these, the anterior and posterior parts of the superior temporal gyrus, especially, showed significant differences. The hemispheric differences in the rCBF values of the auditory areas can be attributed to the performance of the oddball task, which requires higher-order processing of nonverbal auditory input. The P300 amplitude, which reflects the amount of allocated information-processing resources, correlated positively with rCBF in the following areas: the left piriform cortex and the transverse gyrus of Heschl in both hemispheres. Meanwhile, P300 amplitude correlated negatively with rCBF in the nucleus accumbens septi in both hemispheres. The N100 amplitude evoked by the frequent stimulus did not correlate with rCBF in nearly any ROI. (K.H.)
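    The ROI-wise analysis described above (relating P300 amplitude to regional CBF across subjects) amounts to a separate Pearson correlation per region. Below is a minimal illustrative sketch in Python; the subject values, region names, and array shapes are hypothetical, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 9 subjects, a few regions of interest (ROIs).
p300_amplitude = np.array([8.2, 6.5, 9.1, 7.4, 5.9, 8.8, 7.0, 6.2, 9.5])  # microvolts
roi_names = ["L Heschl", "R Heschl", "L piriform", "R nucleus accumbens"]
rcbf = np.random.default_rng(0).normal(50, 5, size=(9, len(roi_names)))   # ml/100 g/min

# Correlate P300 amplitude with rCBF separately in each ROI.
for j, name in enumerate(roi_names):
    r, p = stats.pearsonr(p300_amplitude, rcbf[:, j])
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")
```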

  14. Gender-specific effects of prenatal and adolescent exposure to tobacco smoke on auditory and visual attention.

    Science.gov (United States)

    Jacobsen, Leslie K; Slotkin, Theodore A; Mencl, W Einar; Frost, Stephen J; Pugh, Kenneth R

    2007-12-01

    Prenatal exposure to active maternal tobacco smoking elevates risk of cognitive and auditory processing deficits, and of smoking in offspring. Recent preclinical work has demonstrated a sex-specific pattern of reduction in cortical cholinergic markers following prenatal, adolescent, or combined prenatal and adolescent exposure to nicotine, the primary psychoactive component of tobacco smoke. Given the importance of cortical cholinergic neurotransmission to attentional function, we examined auditory and visual selective and divided attention in 181 male and female adolescent smokers and nonsmokers with and without prenatal exposure to maternal smoking. Groups did not differ in age, educational attainment, symptoms of inattention, or years of parent education. A subset of 63 subjects also underwent functional magnetic resonance imaging while performing an auditory and visual selective and divided attention task. Among females, exposure to tobacco smoke during prenatal or adolescent development was associated with reductions in auditory and visual attention performance accuracy that were greatest in female smokers with prenatal exposure (combined exposure). Among males, combined exposure was associated with marked deficits in auditory attention, suggesting greater vulnerability of neurocircuitry supporting auditory attention to insult stemming from developmental exposure to tobacco smoke in males. Activation of brain regions that support auditory attention was greater in adolescents with prenatal or adolescent exposure to tobacco smoke relative to adolescents with neither prenatal nor adolescent exposure to tobacco smoke. These findings extend earlier preclinical work and suggest that, in humans, prenatal and adolescent exposure to nicotine exerts gender-specific deleterious effects on auditory and visual attention, with concomitant alterations in the efficiency of neurocircuitry supporting auditory attention.

  15. [Communication and auditory behavior obtained by auditory evoked potentials in mammals, birds, amphibians, and reptiles].

    Science.gov (United States)

    Arch-Tirado, Emilio; Collado-Corona, Miguel Angel; Morales-Martínez, José de Jesús

    2004-01-01

    amphibians, Rana catesbeiana (bullfrog, 30 animals); reptiles, Sceloporus torquatus (common lizard, 22 animals); birds, Columba livia (common dove, 20 animals); and mammals, Cavia porcellus (guinea pig, 20 animals). All animals were housed at the Institute of Human Communication Disorders, were fed species-appropriate diets, and had water available ad libitum. For recording of brainstem auditory evoked potentials, amphibians, birds, and mammals were anesthetized with injected ketamine at 20, 25, and 50 mg/kg, respectively; reptiles were anesthetized by cooling (6 degrees C). Needle electrodes were placed along the mid-sagittal line between the ears and eyes, behind the right ear, and behind the left ear. Stimulation was delivered in a quiet room through a loudspeaker in free field. The signal was filtered between 100 and 3,000 Hz and analyzed with an evoked-potential system (Racia APE 78). Evoked responses in amphibians showed longer latencies than those of the other species, and latencies in reptiles were shorter than in amphibians. Birds showed the shortest latency values, whereas guinea pig latencies were longer than those of doves; guinea pigs, however, could be stimulated at 10 dB, the best auditory threshold among the four species studied. Finally, it was corroborated that the auditory threshold of each species decreases as one ascends the phylogenetic scale. From these recordings we can say that the brainstem evoked response becomes more complex and shows shorter absolute latencies with advancing position on the phylogenetic scale; thus, the auditory thresholds obtained agree with the phylogenetic ordering of the species studied. These data indicate that the processing of auditory information is more complex in more

  16. Alteration of glycine receptor immunoreactivity in the auditory brainstem of mice following three months of exposure to radiofrequency radiation at SAR 4.0 W/kg.

    Science.gov (United States)

    Maskey, Dhiraj; Kim, Hyung Gun; Suh, Myung-Whan; Roh, Gu Seob; Kim, Myeung Ju

    2014-08-01

    The increasing use of mobile communication has triggered an interest in its possible effects on the regulation of neurotransmitter signals. Because mobile phones are held close to hearing-related brain regions during use, prolonged exposure to radiofrequency (RF) radiation may lead to a decrease in the ability to segregate sounds, resulting in serious auditory dysfunction. The interplay among auditory processing, excitation and inhibitory molecule interactions plays a major role in auditory function. In particular, inhibitory molecules, such as glycine, are predominantly localized in the auditory brainstem. However, the effects of exposure to RF radiation on auditory function have not been reported to date. Thus, the aim of the present study was to investigate the effects of exposure to RF radiation on glycine receptor (GlyR) immunoreactivity (IR) in the auditory brainstem region at 835 MHz with a specific absorption rate of 4.0 W/kg for three months using free-floating immunohistochemistry. Compared with the sham control (SC) group, a significant loss of staining intensity of neuropils and cells in the different subdivisions of the auditory brainstem regions was observed in the mice exposed to RF radiation (E4 group). A decrease in the number of GlyR immunoreactive cells was also noted in the cochlear nuclear complex [anteroventral cochlear nucleus (AVCN), 31.09%; dorsal cochlear nucleus (DCN), 14.08%; posteroventral cochlear nucleus (PVCN), 32.79%] and the superior olivary complex (SOC) [lateral superior olivary nucleus (LSO), 36.85%; superior paraolivary nucleus (SPN), 24.33%; medial superior olivary nucleus (MSO), 23.23%; medial nucleus of the trapezoid body (MNTB), 10.15%] of the mice in the E4 group. Auditory brainstem response (ABR) analysis also revealed a significant threshold elevation in the exposed (E4) group, which may be associated with auditory dysfunction. The present study suggests that the auditory brainstem region

  17. Altered intrinsic connectivity of the auditory cortex in congenital amusia.

    Science.gov (United States)

    Leveque, Yohana; Fauvel, Baptiste; Groussard, Mathilde; Caclin, Anne; Albouy, Philippe; Platel, Hervé; Tillmann, Barbara

    2016-07-01

    Congenital amusia, a neurodevelopmental disorder of music perception and production, has been associated with abnormal anatomical and functional connectivity in a right frontotemporal pathway. To investigate whether spontaneous connectivity in brain networks involving the auditory cortex is altered in the amusic brain, we ran a seed-based connectivity analysis, contrasting at-rest functional MRI data of amusic and matched control participants. Our results reveal reduced frontotemporal connectivity in amusia during resting state, as well as an overconnectivity between the auditory cortex and the default mode network (DMN). The findings suggest that the auditory cortex is intrinsically more engaged toward internal processes and less available to external stimuli in amusics compared with controls. Beyond amusia, our findings provide new evidence for the link between cognitive deficits in pathology and abnormalities in the connectivity between sensory areas and the DMN at rest. Copyright © 2016 the American Physiological Society.
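    Seed-based resting-state analysis of the kind described above reduces, at its core, to correlating the seed region's mean BOLD time course with every other voxel or parcel. The sketch below is a simplified illustration under assumed inputs (a time-by-voxel data matrix and a boolean seed mask); the names and shapes are not taken from the study, and group statistics on Fisher z-transformed maps are omitted.

```python
import numpy as np

def seed_connectivity(data: np.ndarray, seed_mask: np.ndarray) -> np.ndarray:
    """Correlate the mean time course of a seed region with every voxel.

    data      : (n_timepoints, n_voxels) preprocessed BOLD time series
    seed_mask : (n_voxels,) boolean mask selecting the seed (e.g. auditory cortex)
    returns   : (n_voxels,) map of Pearson correlations with the seed
    """
    seed_ts = data[:, seed_mask].mean(axis=1)             # average seed time course
    z_seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    z_data = (data - data.mean(axis=0)) / data.std(axis=0)
    return z_seed @ z_data / len(seed_ts)                 # voxel-wise correlation

# Toy example: 200 time points, 1000 voxels, first 50 voxels used as the seed.
rng = np.random.default_rng(1)
bold = rng.normal(size=(200, 1000))
mask = np.zeros(1000, dtype=bool)
mask[:50] = True
r_map = seed_connectivity(bold, mask)
```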

  18. Visual and auditory stimuli associated with swallowing. An fMRI study

    International Nuclear Information System (INIS)

    Kawai, Takeshi; Watanabe, Yutaka; Tonogi, Morio; Yamane, Gen-yuki; Abe, Shinichi; Yamada, Yoshiaki; Callan, Akiko

    2009-01-01

    We focused on brain areas activated by audiovisual stimuli related to swallowing motions. In this study, three kinds of stimuli related to human swallowing movement (auditory stimuli alone, visual stimuli alone, or audiovisual stimuli) were presented to the subjects, and activated brain areas were measured using functional MRI (fMRI) and analyzed. When auditory stimuli alone were presented, the supplementary motor area was activated. When visual stimuli alone were presented, the premotor and primary motor areas of the left and right hemispheres and prefrontal area of the left hemisphere were activated. When audiovisual stimuli were presented, the prefrontal and premotor areas of the left and right hemispheres were activated. Activation of Broca's area, which would have been characteristic of mirror neuron system activation on presentation of motion images, was not observed; however, activation of brain areas related to swallowing motion programming and performance was verified for auditory, visual and audiovisual stimuli related to swallowing motion. These results suggest that audiovisual stimuli related to swallowing motion could be applied to the treatment of patients with dysphagia. (author)

  19. Regional changes in brain 2-14C-deoxyglucose uptake induced by convulsant and non-convulsant doses of lindane

    International Nuclear Information System (INIS)

    Sanfeliu, C.; Sola, C.; Camon, L.; Martinez, E.; Rodriguez-Farre, E.

    1990-01-01

    Lindane-induced dose- and time-related changes in regional 2-14C-deoxyglucose (2-DG) uptake were examined in 59 discrete rat brain structures using the 2-DG autoradiographic technique. At different times (0.5-144 hr) after administration of a seizure-inducing single dose of lindane (60 mg/kg), 2-DG uptake was significantly increased in 18 cortical and subcortical regions mainly related to the limbic system (e.g., Ammon's horn, dentate gyrus, septal nuclei, nucleus accumbens, olfactory cortex) and extrapyramidal and sensory-motor areas (e.g., cerebellar cortex, red nucleus, medial vestibular nucleus). There was also a significant increase in superior colliculus layer II. In addition, significant decreases occurred in a group of 6 regions (e.g., auditory and motor cortices). Non-convulsing animals treated with the same dose of lindane showed a regional pattern of 2-DG uptake less modified than the convulsant group. A non-convulsant single dose of lindane (30 mg/kg) also modified significantly the 2-DG uptake (0.5-24 hr) in some brain areas. Although the various single doses of lindane tested produced different altered patterns of brain 2-DG uptake, some structures showed a similar trend in their modification (e.g., superior colliculi and accumbens, raphe and red nuclei). Repeated non-convulsant doses of lindane produced defined and long-lasting significant elevations of 2-DG uptake in some subcortical structures. Considering the treated groups all together, 2-DG uptake increased significantly in 26 of the 59 regions examined but only decreased significantly in 9 of them during the course of lindane effects. This fact can be related to the stimulant action described for this neurotoxic agent. The observed pattern provides a descriptive approach to the functional alterations occurring in vivo during the course of lindane intoxication

  20. Amino acid and acetylcholine chemistry in the central auditory system of young, middle-aged and old rats.

    Science.gov (United States)

    Godfrey, Donald A; Chen, Kejian; O'Toole, Thomas R; Mustapha, Abdurrahman I A A

    2017-07-01

    Older adults generally experience difficulties with hearing. Age-related changes in the chemistry of central auditory regions, especially the chemistry underlying synaptic transmission between neurons, may be of particular relevance for hearing changes. In this study, we used quantitative microchemical methods to map concentrations of amino acids, including the major neurotransmitters of the brain, in all the major central auditory structures of young (6 months), middle-aged (22 months), and old (33 months old) Fischer 344 x Brown Norway rats. In addition, some amino acid measurements were made for vestibular nuclei, and activities of choline acetyltransferase, the enzyme for acetylcholine synthesis, were mapped in the superior olive and auditory cortex. In old, as compared to young, rats, glutamate concentrations were lower throughout central auditory regions. Aspartate and glycine concentrations were significantly lower in many and GABA and taurine concentrations in some cochlear nucleus and superior olive regions. Glutamine concentrations and choline acetyltransferase activities were higher in most auditory cortex layers of old rats as compared to young. Where there were differences between young and old rats, amino acid concentrations in middle-aged rats often lay between those in young and old rats, suggesting gradual changes during adult life. The results suggest that hearing deficits in older adults may relate to decreases in excitatory (glutamate) as well as inhibitory (glycine and GABA) neurotransmitter amino acid functions. Chemical changes measured in aged rats often differed from changes measured after manipulations that directly damage the cochlea, suggesting that chemical changes during aging may not all be secondary to cochlear damage. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Different Stimuli, Different Spatial Codes: A Visual Map and an Auditory Rate Code for Oculomotor Space in the Primate Superior Colliculus

    Science.gov (United States)

    Lee, Jungah; Groh, Jennifer M.

    2014-01-01

    Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior. PMID:24454779

  2. The neural correlates of coloured music: a functional MRI investigation of auditory-visual synaesthesia.

    Science.gov (United States)

    Neufeld, J; Sinke, C; Dillo, W; Emrich, H M; Szycik, G R; Dima, D; Bleich, S; Zedler, M

    2012-01-01

    In auditory-visual synaesthesia, all kinds of sound can induce additional visual experiences. To identify the brain regions mainly involved in this form of synaesthesia, functional magnetic resonance imaging (fMRI) has been used during non-linguistic sound perception (chords and pure tones) in synaesthetes and non-synaesthetes. Synaesthetes showed increased activation in the left inferior parietal cortex (IPC), an area involved in multimodal integration, feature binding and attention guidance. No significant group-differences could be detected in area V4, which is known to be related to colour vision and form processing. The results support the idea of the parietal cortex acting as sensory nexus area in auditory-visual synaesthesia, and as a common neural correlate for different types of synaesthesia. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Top-down modulation of the auditory steady-state response in a task-switch paradigm

    Directory of Open Access Journals (Sweden)

    Nadia Müller

    2009-02-01

    Full Text Available Auditory selective attention is an important mechanism for top-down selection of the vast amount of auditory information our perceptual system is exposed to. In the present study, the impact of attention on auditory steady-state responses - previously shown to be generated in primary auditory regions - was investigated. This issue is still a matter of debate and recent findings point to a complex pattern of attentional effects on the aSSR. The present study aimed at shedding light on the involvement of ipsilateral and contralateral activations to the attended sound taking into account hemispheric differences and a possible dependency on modulation frequency. In aid of this, a dichotic listening experiment was designed using amplitude-modulated tones that were presented to the left and right ear simultaneously. Participants had to detect target tones in a cued ear while their brain activity was assessed using MEG. Thereby, a modulation of the aSSR by attention could be revealed, interestingly restricted to the left hemisphere and 20 Hz responses: Contralateral activations were enhanced while ipsilateral activations turned out to be reduced. Thus, our findings support and extend recent findings, showing that auditory attention can influence the aSSR, but only under specific circumstances and in a complex pattern regarding the different effects for ipsilateral and contralateral activations.

  4. MR and genetics in schizophrenia: Focus on auditory hallucinations

    International Nuclear Information System (INIS)

    Aguilar, Eduardo Jesus; Sanjuan, Julio; Garcia-Marti, Gracian; Lull, Juan Jose; Robles, Montserrat

    2008-01-01

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significant decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented

  5. MR and genetics in schizophrenia: Focus on auditory hallucinations

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Eduardo Jesus [Psychiatric Service, Clinic University Hospital, Avda. Blasco Ibanez 17, 46010 Valencia (Spain)], E-mail: eduardoj.aguilar@gmail.com; Sanjuan, Julio [Psychiatric Unit, Faculty of Medicine, Valencia University, Avda. Blasco Ibanez 17, 46010 Valencia (Spain); Garcia-Marti, Gracian [Department of Radiology, Hospital Quiron, Avda. Blasco Ibanez 14, 46010 Valencia (Spain); Lull, Juan Jose; Robles, Montserrat [ITACA Institute, Polytechnic University of Valencia, Camino de Vera s/n, 46022 Valencia (Spain)

    2008-09-15

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significant decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented.

  6. Whole brain and brain regional coexpression network interactions associated with predisposition to alcohol consumption.

    Directory of Open Access Journals (Sweden)

    Lauren A Vanderlinden

    Full Text Available To identify brain transcriptional networks that may predispose an animal to consume alcohol, we used weighted gene coexpression network analysis (WGCNA). Candidate coexpression modules are those with an eigengene expression level that correlates significantly with the level of alcohol consumption across a panel of BXD recombinant inbred mouse strains, and that share a genomic region that regulates the module transcript expression levels (mQTL) with a genomic region that regulates alcohol consumption (bQTL). To address a controversy regarding the utility of gene expression profiles from whole brain versus specific brain regions as indicators of the relationship of gene expression to phenotype, we compared candidate coexpression modules from whole-brain gene expression data (gathered with Affymetrix 430 v2 arrays in the Colorado laboratories) and from gene expression data from six brain regions (nucleus accumbens, NA; prefrontal cortex, PFC; ventral tegmental area, VTA; striatum, ST; hippocampus, HP; cerebellum, CB) available from GeneNetwork. The candidate modules were used to construct candidate eigengene networks across brain regions, resulting in three "meta-modules", composed of candidate modules from two or more brain regions (NA, PFC, ST, VTA) and whole brain. To mitigate the potential influence of chromosomal location of transcripts and cis-eQTLs in linkage disequilibrium, we calculated a semi-partial correlation of the transcripts in the meta-modules with alcohol consumption conditional on the transcripts' cis-eQTLs. The function of transcripts that retained the correlation with the phenotype after correction for this strong genetic influence implicates processes of protein metabolism in the ER and Golgi as influencing susceptibility to variation in alcohol consumption. Integration of these data with human GWAS provides further information on the function of polymorphisms associated with alcohol-related traits.
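    The semi-partial correlation step described above can be read as: remove the part of each transcript's expression that is explained by its cis-eQTL, then correlate the residual with alcohol consumption. The following is a toy sketch of that statistic only; the strain-level arrays are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy import stats

def semipartial_corr(expression, cis_genotype, phenotype):
    """Correlate phenotype with the part of expression not explained by the cis-eQTL."""
    # Regress expression on the cis-eQTL genotype and keep the residuals.
    slope, intercept, *_ = stats.linregress(cis_genotype, expression)
    residual = expression - (intercept + slope * cis_genotype)
    # Correlate the residualized expression with the phenotype.
    return stats.pearsonr(residual, phenotype)

# Toy data for 30 hypothetical recombinant inbred strains (alleles coded 0/1).
rng = np.random.default_rng(2)
genotype = rng.integers(0, 2, size=30).astype(float)
expression = 0.8 * genotype + rng.normal(size=30)
alcohol = 0.3 * expression + rng.normal(size=30)
print(semipartial_corr(expression, genotype, alcohol))
```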

  7. Brain regions activated by the passive processing of visually- and auditorily-presented words measured by averaged PET images of blood flow change

    International Nuclear Information System (INIS)

    Peterson, S.E.; Fox, P.T.; Posner, M.I.; Raichle, M.E.

    1987-01-01

    A limited number of regions specific to input modality are activated by the auditory and visual presentation of single words. These regions include primary auditory and visual cortex, as well as modality-specific higher-order regions that may be performing computations at the word level of analysis.

  8. Task-dependent modulation of regions in the left temporal cortex during auditory sentence comprehension.

    Science.gov (United States)

    Zhang, Linjun; Yue, Qiuhai; Zhang, Yang; Shu, Hua; Li, Ping

    2015-01-01

    Numerous studies have revealed the essential role of the left lateral temporal cortex in auditory sentence comprehension along with evidence of the functional specialization of the anterior and posterior temporal sub-areas. However, it is unclear whether task demands (e.g., active vs. passive listening) modulate the functional specificity of these sub-areas. In the present functional magnetic resonance imaging (fMRI) study, we addressed this issue by applying both independent component analysis (ICA) and general linear model (GLM) methods. Consistent with previous studies, intelligible sentences elicited greater activity in the left lateral temporal cortex relative to unintelligible sentences. Moreover, responses to intelligibility in the sub-regions were differentially modulated by task demands. While the overall activation patterns of the anterior and posterior superior temporal sulcus and middle temporal gyrus (STS/MTG) were equivalent during both passive and active tasks, a middle portion of the STS/MTG was found to be selectively activated only during the active task under a refined analysis of sub-regional contributions. Our results not only confirm the critical role of the left lateral temporal cortex in auditory sentence comprehension but further demonstrate that task demands modulate functional specialization of the anterior-middle-posterior temporal sub-areas. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  9. Is the auditory sensory memory sensitive to visual information?

    Science.gov (United States)

    Besle, Julien; Fort, Alexandra; Giard, Marie-Hélène

    2005-10-01

    The mismatch negativity (MMN) component of auditory event-related brain potentials can be used as a probe to study the representation of sounds in auditory sensory memory (ASM). Yet it has been shown that an auditory MMN can also be elicited by an illusory auditory deviance induced by visual changes. This suggests that some visual information may be encoded in ASM and is accessible to the auditory MMN process. It is not known, however, whether visual information affects ASM representation for any audiovisual event or whether this phenomenon is limited to specific domains in which strong audiovisual illusions occur. To highlight this issue, we have compared the topographies of MMNs elicited by non-speech audiovisual stimuli deviating from audiovisual standards on the visual, the auditory, or both dimensions. Contrary to what occurs with audiovisual illusions, each unimodal deviant elicited sensory-specific MMNs, and the MMN to audiovisual deviants included both sensory components. The visual MMN was, however, different from a genuine visual MMN obtained in a visual-only control oddball paradigm, suggesting that auditory and visual information interacts before the MMN process occurs. Furthermore, the MMN to audiovisual deviants was significantly different from the sum of the two sensory-specific MMNs, showing that the processes of visual and auditory change detection are not completely independent.

  10. Auditory properties in the parabelt regions of the superior temporal gyrus in the awake macaque monkey: an initial survey.

    Science.gov (United States)

    Kajikawa, Yoshinao; Frey, Stephen; Ross, Deborah; Falchier, Arnaud; Hackett, Troy A; Schroeder, Charles E

    2015-03-11

    The superior temporal gyrus (STG) is on the inferior-lateral brain surface near the external ear. In macaques, 2/3 of the STG is occupied by an auditory cortical region, the "parabelt," which is part of a network of inferior temporal areas subserving communication and social cognition as well as object recognition and other functions. However, due to its location beneath the squamous temporal bone and temporalis muscle, the STG, like other inferior temporal regions, has been a challenging target for physiological studies in awake-behaving macaques. We designed a new procedure for implanting recording chambers to provide direct access to the STG, allowing us to evaluate neuronal properties and their topography across the full extent of the STG in awake-behaving macaques. Initial surveys of the STG have yielded several new findings. Unexpectedly, STG sites in monkeys that were listening passively responded to tones with magnitudes comparable to those of responses to 1/3 octave band-pass noise. Mapping results showed longer response latencies in more rostral sites and possible tonotopic patterns parallel to core and belt areas, suggesting the reversal of gradients between caudal and rostral parabelt areas. These results will help further exploration of parabelt areas. Copyright © 2015 the authors 0270-6474/15/354140-11$15.00/0.

  11. Music and natural sounds in an auditory steady-state response based brain-computer interface to increase user acceptance.

    Science.gov (United States)

    Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk

    2017-05-01

    Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electrophysiologic response to auditory stimulation that is amplitude-modulated by a specific frequency. By leveraging the phenomenon whereby the ASSR is modulated by mental concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method to minimize auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42 Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was utilized to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores, while maintaining a high average classification accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
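    The feature set and classifier described above (spectral power at the two modulation frequencies plus their ratio, per electrode, fed to linear discriminant analysis) can be sketched roughly as follows. The sampling rate, epoch dimensions, and labels are placeholders, not parameters reported in the paper.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256                      # assumed sampling rate (Hz)
TARGET_FREQS = (38.0, 42.0)   # ASSR modulation frequencies used as features

def assr_features(epoch: np.ndarray) -> np.ndarray:
    """epoch: (n_channels, n_samples) EEG segment -> power at 38/42 Hz and their ratio, per channel."""
    feats = []
    for ch in epoch:
        freqs, psd = welch(ch, fs=FS, nperseg=FS * 2)
        p38 = psd[np.argmin(np.abs(freqs - TARGET_FREQS[0]))]
        p42 = psd[np.argmin(np.abs(freqs - TARGET_FREQS[1]))]
        feats.extend([p38, p42, p38 / p42])
    return np.array(feats)

# Toy training set: 40 epochs, 4 channels (e.g. Cz, Oz, T7, T8), 4-s segments.
rng = np.random.default_rng(3)
epochs = rng.normal(size=(40, 4, FS * 4))
labels = rng.integers(0, 2, size=40)          # which of the two streams was attended
X = np.vstack([assr_features(e) for e in epochs])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```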

  12. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  13. Speaking Two Languages Enhances an Auditory but Not a Visual Neural Marker of Cognitive Inhibition

    Directory of Open Access Journals (Sweden)

    Mercedes Fernandez

    2014-09-01

    Full Text Available The purpose of the present study was to replicate and extend our original findings of enhanced neural inhibitory control in bilinguals. We compared English monolinguals to Spanish/English bilinguals on a non-linguistic, auditory Go/NoGo task while recording event-related brain potentials. New to this study was the visual Go/NoGo task, which we included to investigate whether enhanced neural inhibition in bilinguals extends from the auditory to the visual modality. Results confirmed our original findings and revealed greater inhibition in bilinguals compared to monolinguals. As predicted, compared to monolinguals, bilinguals showed increased N2 amplitude during the auditory NoGo trials, which required inhibitory control, but no differences during the Go trials, which required a behavioral response and no inhibition. Interestingly, during the visual Go/NoGo task, event related brain potentials did not distinguish the two groups, and behavioral responses were similar between the groups regardless of task modality. Thus, only auditory trials that required inhibitory control revealed between-group differences indicative of greater neural inhibition in bilinguals. These results show that experience-dependent neural changes associated with bilingualism are specific to the auditory modality and that the N2 event-related brain potential is a sensitive marker of this plasticity.

  14. Baseline vestibular and auditory findings in a trial of post-concussive syndrome

    Science.gov (United States)

    Meehan, Anna; Searing, Elizabeth; Weaver, Lindell; Lewandowski, Andrew

    2016-01-01

    Previous studies have reported high rates of auditory and vestibular-balance deficits immediately following head injury. This study uses a comprehensive battery of assessments to characterize auditory and vestibular function in 71 U.S. military service members with chronic symptoms following mild traumatic brain injury that did not resolve with traditional interventions. The majority of the study population reported hearing loss (70%) and recent vestibular symptoms (83%). Central auditory deficits were most prevalent, with 58% of participants failing the SCAN3:A screening test and 45% showing abnormal responses on auditory steady-state response testing presented at a suprathreshold intensity. Only 17% of the participants had abnormal hearing (>25 dB hearing loss) based on the pure-tone average. Objective vestibular testing supported significant deficits in this population, regardless of whether the participant self-reported active symptoms. Composite score on the Sensory Organization Test was lower than expected from normative data (mean 69.6 ± 15.6). High abnormality rates were found in funduscopy torsion (58%), oculomotor assessments (49%), ocular and cervical vestibular evoked myogenic potentials (46% and 33%, respectively), and monothermal calorics (40%). It is recommended that a full peripheral and central auditory, oculomotor, and vestibular-balance evaluation be completed on military service members who have sustained head trauma.

  15. Human capital in European peripheral regions: brain - drain and brain - gain

    NARCIS (Netherlands)

    Coenen, Franciscus H.J.M.

    2004-01-01

    Project goal - The overall goal of the project is to build a legitimate transnational network to transfer ideas and experiences and implement measures to reduce brain drain and foster brain gain while reinforcing the economical and spatial development of peripheral regions in NWE. This means a

  16. Brain-computer interfaces

    DEFF Research Database (Denmark)

    Treder, Matthias S.; Miklody, Daniel; Blankertz, Benjamin

    ...... of perceptual and cognitive biases. Furthermore, subjects can only report on stimuli if they have a clear percept of them. On the other hand, the electroencephalogram (EEG), the electrical brain activity measured with electrodes on the scalp, is a more direct measure. It allows us to tap into the ongoing neural...... auditory processing stream. In particular, it can tap brain processes that are pre-conscious or even unconscious, such as the earliest brain responses to sound stimuli in primary auditory cortex. In a series of studies, we used a machine learning approach to show that the EEG can accurately reflect...... quality measure'. We were able to show that for stimuli close to the perceptual threshold, there was sometimes a discrepancy between overt responses and brain responses, shedding light on subjects using different response criteria (e.g., more liberal or more conservative). To conclude, brain-computer......

  17. Exploratory study of once-daily transcranial direct current stimulation (tDCS) as a treatment for auditory hallucinations in schizophrenia.

    Science.gov (United States)

    Fröhlich, F; Burrello, T N; Mellin, J M; Cordle, A L; Lustenberger, C M; Gilmore, J H; Jarskog, L F

    2016-03-01

    Auditory hallucinations are resistant to pharmacotherapy in about 25% of adults with schizophrenia. Treatment with noninvasive brain stimulation would provide a welcome additional tool for the clinical management of auditory hallucinations. A recent study found a significant reduction in auditory hallucinations in people with schizophrenia after five days of twice-daily transcranial direct current stimulation (tDCS) that simultaneously targeted left dorsolateral prefrontal cortex and left temporo-parietal cortex. We hypothesized that once-daily tDCS with stimulation electrodes over left frontal and temporo-parietal areas reduces auditory hallucinations in patients with schizophrenia. We performed a randomized, double-blind, sham-controlled study that evaluated five days of daily tDCS of the same cortical targets in 26 outpatients with schizophrenia and schizoaffective disorder with auditory hallucinations. We found a significant reduction in auditory hallucinations measured by the Auditory Hallucination Rating Scale (F2,50 = 12.22) that was not specific to active stimulation. This lack of superiority of active over sham tDCS for treatment of auditory hallucinations, together with the pronounced response in the sham-treated group in this study, contrasts with the previous finding and demonstrates the need for further optimization and evaluation of noninvasive brain stimulation strategies. In particular, higher cumulative doses and higher treatment frequencies of tDCS together with strategies to reduce placebo responses should be investigated. Additionally, consideration of more targeted stimulation to engage specific deficits in temporal organization of brain activity in patients with auditory hallucinations may be warranted. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  18. Regional oligodendrocytopathy and astrocytopathy precede myelin loss and blood-brain barrier disruption in a murine model of osmotic demyelination syndrome.

    Science.gov (United States)

    Bouchat, Joanna; Couturier, Bruno; Marneffe, Catherine; Gankam-Kengne, Fabrice; Balau, Benoît; De Swert, Kathleen; Brion, Jean-Pierre; Poncelet, Luc; Gilloteaux, Jacques; Nicaise, Charles

    2018-03-01

    The osmotic demyelination syndrome (ODS) is a non-primary inflammatory disorder of central nervous system myelin that is often associated with a precipitous rise in serum sodium concentration. To investigate the physiopathology of ODS in vivo, we generated a novel murine model based on the abrupt correction of chronic hyponatremia. Accordingly, ODS mice developed impairments in brainstem auditory evoked potentials and in grip strength. At 24 hr post-correction, oligodendrocyte markers (APC and Cx47) were downregulated, prior to any detectable demyelination. Oligodendrocytopathy was temporally and spatially correlated with the loss of astrocyte markers (ALDH1L1 and Cx43), and both mapped onto the brain areas that would later develop demyelination. Oligodendrocytopathy and astrocytopathy were confirmed at the ultrastructural level and culminated in necroptotic cell death, as demonstrated by pMLKL immunoreactivity. At 48 hr post-correction, ODS brains contained pathognomonic demyelinating lesions in the pons, mesencephalon, thalamus and cortical regions. This damage was accompanied by blood-brain barrier (BBB) leakage. Expression levels of IL-1β, FasL, TNFRSF6 and LIF factors were significantly upregulated in the ODS lesions. Quiescent type A microglial cells acquired an activated type B morphology within 24 hr post-correction and reached type D by 48 hr. In conclusion, this murine model of ODS reproduces the CNS demyelination observed in human pathology and points to regional vulnerability of oligodendrocytes and astrocytes as an early causal event, while arguing against BBB disruption as a primary cause of demyelination. This study also raises new queries about glial heterogeneity in susceptible brain regions as well as about the early microglial activation associated with ODS. © 2017 Wiley Periodicals, Inc.

  19. Active auditory experience in infancy promotes brain plasticity in Theta and Gamma oscillations

    Directory of Open Access Journals (Sweden)

    Gabriella Musacchia

    2017-08-01

    Full Text Available Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx) over a 6-week period from 4- to 7-months-of-age confers a processing advantage, compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4–6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33–37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.

  20. Auditory memory function in expert chess players.

    Science.gov (United States)

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Like other behavioral skills, auditory memory may be strengthened by long-term chess playing because of shared processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The Persian version of the dichotic auditory-verbal memory test was administered to 30 expert chess players aged 20-35 years and 30 matched non-chess players; the participants in both groups were randomly selected. The performance of the two groups was compared by independent-samples t-test using SPSS version 21. The mean dichotic auditory-verbal memory test scores of the two groups, expert chess players and non-chess players, differed significantly (p≤ 0.001). The difference between ear scores was significant for both expert chess players (p= 0.023) and non-chess players (p= 0.013). Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better than in non-chess players. It seems that enhanced auditory memory function is related to the strengthening of cognitive performance through long-term chess playing.
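
    The group comparison reported above is an independent-samples t-test. As a minimal illustration of the same statistic outside SPSS, the sketch below uses SciPy on made-up scores; the numbers are hypothetical and are not the study's data.

    ```python
    from scipy import stats

    # Hypothetical dichotic auditory-verbal memory scores (illustrative only)
    chess_players = [28, 31, 27, 30, 29, 33, 26, 32]
    non_players   = [24, 22, 25, 23, 26, 21, 24, 25]

    t, p = stats.ttest_ind(chess_players, non_players)  # independent-samples t-test
    print(f"t = {t:.2f}, p = {p:.4f}")
    ```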

  1. Recovery function of the human brain stem auditory-evoked potential.

    Science.gov (United States)

    Kevanishvili, Z; Lagidze, Z

    1979-01-01

    Amplitude reduction and peak latency prolongation were observed in the human brain stem auditory-evoked potential (BEP) with preceding (conditioning) stimulation. At a conditioning interval (CI) of 5 ms the alteration of BEP was greater than at a CI of 10 ms. At a CI of 10 ms the amplitudes of some BEP components (e.g. waves I and II) were more decreased than those of others (e.g. wave V), while the peak latency prolongation did not show any obvious component selectivity. At a CI of 5 ms, the extent of the amplitude decrement of individual BEP components differed less, while the increase in the peak latencies of the later components was greater than that of the earlier components. The alterations of the parameters of the test BEPs at both CIs are ascribed to the desynchronization of intrinsic neural events. The differential amplitude reduction at a CI of 10 ms is explained by the different durations of neural firings determining various effects of desynchronization upon the amplitudes of individual BEP components. The decrease in the extent of the component selectivity and the preferential increase in the peak latencies of the later BEP components observed at a CI of 5 ms are explained by the intensification of the mechanism of the relative refractory period.

  2. Brain region-dependent differential expression of alpha-synuclein.

    Science.gov (United States)

    Taguchi, Katsutoshi; Watanabe, Yoshihisa; Tsujimura, Atsushi; Tanaka, Masaki

    2016-04-15

    α-Synuclein, the major constituent of Lewy bodies (LBs), is normally expressed in presynapses and is involved in synaptic function. Abnormal intracellular aggregation of α-synuclein is observed as LBs and Lewy neurites in neurodegenerative disorders, such as Parkinson's disease (PD) or dementia with Lewy bodies. Accumulated evidence suggests that abundant intracellular expression of α-synuclein is one of the risk factors for pathological aggregation. Recently, we reported differential expression patterns of α-synuclein between excitatory and inhibitory hippocampal neurons. Here we further investigated the precise expression profile in the adult mouse brain with special reference to vulnerable regions along the progression of idiopathic PD. The results show that α-synuclein was highly expressed in the neuronal cell bodies of some early PD-affected brain regions, such as the olfactory bulb, dorsal motor nucleus of the vagus, and substantia nigra pars compacta. Synaptic expression of α-synuclein was mostly accompanied by expression of vesicular glutamate transporter-1, an excitatory presynaptic marker. In contrast, expression of α-synuclein in the GABAergic inhibitory synapses was different among brain regions. α-Synuclein was clearly expressed in inhibitory synapses in the external plexiform layer of the olfactory bulb, globus pallidus, and substantia nigra pars reticulata, but not in the cerebral cortex, subthalamic nucleus, or thalamus. These results suggest that some neurons in early PD-affected human brain regions express high levels of perikaryal α-synuclein, as happens in the mouse brain. Additionally, synaptic profiles expressing α-synuclein are different in various brain regions. © 2015 Wiley Periodicals, Inc.

  3. No auditory experience, no tinnitus: Lessons from subjects with congenital- and acquired single-sided deafness.

    Science.gov (United States)

    Lee, Sang-Yeon; Nam, Dong Woo; Koo, Ja-Won; De Ridder, Dirk; Vanneste, Sven; Song, Jae-Jin

    2017-10-01

    Recent studies have adopted the Bayesian brain model to explain the generation of tinnitus in subjects with auditory deafferentation. That is, as the human brain works in a Bayesian manner to reduce environmental uncertainty, missing auditory information due to hearing loss may cause auditory phantom percepts, i.e., tinnitus. This type of deafferentation-induced auditory phantom percept should be preceded by auditory experience because the fill-in phenomenon, namely tinnitus, is based upon auditory prediction and the resultant prediction error. For example, a recent animal study observed the absence of tinnitus in cats with congenital single-sided deafness (SSD; Eggermont and Kral, Hear Res 2016). However, no human studies have investigated the presence and characteristics of tinnitus in subjects with congenital SSD. Thus, the present study sought to reveal differences in the generation of tinnitus between subjects with congenital SSD and those with acquired SSD to evaluate the replicability of previous animal studies. This study enrolled 20 subjects with congenital SSD and 44 subjects with acquired SSD and examined the presence and characteristics of tinnitus in the groups. None of the 20 subjects with congenital SSD perceived tinnitus on the affected side, whereas 30 of 44 subjects with acquired SSD experienced tinnitus on the affected side. Additionally, there were significant positive correlations between tinnitus characteristics and the audiometric characteristics of the SSD. In accordance with the findings of the recent animal study, tinnitus was absent in subjects with congenital SSD, but relatively frequent in subjects with acquired SSD, which suggests that the development of tinnitus should be preceded by auditory experience. In other words, subjects with profound congenital peripheral deafferentation do not develop auditory phantom percepts because no auditory predictions are available from the Bayesian brain. Copyright © 2017 Elsevier B.V. All rights

  4. Myelination progression in language-correlated regions in brain of normal children determined by quantitative MRI assessment.

    Science.gov (United States)

    Su, Peijen; Kuan, Chen-Chieh; Kaga, Kimitaka; Sano, Masaki; Mima, Kazuo

    2008-12-01

    To investigate the myelination progression course in language-correlated regions of children with normal brain development by quantitative magnetic resonance imaging (MRI) analysis compared with histological studies. The subjects were 241 neurologically intact neonates, infants and young children (128 boys and 113 girls) who underwent MRI between 2001 and 2007 at the University of Tokyo Hospital, ranging in age from 0 to 429 weeks corrected by postnatal age. To compare their data with adult values, 25 adolescents and adults (14 men and 11 women, aged from 14 to 83 years) were examined as controls. Axial T2-weighted images were obtained using spin-echo sequences at 1.5 T. Subjects with a history of prematurity, birth asphyxia, low Apgar score, seizures, active systemic disease, congenital anomaly, delayed development, infarcts, hemorrhages, brain lesions, or central nervous system malformation were excluded from the analysis. Seven regions of interest in language-correlated areas, namely Broca's area, Wernicke's area, the arcuate fasciculus, and the angular gyrus, as well as their right hemisphere homologous regions, and the auditory cortex, the motor cortex, and the visual cortex were examined. Signal intensity obtained by a region-of-interest methodology progresses from hyper- to hypointensity during myelination. We chose the inferior cerebellar peduncle as the internal standard of maturation. Myelination in all these seven language-correlated regions examined in this study shared the same curve pattern: no myelination was observed at birth, it reached maturation at about 1.5 years of age, and it continued to progress slowly thereafter into adult life. On the basis of scatter plot results, we put these areas into three groups: Group A, which included the motor cortex, the auditory cortex, and the visual cortex, myelinated faster than Group B, which included Broca's area, Wernicke's area, and the angular gyrus before 1.5 years old; Group C, consisting of the

  5. Perceptual processing of a complex auditory context

    DEFF Research Database (Denmark)

    Quiroga Martinez, David Ricardo; Hansen, Niels Christian; Højlund, Andreas

    The mismatch negativity (MMN) is a brain response elicited by deviants in a series of repetitive sounds. It reflects the perception of change in low-level sound features and reliably measures perceptual auditory memory. However, most MMN studies use simple tone patterns as stimuli, failing...

  6. Rey's Auditory Verbal Learning Test scores can be predicted from whole brain MRI in Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Elaheh Moradi

    2017-01-01

    Full Text Available Rey's Auditory Verbal Learning Test (RAVLT) is a powerful neuropsychological tool for testing episodic memory, which is widely used for cognitive assessment in dementia and pre-dementia conditions. Several studies have shown that impairment in RAVLT scores reflects well the underlying pathology caused by Alzheimer's disease (AD), thus making RAVLT an effective early marker to detect AD in persons with memory complaints. We investigated the association between RAVLT scores (RAVLT Immediate and RAVLT Percent Forgetting) and the structural brain atrophy caused by AD. The aim was to comprehensively study to what extent the RAVLT scores are predictable based on structural magnetic resonance imaging (MRI) data using machine learning approaches as well as to find the most important brain regions for the estimation of RAVLT scores. For this, we built a predictive model to estimate RAVLT scores from gray matter density via an elastic net penalized linear regression model. The proposed approach provided highly significant cross-validated correlation between the estimated and observed RAVLT Immediate (R = 0.50) and RAVLT Percent Forgetting (R = 0.43) in a dataset consisting of 806 AD, mild cognitive impairment (MCI) or healthy subjects. In addition, the selected machine learning method provided more accurate estimates of RAVLT scores than the relevance vector regression used earlier for the estimation of RAVLT based on MRI data. The top predictors were medial temporal lobe structures and amygdala for the estimation of RAVLT Immediate and angular gyrus, hippocampus and amygdala for the estimation of RAVLT Percent Forgetting. Further, the conversion of MCI subjects to AD within 3 years could be predicted based on either observed or estimated RAVLT scores with an accuracy comparable to MRI-based biomarkers.
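
    As a rough illustration of the modelling approach described above (cross-validated elastic net regression of a memory score on voxel-wise gray matter density), the sketch below uses scikit-learn on synthetic data. The feature matrix, score values and hyperparameter grid are assumptions for illustration; the study's actual preprocessing and pipeline are not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import ElasticNetCV
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 1000))                   # subjects x gray-matter-density features (synthetic)
    y = X[:, :10].sum(axis=1) + rng.standard_normal(200)   # synthetic stand-in for a RAVLT score

    model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], n_alphas=30, cv=5, max_iter=5000)
    y_hat = cross_val_predict(model, X, y, cv=10)          # out-of-sample score estimates
    r, p = pearsonr(y, y_hat)
    print(f"cross-validated correlation R = {r:.2f} (p = {p:.2g})")

    model.fit(X, y)                                        # refit on all data to inspect predictors
    top = np.argsort(np.abs(model.coef_))[::-1][:10]       # largest absolute weights
    print("indices of the most predictive features:", top)
    ```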

  7. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Temporal integration of sequential auditory events: silent period in sound pattern activates human planum temporale.

    Science.gov (United States)

    Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich

    2003-09-01

    Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

  9. Behavioral and brain pattern differences between acting and observing in an auditory task

    Directory of Open Access Journals (Sweden)

    Ventouras Errikos M

    2009-01-01

    Full Text Available Abstract Background Recent research has shown that errors seem to influence the patterns of brain activity. Additionally, current notions support the idea that similar brain mechanisms are activated during acting and observing. The aim of the present study was to examine the patterns of brain activity of actors and observers elicited upon receiving feedback information about the actor's response. Methods The task used in the present research was an auditory identification task that included both acting and observing settings, ensuring concurrent ERP measurements of both participants. The performance of the participants was investigated in conditions of varying complexity. ERP data were analyzed with regard to the conditions of acting and observing in conjunction with correct and erroneous responses. Results The obtained results showed that the complexity induced by cue dissimilarity between trials was a demodulating factor leading to poorer performance. The electrophysiological results suggest that feedback information results in different intensities of the ERP patterns of observers and actors depending on whether the actor had made an error or not. The LORETA source localization method yielded significantly larger electrical activity in the supplementary motor area (Brodmann area 6), the posterior cingulate gyrus (Brodmann area 31/23) and the parietal lobe (precuneus/Brodmann area 7/5). Conclusion These findings suggest that feedback information has a different effect on the intensities of the ERP patterns of actors and observers depending on whether the actor committed an error. Certain neural systems, including the medial frontal area, posterior cingulate gyrus and precuneus, may mediate these modulating effects. Further research is needed to elucidate in more detail the neuroanatomical and neuropsychological substrates of these systems.

  10. Human-Avatar Symbiosis for the Treatment of Auditory Verbal Hallucinations in Schizophrenia through Virtual/Augmented Reality and Brain-Computer Interfaces.

    Science.gov (United States)

    Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J; Latorre, José M; Rodriguez-Jimenez, Roberto

    2017-01-01

    This perspective paper looks ahead to alternative treatments that complement pharmacological therapy with a social and cognitive approach to auditory verbal hallucinations (AVH) in patients with schizophrenia. AVH, the perception of voices in the absence of auditory stimulation, represent a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain-computer interfaces (BCI) are technologies with rapidly expanding medical and psychological applications. Our position is that their combined use in computer-based therapies offers still unforeseen possibilities for the treatment of physical and mental disabilities. The paper therefore anticipates that researchers and clinicians will pursue a pathway toward human-avatar symbiosis for AVH by taking full advantage of these new technologies. This outlook entails addressing challenging issues in the understanding of non-pharmacological treatment of schizophrenia-related disorders and in exploiting VR/AR and BCI to achieve a real human-avatar symbiosis.

  11. Neurons derived from different brain regions are inherently different in vitro: a novel multiregional brain-on-a-chip.

    Science.gov (United States)

    Dauth, Stephanie; Maoz, Ben M; Sheehy, Sean P; Hemphill, Matthew A; Murty, Tara; Macedonia, Mary Kate; Greer, Angie M; Budnik, Bogdan; Parker, Kevin Kit

    2017-03-01

    Brain in vitro models are critically important to developing our understanding of basic nervous system cellular physiology, potential neurotoxic effects of chemicals, and specific cellular mechanisms of many disease states. In this study, we sought to address key shortcomings of current brain in vitro models: the scarcity of comparative data for cells originating from distinct brain regions and the lack of multiregional brain in vitro models. We demonstrated that rat neurons from different brain regions exhibit unique profiles regarding their cell composition, protein expression, metabolism, and electrical activity in vitro. In vivo, the brain is unique in its structural and functional organization, and the interactions and communication between different brain areas are essential components of proper brain function. This fact and the observation that neurons from different areas of the brain exhibit unique behaviors in vitro underline the importance of establishing multiregional brain in vitro models. Therefore, we here developed a multiregional brain-on-a-chip and observed a reduction of overall firing activity, as well as altered amounts of astrocytes and specific neuronal cell types compared with separately cultured neurons. Furthermore, this multiregional model was used to study the effects of phencyclidine, a drug known to induce schizophrenia-like symptoms in vivo, on individual brain areas separately while monitoring downstream effects on interconnected regions. Overall, this work provides a comparison of cells from different brain regions in vitro and introduces a multiregional brain-on-a-chip that enables the development of unique disease models incorporating essential in vivo features. NEW & NOTEWORTHY Due to the scarcity of comparative data for cells from different brain regions in vitro, we demonstrated that neurons isolated from distinct brain areas exhibit unique behaviors in vitro. Moreover, in vivo proper brain function is dependent on the

  12. The role of auditory transient and deviance processing in distraction of task performance: a combined behavioral and event-related brain potential study

    Directory of Open Access Journals (Sweden)

    Stefan eBerti

    2013-07-01

    Full Text Available Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday life experience. Different types of changes can be distracting, including the sudden onset of a transient sound and a slight deviation of otherwise regular auditory background stimulation. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanism, and it has been demonstrated that this mechanism is capable of distracting ongoing task performance. In contrast, it remains open whether transient detection – which does not rely on predictive coding mechanisms – can trigger behavioral distraction, too. In the present study, the effect of rare auditory changes on visual task performance is tested in an auditory-visual cross-modal distraction paradigm. The rare changes are either embedded within a continuous standard stimulation (triggering deviance detection) or are presented within an otherwise silent situation (triggering transient detection). In the event-related brain potentials, deviants elicited the mismatch negativity (MMN) while transients elicited an enhanced N1 component, mirroring pre-attentive change detection in both conditions but on the basis of different neuro-cognitive processes. These sensory components are followed by attention-related ERP components including the P3a and the reorienting negativity (RON). This demonstrates that both types of changes trigger switches of attention. Finally, distraction of task performance is observable, too, but the impact of deviants is greater than that of transients. These findings suggest different routes of distraction, allowing for the automatic processing of a wide range of potentially relevant changes in the environment as a prerequisite for adaptive behavior.

  13. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS).

    Science.gov (United States)

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  14. Mapping the after-effects of theta burst stimulation on the human auditory cortex with functional imaging.

    Science.gov (United States)

    Andoh, Jamila; Zatorre, Robert J

    2012-09-12

    Auditory cortex pertains to the processing of sound, which is at the basis of speech or music-related processing. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial Magnetic Stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions. One solution to this problem is to combine TMS with functional Magnetic resonance imaging (fMRI). The idea here is that fMRI will provide an index of changes in brain activity associated with TMS. Thus, fMRI would give an independent means of assessing which areas are affected by TMS and how they are modulated. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which the network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS. However, this

  15. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Full Text Available Background and Aim: Blockade of adenosine receptors in the central nervous system by caffeine can increase the levels of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may alter conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2 and 3 mg/kg BW caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared to the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after 2 and 3 mg/kg BW caffeine consumption. Wave I latency decreased significantly after 3 mg/kg BW caffeine consumption (p<0.01). Conclusion: The increase in glutamate levels resulting from adenosine receptor blockade alters conduction in the central auditory pathway.
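
    The nonparametric statistics mentioned above (a Friedman test across the three doses with Wilcoxon signed-rank follow-ups) can be run with SciPy. The latency values below are hypothetical and serve only to show the calls; they are not the study's measurements.

    ```python
    import numpy as np
    from scipy.stats import friedmanchisquare, wilcoxon

    # Hypothetical wave V latencies (ms) for the same subjects at 0, 2 and 3 mg/kg caffeine
    lat_0mg = np.array([5.68, 5.72, 5.65, 5.80, 5.70, 5.74])
    lat_2mg = np.array([5.60, 5.66, 5.58, 5.71, 5.63, 5.69])
    lat_3mg = np.array([5.55, 5.62, 5.54, 5.66, 5.58, 5.64])

    chi2, p = friedmanchisquare(lat_0mg, lat_2mg, lat_3mg)   # omnibus test across doses
    print(f"Friedman chi2 = {chi2:.2f}, p = {p:.4f}")

    # Pairwise follow-up (Wilcoxon signed-rank), e.g. baseline vs. 3 mg/kg
    w, p_pair = wilcoxon(lat_0mg, lat_3mg)
    print(f"Wilcoxon W = {w:.1f}, p = {p_pair:.4f}")
    ```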

  16. Deep transcranial magnetic stimulation for the treatment of auditory hallucinations: a preliminary open-label study.

    Science.gov (United States)

    Rosenberg, Oded; Roth, Yiftach; Kotler, Moshe; Zangen, Abraham; Dannon, Pinhas

    2011-02-09

    Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region, on auditory hallucinations. Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from the ambulatory clinics of Beer Ya'akov Mental Health Institution (Tel Aviv University, Israel), as well as from other hospitals' outpatient populations. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS) as well as the Scale for the Assessment of Positive Symptoms (SAPS), the Clinical Global Impressions (CGI) scale, and the Scale for the Assessment of Negative Symptoms (SANS). This preliminary study demonstrated a significant improvement in AHRS score (an average reduction of 31.7% ± 32.2%) and, to a lesser extent, an improvement in SAPS scores (an average reduction of 16.5% ± 20.3%). In this study, we have demonstrated the potential of deep TMS over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Larger double-blind, sham-controlled studies are now being performed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. This trial is registered with clinicaltrials.gov (identifier: NCT00564096).

  17. Examining frontotemporal connectivity and rTMS in healthy controls: implications for auditory hallucinations in schizophrenia.

    Science.gov (United States)

    Gromann, Paula M; Tracy, Derek K; Giampietro, Vincent; Brammer, Michael J; Krabbendam, Lydia; Shergill, Sukhwinder S

    2012-01-01

    Repetitive transcranial magnetic stimulation (rTMS) has been shown to have clinically beneficial effects in altering the perception of auditory hallucinations (AH) in patients with schizophrenia. However, the mode of action is not clear. Recent neuroimaging findings indicate that rTMS has the potential to induce not only local effects but also changes in remote, functionally connected brain regions. Frontotemporal dysconnectivity has been proposed as a mechanism leading to psychotic symptoms in schizophrenia. The current study examines functional connectivity between temporal and frontal brain regions after rTMS and the implications for AH in schizophrenia. A connectivity analysis was conducted on the fMRI data of 11 healthy controls receiving rTMS, compared with 11 matched subjects receiving sham TMS, to the temporoparietal junction, before engaging in a task associated with robust frontotemporal activation. Compared to the control group, the rTMS group showed an altered frontotemporal connectivity with stronger connectivity between the right temporoparietal cortex and the dorsolateral prefrontal cortex and the angular gyrus. This finding provides preliminary evidence for the hypothesis that normalizing the functional connectivity between the temporoparietal and frontal brain regions may underlie the therapeutic effect of rTMS on AH in schizophrenia.

  18. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

    We investigated the existence of a cross-modal sensory gating phenomenon reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), in comparison with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.

  19. Automated recognition of brain region mentions in neuroscience literature

    Directory of Open Access Journals (Sweden)

    Leon French

    2009-09-01

    Full Text Available The ability to computationally extract mentions of neuroanatomical regions from the literature would assist linking to other entities within and outside of an article. Examples include extracting reports of connectivity or region-specific gene expression. To facilitate text mining of the neuroscience literature we have created a corpus of manually annotated brain region mentions. The corpus contains 1,377 abstracts with 18,242 brain region annotations. Interannotator agreement was evaluated for a subset of the documents, and was 90.7% and 96.7% for strict and lenient matching, respectively. We observed a large vocabulary of over 6,000 unique brain region terms and 17,000 words. For automatic extraction of brain region mentions we evaluated simple dictionary methods and complex natural language processing techniques. The dictionary methods based on neuroanatomical lexicons recalled 36% of the mentions with 57% precision. The best performance was achieved using a conditional random field (CRF) with a rich feature set. Features were based on morphological, lexical, syntactic and contextual information. The CRF recalled 76% of mentions at 81% precision; when counting partial matches, recall and precision increase to 86% and 92%, respectively. We suspect a large amount of error is due to coordinating conjunctions, previously unseen words and brain regions of less commonly studied organisms. We found context windows, lemmatization and abbreviation expansion to be the most informative techniques. The corpus is freely available at http://www.chibi.ubc.ca/WhiteText/.
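
    A minimal sketch of the CRF approach described above, using the sklearn-crfsuite package and a toy two-sentence training set with BIO labels for brain-region spans. The feature template, label scheme and example sentences are simplified assumptions; the published system used a much richer feature set and the full annotated corpus.

    ```python
    import sklearn_crfsuite  # pip install sklearn-crfsuite

    def token_features(tokens, i):
        """Simplified morphological, lexical and contextual features for one token."""
        w = tokens[i]
        return {
            "lower": w.lower(),
            "is_title": w.istitle(),
            "suffix3": w[-3:].lower(),
            "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
            "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
        }

    # Tiny toy training set (the real corpus contains 1,377 annotated abstracts)
    sentences = [["Neurons", "in", "the", "inferior", "colliculus", "respond", "to", "tones"],
                 ["We", "recorded", "from", "primary", "auditory", "cortex"]]
    labels = [["O", "O", "O", "B-BR", "I-BR", "O", "O", "O"],
              ["O", "O", "O", "B-BR", "I-BR", "I-BR"]]

    X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
    crf.fit(X, labels)

    test = ["the", "medial", "geniculate", "body"]
    print(crf.predict([[token_features(test, i) for i in range(len(test))]]))
    ```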

  20. Brainstem auditory evoked responses in an equine patient population: part I--adult horses.

    Science.gov (United States)

    Aleman, M; Holliday, T A; Nieto, J E; Williams, D C

    2014-01-01

    Brainstem auditory evoked response has been an underused diagnostic modality in horses as evidenced by few reports on the subject. To describe BAER findings, common clinical signs, and causes of hearing loss in adult horses. Study group, 76 horses; control group, 8 horses. Retrospective. BAER records from the Clinical Neurophysiology Laboratory were reviewed from the years of 1982 to 2013. Peak latencies, amplitudes, and interpeak intervals were measured when visible. Horses were grouped under disease categories. Descriptive statistics and a posthoc Bonferroni test were performed. Fifty-seven of 76 horses had BAER deficits. There was no breed or sex predisposition, with the exception of American Paint horses diagnosed with congenital sensorineural deafness. Eighty-six percent (n = 49/57) of the horses were younger than 16 years of age. The most common causes of BAER abnormalities were temporohyoid osteoarthropathy (THO, n = 20/20; abnormalities/total), congenital sensorineural deafness in Paint horses (17/17), multifocal brain disease (13/16), and otitis media/interna (4/4). Auditory loss was bilateral and unilateral in 74% (n = 42/57) and 26% (n = 15/57) of the horses, respectively. The most common causes of bilateral auditory loss were sensorineural deafness, THO, and multifocal brain disease whereas THO and otitis were the most common causes of unilateral deficits. Auditory deficits should be investigated in horses with altered behavior, THO, multifocal brain disease, otitis, and in horses with certain coat and eye color patterns. BAER testing is an objective and noninvasive diagnostic modality to assess auditory function in horses. Copyright © 2014 by the American College of Veterinary Internal Medicine.

  1. GABAA receptors in visual and auditory cortex and neural activity changes during basic visual stimulation

    Directory of Open Access Journals (Sweden)

    Pengmin eQin

    2012-12-01

    Full Text Available Recent imaging studies have demonstrated that levels of resting GABA in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; the change from closed eyes to open also represents a simple visual stimulus, however, and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABAA receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) states, in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing modelling of the haemodynamic response, followed by longer periods of EC and EO to allow measurement of functional connectivity. The same subjects also underwent [18F]Flumazenil PET to measure GABAA receptor binding potentials. It was demonstrated that the local-to-global ratio of GABAA receptor binding potential in the visual cortex predicted the degree of changes in neural activity from EC to EO. This same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABAA receptor binding potential in the visual cortex also predicted the change in functional connectivity between visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABAA receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.

  2. Regional growth and atlasing of the developing human brain.

    Science.gov (United States)

    Makropoulos, Antonios; Aljabar, Paul; Wright, Robert; Hüning, Britta; Merchant, Nazakat; Arichi, Tomoki; Tusor, Nora; Hajnal, Joseph V; Edwards, A David; Counsell, Serena J; Rueckert, Daniel

    2016-01-15

    Detailed morphometric analysis of the neonatal brain is required to characterise brain development and define neuroimaging biomarkers related to impaired brain growth. Accurate automatic segmentation of neonatal brain MRI is a prerequisite to analyse large datasets. We have previously presented an accurate and robust automatic segmentation technique for parcellating the neonatal brain into multiple cortical and subcortical regions. In this study, we further extend our segmentation method to detect cortical sulci and provide a detailed delineation of the cortical ribbon. These detailed segmentations are used to build a 4-dimensional spatio-temporal structural atlas of the brain for 82 cortical and subcortical structures throughout this developmental period. We employ the algorithm to segment an extensive database of 420 MR images of the developing brain, from 27 to 45weeks post-menstrual age at imaging. Regional volumetric and cortical surface measurements are derived and used to investigate brain growth and development during this critical period and to assess the impact of immaturity at birth. Whole brain volume, the absolute volume of all structures studied, cortical curvature and cortical surface area increased with increasing age at scan. Relative volumes of cortical grey matter, cerebellum and cerebrospinal fluid increased with age at scan, while relative volumes of white matter, ventricles, brainstem and basal ganglia and thalami decreased. Preterm infants at term had smaller whole brain volumes, reduced regional white matter and cortical and subcortical grey matter volumes, and reduced cortical surface area compared with term born controls, while ventricular volume was greater in the preterm group. Increasing prematurity at birth was associated with a reduction in total and regional white matter, cortical and subcortical grey matter volume, an increase in ventricular volume, and reduced cortical surface area. Copyright © 2015 The Authors. Published by
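
    Once a parcellation such as the one described above is available, regional volumes reduce to voxel counts scaled by voxel size. The sketch below shows this with nibabel and NumPy; the file name and the assumption of a single integer label per structure are illustrative, not the authors' actual pipeline.

    ```python
    import numpy as np
    import nibabel as nib  # pip install nibabel

    seg = nib.load("neonate_labels.nii.gz")            # hypothetical parcellation (one label per structure)
    data = np.asarray(seg.dataobj).astype(int)
    voxel_vol = np.prod(seg.header.get_zooms()[:3])    # mm^3 per voxel

    labels, counts = np.unique(data[data > 0], return_counts=True)
    volumes_ml = counts * voxel_vol / 1000.0           # convert mm^3 to mL
    for lab, vol in zip(labels, volumes_ml):
        print(f"structure {lab}: {vol:.2f} mL")

    print(f"whole brain volume: {volumes_ml.sum():.1f} mL")
    ```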

  3. Dissociable meta-analytic brain networks contribute to coordinated emotional processing.

    Science.gov (United States)

    Riedel, Michael C; Yanes, Julio A; Ray, Kimberly L; Eickhoff, Simon B; Fox, Peter T; Sutherland, Matthew T; Laird, Angela R

    2018-06-01

    Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli are integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks. © 2018 Wiley Periodicals, Inc.
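
    As a schematic of the meta-analytic clustering step described above (hierarchical clustering of experiment-level modeled activation maps, cut into five groupings), the sketch below uses SciPy on synthetic maps. The distance metric, linkage method and map dimensions are illustrative assumptions rather than the authors' exact settings.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    maps = rng.random((1747, 2000))                   # experiments x voxels (synthetic stand-in)

    dist = pdist(maps, metric="correlation")          # dissimilarity between experiment maps
    Z = linkage(dist, method="average")               # agglomerative hierarchical clustering
    groups = fcluster(Z, t=5, criterion="maxclust")   # cut the dendrogram into five groupings
    print("experiments per grouping:", np.bincount(groups)[1:])
    ```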

  4. Music training alters the course of adolescent auditory development

    Science.gov (United States)

    Tierney, Adam T.; Krizman, Jennifer; Kraus, Nina

    2015-01-01

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  5. Diagnosing Dyslexia: The Screening of Auditory Laterality.

    Science.gov (United States)

    Johansen, Kjeld

    A study investigated whether a correlation exists between the degree and nature of left-brain laterality and specific reading and spelling difficulties. Subjects, 50 normal readers and 50 reading disabled persons native to the island of Bornholm, had their auditory laterality screened using pure-tone audiometry and dichotic listening. Results…

  6. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in EEG data using the MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in the CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  8. Distinction of neurochemistry between the cores and their shells of auditory nuclei in tetrapod species.

    Science.gov (United States)

    Zeng, ShaoJu; Li, Jia; Zhang, XinWen; Zuo, MingXue

    2007-01-01

    The distribution of Met-enkephalin (ENK), substance P (SP) and serotonin (5-HT) differs between the core and shell regions of the mesencephalic and diencephalic auditory nuclei of the turtle [Belekhova et al., 2002]. These neurochemical distinctions are also found in other tetrapods (mammals, birds and amphibians). The distribution of ENK, SP and 5-HT was examined in the core and shell regions of both mesencephalic and diencephalic auditory nuclei, and in the telencephalic auditory areas of Bengalese finches (Lonchura striata) and mice (Mus musculus), as well as in corresponding auditory areas in toads (Bufo bufo). ENK, SP and 5-HT immunoreactive fibers and perikarya were largely absent from the core regions of both mesencephalic and diencephalic auditory nuclei, in comparison with the shell regions of mice and Bengalese finches. In the toad, however, this pattern was observed in the mesencephalic auditory nucleus, but not in the diencephalic auditory areas. ENK and SP immunoreactive perikarya were detected in the telencephalic auditory area of mice, whereas no ENK, SP or 5-HT immunolabeling was observed in the telencephalic auditory area (Field L) of Bengalese finches. These findings are discussed in terms of the evolution of the core-and-shell organization of auditory nuclei of tetrapods. Copyright 2007 S. Karger AG, Basel.

  9. Brain regions involved in observing and trying to interpret dog behaviour.

    Science.gov (United States)

    Desmet, Charlotte; van der Wiel, Alko; Brass, Marcel

    2017-01-01

    Humans and dogs have interacted for millennia. As a result, humans (and especially dog owners) sometimes try to interpret dog behaviour. While there is extensive research on the brain regions that are involved in mentalizing about other peoples' behaviour, surprisingly little is known of whether we use these same brain regions to mentalize about animal behaviour. In this fMRI study we investigate whether brain regions involved in mentalizing about human behaviour are also engaged when observing dog behaviour. Here we show that these brain regions are more engaged when observing dog behaviour that is difficult to interpret compared to dog behaviour that is easy to interpret. Interestingly, these results were not only obtained when participants were instructed to infer reasons for the behaviour but also when they passively viewed the behaviour, indicating that these brain regions are activated by spontaneous mentalizing processes.

  10. Cortical evoked potentials to an auditory illusion: binaural beats.

    Science.gov (United States)

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi

    2009-08-01

    To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1000 Hz base frequencies, and compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P(50), N(100) and P(200) components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and with the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P(50) had significantly different sources than the beats-evoked oscillations; and N(100) and P(200) sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from the left and right ears converges and interacts in the central auditory brainstem pathways to generate beats of neural activity that modulate activity in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low-frequency beats can be recorded from the scalp.
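
    The dichotic stimulus described above is straightforward to synthesize: an unmodulated tone at the base frequency in one ear and a tone a few hertz higher in the other. The sketch below generates a 3 Hz beat on a 250 Hz base with NumPy and writes a stereo WAV file; the sampling rate and output file name are illustrative choices, not the study's stimulus-delivery details.

    ```python
    import numpy as np
    from scipy.io import wavfile

    fs = 44100                      # audio sampling rate (illustrative)
    dur = 2.0                       # 2000 ms tones, as in the study
    t = np.arange(int(fs * dur)) / fs

    left  = np.sin(2 * np.pi * 250.0 * t)   # base frequency to one ear
    right = np.sin(2 * np.pi * 253.0 * t)   # 3 Hz higher to the other ear -> 3 Hz binaural beat
    stereo = np.stack([left, right], axis=1).astype(np.float32)

    wavfile.write("binaural_beat_3hz.wav", fs, stereo)  # illustrative output file
    ```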

  11. Memory networks in tinnitus: a functional brain image study.

    Directory of Open Access Journals (Sweden)

    Maura Regina Laureano

    Full Text Available Tinnitus is characterized by the perception of sound in the absence of an external auditory stimulus. The network connectivity of auditory and non-auditory brain structures associated with emotion, memory and attention are functionally altered in debilitating tinnitus. Current studies suggest that tinnitus results from neuroplastic changes in the frontal and limbic temporal regions. The objective of this study was to use Single-Photon Emission Computed Tomography (SPECT to evaluate changes in the cerebral blood flow in tinnitus patients with normal hearing compared with healthy controls.Twenty tinnitus patients with normal hearing and 17 healthy controls, matched for sex, age and years of education, were subjected to Single Photon Emission Computed Tomography using the radiotracer ethylenedicysteine diethyl ester, labeled with Technetium 99 m (99 mTc-ECD SPECT. The severity of tinnitus was assessed using the "Tinnitus Handicap Inventory" (THI. The images were processed and analyzed using "Statistical Parametric Mapping" (SPM8.A significant increase in cerebral perfusion in the left parahippocampal gyrus (pFWE <0.05 was observed in patients with tinnitus compared with healthy controls. The average total THI score was 50.8+18.24, classified as moderate tinnitus.It was possible to identify significant changes in the limbic system of the brain perfusion in tinnitus patients with normal hearing, suggesting that central mechanisms, not specific to the auditory pathway, are involved in the pathophysiology of symptoms, even in the absence of clinically diagnosed peripheral changes.

  12. No counterpart of visual perceptual echoes in the auditory system.

    Directory of Open Access Journals (Sweden)

    Barkın İlhan

    Full Text Available It has been previously demonstrated by our group that a visual stimulus made of dynamically changing luminance evokes an echo or reverberation at ~10 Hz, lasting up to a second. In this study we aimed to reveal whether similar echoes also exist in the auditory modality. A dynamically changing auditory stimulus equivalent to the visual stimulus was designed and employed in two separate series of experiments, and the presence of reverberations was analyzed based on reverse correlations between stimulus sequences and EEG epochs. The first experiment directly compared visual and auditory stimuli: while previous findings of ~10 Hz visual echoes were verified, no similar echo was found in the auditory modality regardless of frequency. In the second experiment, we tested if auditory sequences would influence the visual echoes when they were congruent or incongruent with the visual sequences. However, the results in that case similarly did not reveal any auditory echoes, nor any change in the characteristics of visual echoes as a function of audio-visual congruence. The negative findings from these experiments suggest that brain oscillations do not equivalently affect early sensory processes in the visual and auditory modalities, and that alpha (8-13 Hz) oscillations play a special role in vision.
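
    The reverse-correlation analysis mentioned above amounts to cross-correlating the random stimulus sequence with the EEG at a range of lags. The sketch below illustrates the idea on synthetic data in which a 100 ms lag is planted; the sampling rate, trial count and noise level are assumptions, not the study's parameters.

    ```python
    import numpy as np

    fs = 160                           # sampling rate in Hz (assumed)
    n_trials, n_samp = 100, fs * 6     # 6 s stimulus sequences
    rng = np.random.default_rng(2)

    stim = rng.standard_normal((n_trials, n_samp))      # random stimulus sequences
    eeg = np.roll(stim, 16, axis=1) + 5 * rng.standard_normal((n_trials, n_samp))  # toy response lagged by 100 ms

    max_lag = fs                        # estimate the impulse response up to 1 s
    irf = np.zeros(max_lag)
    for lag in range(max_lag):
        x = stim[:, : n_samp - lag].ravel()
        y = eeg[:, lag:].ravel()
        irf[lag] = np.corrcoef(x, y)[0, 1]   # stimulus-EEG correlation at this lag

    print("peak lag (ms):", 1000 * np.argmax(np.abs(irf)) / fs)
    ```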

  13. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and that the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  14. Congenital amusia persists in the developing brain after daily music listening.

    Science.gov (United States)

    Mignault Goulet, Geneviève; Moreau, Patricia; Robitaille, Nicolas; Peretz, Isabelle

    2012-01-01

    Congenital amusia is a neurodevelopmental disorder that affects about 3% of the adult population. Adults experiencing this musical disorder in the absence of macroscopically visible brain injury are described as cases of congenital amusia under the assumption that the musical deficits have been present from birth. Here, we show that this disorder can be expressed in the developing brain. We found that 10- to 13-year-old children exhibit a marked deficit in the detection of fine-grained pitch differences in both musical and acoustic contexts in comparison to normally developing peers of comparable age and general intelligence. This behavioral deficit could be traced to their abnormal P300 brain responses to the detection of subtle pitch changes. The altered pattern of electrical activity does not seem to arise from anomalous functioning of the auditory cortex, because all early components of the brain potentials, the N100, the MMN, and the P200, appear normal. Rather, the brain and behavioral measures point to disrupted information propagation from the auditory cortex to other cortical regions. Furthermore, the behavioral and neural manifestations of the disorder remained unchanged after 4 weeks of daily music listening. These results show that congenital amusia can be detected in childhood despite regular musical exposure and normal intellectual functioning.

  15. Large-scale network dynamics of beta-band oscillations underlie auditory perceptual decision-making

    Directory of Open Access Journals (Sweden)

    Mohsen Alavash

    2017-06-01

    Full Text Available Perceptual decisions vary in the speed at which we make them. Evidence suggests that translating sensory information into perceptual decisions relies on distributed interacting neural populations, with decision speed hinging on power modulations of the neural oscillations. Yet the dependence of perceptual decisions on the large-scale network organization of coupled neural oscillations has remained elusive. We measured magnetoencephalographic signals in human listeners who judged acoustic stimuli composed of carefully titrated clouds of tone sweeps. These stimuli were used in two task contexts, in which the participants judged the overall pitch or direction of the tone sweeps. We traced the large-scale network dynamics of the source-projected neural oscillations on a trial-by-trial basis using power-envelope correlations and graph-theoretical network discovery. In both tasks, faster decisions were predicted by higher segregation and lower integration of coupled beta-band (∼16–28 Hz) oscillations. We also uncovered the brain network states that promoted faster decisions in either lower-order auditory or higher-order control brain areas. Specifically, decision speed in judging the tone sweep direction critically relied on the nodal network configurations of anterior temporal, cingulate, and middle frontal cortices. Our findings suggest that global network communication during perceptual decision-making is implemented in the human brain by large-scale couplings between beta-band neural oscillations. The speed at which we make perceptual decisions varies. This translation of sensory information into perceptual decisions hinges on dynamic changes in neural oscillatory activity. However, the large-scale neural-network embodiment supporting perceptual decision-making is unclear. We addressed this question using two auditory perceptual decision-making tasks. Using graph-theoretical network discovery, we traced the large-scale network
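
    The graph-theoretical quantities referred to above, network segregation and integration computed from power-envelope correlations, can be sketched as follows. The snippet assumes a hypothetical (regions x time) matrix of beta-band power envelopes and uses modularity and global efficiency as generic segregation and integration indices; the study's exact node definitions, thresholding and metrics may differ.

        import numpy as np
        import networkx as nx
        from networkx.algorithms import community

        def network_metrics(envelopes, density=0.1):
            """envelopes: array of shape (n_regions, n_samples) of band-limited power envelopes."""
            corr = np.corrcoef(envelopes)            # power-envelope correlation matrix
            np.fill_diagonal(corr, 0.0)
            n = corr.shape[0]
            n_edges = max(1, int(density * n * (n - 1) / 2))
            triu = np.triu_indices(n, k=1)
            cutoff = np.sort(corr[triu])[-n_edges]   # proportional threshold: keep the strongest links
            adj = (corr >= cutoff) & (corr > 0)
            g = nx.from_numpy_array(adj.astype(int))
            integration = nx.global_efficiency(g)                 # network integration
            parts = community.greedy_modularity_communities(g)
            segregation = community.modularity(g, parts)          # network segregation
            return integration, segregation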

  16. Neural Correlates of Realistic and Unrealistic Auditory Space Perception

    Directory of Open Access Journals (Sweden)

    Akiko Callan

    2011-10-01

    Full Text Available Binaural recordings can simulate externalized auditory space perception over headphones. However, if the orientation of the recorder's head and the orientation of the listener's head are incongruent, the simulated auditory space is not realistic. For example, if a person lying flat on a bed listens to an environmental sound that was recorded by microphones inserted in the ears of a person who was in an upright position, the sound simulates an auditory space rotated 90 degrees relative to the real-world horizontal axis. Our question was whether brain activation patterns differ between the unrealistic auditory space (i.e., the orientation of the listener's head and the orientation of the recorder's head are incongruent) and the realistic auditory space (i.e., the orientations are congruent). River sounds that were binaurally recorded either in a supine or in an upright body position served as auditory stimuli. During fMRI experiments, participants listened to the stimuli and pressed one of two buttons indicating the direction of the water flow (horizontal/vertical). Behavioral results indicated that participants could not differentiate between the congruent and the incongruent conditions. However, neuroimaging results showed that the congruent condition activated the planum temporale significantly more than the incongruent condition.

  17. GABA(A) receptors in visual and auditory cortex and neural activity changes during basic visual stimulation.

    Science.gov (United States)

    Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg

    2012-01-01

    Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; however, the change from closed to open eyes also represents a simple visual stimulus and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes-closed (EC) and eyes-open (EO) states in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing modeling of the haemodynamic response, followed by longer periods of EC and EO to allow measurement of functional connectivity. The same subjects also underwent [18F]flumazenil PET to measure GABA(A) receptor binding potentials. It was demonstrated that the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of change in neural activity from EC to EO. The same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.
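
    The central measure above, a local-to-global ratio of receptor binding potential used as a predictor of the EC-to-EO signal change, amounts to a simple per-subject ratio followed by a correlation or regression across subjects. The sketch below uses synthetic placeholder data; the variable names, values and sample size are illustrative assumptions, not the study's data or pipeline.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_subjects = 14                                    # placeholder sample size
        bp_region = rng.normal(5.0, 0.5, n_subjects)       # regional GABA(A) binding potential (placeholder)
        bp_global = rng.normal(4.0, 0.3, n_subjects)       # whole-brain mean binding potential (placeholder)
        ratio = bp_region / bp_global                      # local-to-global binding-potential ratio
        # synthetic EC->EO signal change, constructed here only so the example runs
        delta_ec_eo = 0.8 * ratio + rng.normal(0.0, 0.1, n_subjects)

        r, p = stats.pearsonr(ratio, delta_ec_eo)          # does the ratio predict the EC->EO change?
        slope, intercept = np.polyfit(ratio, delta_ec_eo, 1)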

  18. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    OpenAIRE

    Matusz, P.J.; Thelen, A.; Amrein, S.; Geiser, E.; Anken, J.; Murray, M.M.

    2015-01-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a ...

  19. Co-speech gestures influence neural activity in brain regions associated with processing semantic information.

    Science.gov (United States)

    Dick, Anthony Steven; Goldin-Meadow, Susan; Hasson, Uri; Skipper, Jeremy I; Small, Steven L

    2009-11-01

    Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, the storyteller made semantically unrelated hand movements. In the third, the storyteller kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech.

  20. Quantitative expression profile of distinct functional regions in the adult mouse brain.

    Directory of Open Access Journals (Sweden)

    Takeya Kasukawa

    Full Text Available The adult mammalian brain is composed of distinct regions with specialized roles including regulation of circadian clocks, feeding, sleep/awake, and seasonal rhythms. To find quantitative differences of expression among such various brain regions, we conducted the BrainStars (B*) project, in which we profiled the genome-wide expression of ∼50 small brain regions, including sensory centers, and centers for motion, time, memory, fear, and feeding. To avoid confounds from temporal differences in gene expression, we sampled each region every 4 hours for 24 hours, and pooled the samples for DNA-microarray assays. Therefore, we focused on spatial differences in gene expression. We used informatics to identify candidate genes with expression changes showing high or low expression in specific regions. We also identified candidate genes with stable expression across brain regions that can be used as new internal control genes, and ligand-receptor interactions of neurohormones and neurotransmitters. Through these analyses, we found 8,159 multi-state genes, 2,212 regional marker gene candidates for 44 small brain regions, 915 internal control gene candidates, and 23,864 inferred ligand-receptor interactions. We also found that these sets include well-known genes as well as novel candidate genes that might be related to specific functions in brain regions. We used our findings to develop an integrated database (http://brainstars.org/) for exploring genome-wide expression in the adult mouse brain, and have made this database openly accessible. These new resources will help accelerate the functional analysis of the mammalian brain and the elucidation of its regulatory network systems.
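
    The informatic screen for regional marker candidates described above boils down to flagging genes whose expression in one region stands far outside their distribution across all regions. A minimal sketch of that idea, with a hypothetical regions x genes matrix and an illustrative z-score cutoff (not the BrainStars criteria), is shown below.

        import numpy as np

        def marker_candidates(expr, region_idx, z_cut=3.0):
            """expr: array of shape (n_regions, n_genes), e.g. log expression values.
            Returns indices of genes whose expression in the given region is
            unusually high relative to all regions (candidate regional markers)."""
            mean = expr.mean(axis=0)
            std = expr.std(axis=0) + 1e-9          # avoid division by zero for flat genes
            z = (expr[region_idx] - mean) / std
            return np.where(z >= z_cut)[0]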

  1. Auditory evoked field measurement using magneto-impedance sensors

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K., E-mail: o-kabou@echo.nuee.nagoya-u.ac.jp; Tajima, S.; Song, D.; Uchiyama, T. [Graduate School of Engineering, Nagoya University, Nagoya (Japan); Hamada, N.; Cai, C. [Aichi Steel Corporation, Tokai (Japan)

    2015-05-07

    The magnetic field of the human brain is extremely weak, and it is typically measured and monitored with magnetoencephalography using superconducting quantum interference devices. In this study, in order to measure the weak magnetic field of the brain, we constructed a Magneto-Impedance sensor (MI sensor) system that can cancel out the background noise without any magnetic shield. Based on our previous studies of brain wave measurements, we used two MI sensors in this system for monitoring both cerebral hemispheres. In this study, we recorded and compared the auditory evoked field signals of the subject, including the N100 (or N1) and the P300 (or P3) brain waves. The results suggest that the MI sensor can be applied to brain activity measurement.

  2. Superior pre-attentive auditory processing in musicians.

    Science.gov (United States)

    Koelsch, S; Schröger, E; Tervaniemi, M

    1999-04-26

    The present study focuses on influences of long-term experience on auditory processing, providing the first evidence for pre-attentively superior auditory processing in musicians. This was revealed by the brain's automatic change-detection response, which is reflected electrically as the mismatch negativity (MMN) and generated by the operation of sensory (echoic) memory, the earliest cognitive memory system. Major chords and single tones were presented to both professional violinists and non-musicians under ignore and attend conditions. Slightly impure chords, presented among perfect major chords, elicited a distinct MMN in professional musicians, but not in non-musicians. This demonstrates that, compared to non-musicians, musicians are superior in pre-attentively extracting more information out of musically relevant stimuli. Since effects of long-term experience on pre-attentive auditory processing have so far been reported for language-specific phonemes only, the results indicate that sensory memory mechanisms can be modulated by training on a more general level.

  3. Deep transcranial magnetic stimulation for the treatment of auditory hallucinations: a preliminary open-label study

    Directory of Open Access Journals (Sweden)

    Zangen Abraham

    2011-02-01

    Full Text Available Abstract Background Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region, on auditory hallucinations. Patients and methods Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from the ambulatory clinics of Beer Ya'akov Mental Health Institution (Tel Aviv University, Israel), as well as from other hospitals' outpatient populations. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS) as well as the Scale for the Assessment of Positive Symptoms (SAPS), the Clinical Global Impressions (CGI) scale, and the Scale for the Assessment of Negative Symptoms (SANS). Results This preliminary study demonstrated a significant improvement in AHRS score (an average reduction of 31.7% ± 32.2%) and, to a lesser extent, improvement in SAPS results (an average reduction of 16.5% ± 20.3%). Conclusions In this study, we have demonstrated the potential of deep TMS treatment over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Larger samples in a double-blind, sham-controlled design are now being performed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. Trial registration This trial is registered with clinicaltrials.gov (identifier: NCT00564096).

  4. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  5. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS)

    Directory of Open Access Journals (Sweden)

    Mohamad Issa

    2016-01-01

    Full Text Available Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  6. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to a changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Precise auditory-vocal mirroring in neurons for learned vocal communication.

    Science.gov (United States)

    Prather, J F; Peters, S; Nowicki, S; Mooney, R

    2008-01-17

    Brain mechanisms for communication must establish a correspondence between sensory and motor codes used to represent the signal. One idea is that this correspondence is established at the level of single neurons that are active when the individual performs a particular gesture or observes a similar gesture performed by another individual. Although neurons that display a precise auditory-vocal correspondence could facilitate vocal communication, they have yet to be identified. Here we report that a certain class of neurons in the swamp sparrow forebrain displays a precise auditory-vocal correspondence. We show that these neurons respond in a temporally precise fashion to auditory presentation of certain note sequences in this songbird's repertoire and to similar note sequences in other birds' songs. These neurons display nearly identical patterns of activity when the bird sings the same sequence, and disrupting auditory feedback does not alter this singing-related activity, indicating it is motor in nature. Furthermore, these neurons innervate striatal structures important for song learning, raising the possibility that singing-related activity in these cells is compared to auditory feedback to guide vocal learning.

  8. Effects of musical training on the auditory cortex in children.

    Science.gov (United States)

    Trainor, Laurel J; Shahin, Antoine; Roberts, Larry E

    2003-11-01

    Several studies of the effects of musical experience on sound representations in the auditory cortex are reviewed. Auditory evoked potentials are compared in response to pure tones, violin tones, and piano tones in adult musicians versus nonmusicians as well as in 4- to 5-year-old children who have either had or not had extensive musical experience. In addition, the effects of auditory frequency discrimination training in adult nonmusicians on auditory evoked potentials are examined. It was found that the P2-evoked response is larger in both adult and child musicians than in nonmusicians and that auditory training enhances this component in nonmusician adults. The results suggest that the P2 is particularly neuroplastic and that the effects of musical experience can be seen early in development. They also suggest that although the effects of musical training on cortical representations may be greater if training begins in childhood, the adult brain is also open to change. These results are discussed with respect to potential benefits of early musical training as well as potential benefits of musical experience in aging.

  9. Functional sex differences in human primary auditory cortex

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W. J.; Willemsen, Antoon T. M.

    2007-01-01

    Background We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a

  10. Modulation frequency as a cue for auditory speed perception.

    Science.gov (United States)

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
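
    The cue described above, a rattling sound whose amplitude-modulation rate covaries with speed, is easy to illustrate with synthetic stimuli. The sketch below amplitude-modulates broadband noise at a rate proportional to a simulated speed; the linear speed-to-AM-frequency mapping and all parameter values are illustrative assumptions, not the stimuli used in the study.

        import numpy as np

        def rattling_sound(speed, fs=44100, dur=1.0, am_per_speed=2.0):
            """Broadband noise amplitude-modulated at a rate proportional to `speed`.

            A higher simulated speed gives a higher AM-frequency, the cue that the
            study found listeners use to judge auditory speed.
            """
            t = np.arange(int(fs * dur)) / fs
            am_freq = am_per_speed * speed                     # faster motion -> faster rattle
            envelope = 0.5 * (1.0 + np.sin(2 * np.pi * am_freq * t))
            noise = np.random.default_rng(0).standard_normal(t.size)
            return envelope * noise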

  11. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Philipp eHoman

    2013-06-01

    Full Text Available Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left superior temporal gyrus (STG) and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  12. Central region morphometry in a child brain; Age and gender ...

    African Journals Online (AJOL)

    Background: Data on central region morphometry of a child brain are important not only because they provide information about the central region anatomy of the brain but also because this information supports the planning of neurosurgical procedures. Objective: In the present study, central region morphometry of a ...

  13. Auditory conflict resolution correlates with medial-lateral frontal theta/alpha phase synchrony.

    Science.gov (United States)

    Huang, Samantha; Rossi, Stephanie; Hämäläinen, Matti; Ahveninen, Jyrki

    2014-01-01

    When multiple persons speak simultaneously, it may be difficult for the listener to direct attention to correct sound objects among conflicting ones. This could occur, for example, in an emergency situation in which one hears conflicting instructions and the loudest, instead of the wisest, voice prevails. Here, we used cortically-constrained oscillatory MEG/EEG estimates to examine how different brain regions, including caudal anterior cingulate (cACC) and dorsolateral prefrontal cortices (DLPFC), work together to resolve these kinds of auditory conflicts. During an auditory flanker interference task, subjects were presented with sound patterns consisting of three different voices, from three different directions (45° left, straight ahead, 45° right), sounding out either the letters "A" or "O". They were asked to discriminate which sound was presented centrally and ignore the flanking distracters that were phonetically either congruent (50%) or incongruent (50%) with the target. Our cortical MEG/EEG oscillatory estimates demonstrated a direct relationship between performance and brain activity, showing that efficient conflict resolution, as measured with reduced conflict-induced RT lags, is predicted by theta/alpha phase coupling between cACC and right lateral frontal cortex regions intersecting the right frontal eye fields (FEF) and DLPFC, as well as by increased pre-stimulus gamma (60-110 Hz) power in the left inferior frontal cortex. Notably, cACC connectivity patterns that correlated with behavioral conflict-resolution measures were found during both the pre-stimulus and the pre-response periods. Our data provide evidence that, instead of being only transiently activated upon conflict detection, cACC is involved in sustained engagement of attentional resources required for effective sound object selection performance.

  14. Auditory Conflict Resolution Correlates with Medial–Lateral Frontal Theta/Alpha Phase Synchrony

    Science.gov (United States)

    Huang, Samantha; Rossi, Stephanie; Hämäläinen, Matti; Ahveninen, Jyrki

    2014-01-01

    When multiple persons speak simultaneously, it may be difficult for the listener to direct attention to correct sound objects among conflicting ones. This could occur, for example, in an emergency situation in which one hears conflicting instructions and the loudest, instead of the wisest, voice prevails. Here, we used cortically-constrained oscillatory MEG/EEG estimates to examine how different brain regions, including caudal anterior cingulate (cACC) and dorsolateral prefrontal cortices (DLPFC), work together to resolve these kinds of auditory conflicts. During an auditory flanker interference task, subjects were presented with sound patterns consisting of three different voices, from three different directions (45° left, straight ahead, 45° right), sounding out either the letters “A” or “O”. They were asked to discriminate which sound was presented centrally and ignore the flanking distracters that were phonetically either congruent (50%) or incongruent (50%) with the target. Our cortical MEG/EEG oscillatory estimates demonstrated a direct relationship between performance and brain activity, showing that efficient conflict resolution, as measured with reduced conflict-induced RT lags, is predicted by theta/alpha phase coupling between cACC and right lateral frontal cortex regions intersecting the right frontal eye fields (FEF) and DLPFC, as well as by increased pre-stimulus gamma (60–110 Hz) power in the left inferior frontal cortex. Notably, cACC connectivity patterns that correlated with behavioral conflict-resolution measures were found during both the pre-stimulus and the pre-response periods. Our data provide evidence that, instead of being only transiently activated upon conflict detection, cACC is involved in sustained engagement of attentional resources required for effective sound object selection performance. PMID:25343503

  15. Auditory conflict resolution correlates with medial-lateral frontal theta/alpha phase synchrony.

    Directory of Open Access Journals (Sweden)

    Samantha Huang

    Full Text Available When multiple persons speak simultaneously, it may be difficult for the listener to direct attention to correct sound objects among conflicting ones. This could occur, for example, in an emergency situation in which one hears conflicting instructions and the loudest, instead of the wisest, voice prevails. Here, we used cortically-constrained oscillatory MEG/EEG estimates to examine how different brain regions, including caudal anterior cingulate (cACC) and dorsolateral prefrontal cortices (DLPFC), work together to resolve these kinds of auditory conflicts. During an auditory flanker interference task, subjects were presented with sound patterns consisting of three different voices, from three different directions (45° left, straight ahead, 45° right), sounding out either the letters "A" or "O". They were asked to discriminate which sound was presented centrally and ignore the flanking distracters that were phonetically either congruent (50%) or incongruent (50%) with the target. Our cortical MEG/EEG oscillatory estimates demonstrated a direct relationship between performance and brain activity, showing that efficient conflict resolution, as measured with reduced conflict-induced RT lags, is predicted by theta/alpha phase coupling between cACC and right lateral frontal cortex regions intersecting the right frontal eye fields (FEF) and DLPFC, as well as by increased pre-stimulus gamma (60-110 Hz) power in the left inferior frontal cortex. Notably, cACC connectivity patterns that correlated with behavioral conflict-resolution measures were found during both the pre-stimulus and the pre-response periods. Our data provide evidence that, instead of being only transiently activated upon conflict detection, cACC is involved in sustained engagement of attentional resources required for effective sound object selection performance.
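
    The medial-lateral frontal theta/alpha phase synchrony reported in the three records above is typically quantified with a phase-locking measure between band-limited signals. The sketch below computes a phase-locking value between two hypothetical single-trial time series using a Hilbert-transform phase estimate; the study's cortically constrained, trial-wise MEG/EEG estimates involve considerably more machinery.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def phase_locking_value(x, y, fs, band=(4.0, 12.0)):
            """Phase-locking value between signals x and y within a theta/alpha band."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            phase_x = np.angle(hilbert(filtfilt(b, a, x)))
            phase_y = np.angle(hilbert(filtfilt(b, a, y)))
            # magnitude of the mean phase-difference vector: 1 = perfect locking, 0 = none
            return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))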

  16. Dual Gamma Rhythm Generators Control Interlaminar Synchrony in Auditory Cortex

    Science.gov (United States)

    Ainsworth, Matthew; Lee, Shane; Cunningham, Mark O.; Roopun, Anita K.; Traub, Roger D.; Kopell, Nancy J.; Whittington, Miles A.

    2013-01-01

    Rhythmic activity in populations of cortical neurons accompanies, and may underlie, many aspects of primary sensory processing and short-term memory. Activity in the gamma band (30 Hz up to > 100 Hz) is associated with such cognitive tasks and is thought to provide a substrate for temporal coupling of spatially separate regions of the brain. However, such coupling requires close matching of frequencies in co-active areas, and because the nominal gamma band is so spectrally broad, it may not constitute a single underlying process. Here we show that, for inhibition-based gamma rhythms in vitro in rat neocortical slices, mechanistically distinct local circuit generators exist in different laminae of rat primary auditory cortex. A persistent, 30 – 45 Hz, gap-junction-dependent gamma rhythm dominates rhythmic activity in supragranular layers 2/3, whereas a tonic depolarization-dependent, 50 – 80 Hz, pyramidal/interneuron gamma rhythm is expressed in granular layer 4 with strong glutamatergic excitation. As a consequence, altering the degree of excitation of the auditory cortex causes bifurcation in the gamma frequency spectrum and can effectively switch temporal control of layer 5 from supragranular to granular layers. Computational modeling predicts the pattern of interlaminar connections may help to stabilize this bifurcation. The data suggest that different strategies are used by primary auditory cortex to represent weak and strong inputs, with principal cell firing rate becoming increasingly important as excitation strength increases. PMID:22114273

  17. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    by measuring regional cerebral blood flow (CBF) after intracarotid Xenon-133 injection are reviewed, with emphasis on tests involving auditory perception and speech, an approach allowing visualization of Wernicke's and Broca's areas and their contralateral homologues in vivo. The completely atraumatic tomographic CBF...

  18. Carnosine reverses the aging-induced down regulation of brain regional serotonergic system.

    Science.gov (United States)

    Banerjee, Soumyabrata; Ghosh, Tushar K; Poddar, Mrinal K

    2015-12-01

    The purpose of the present investigation was to study the role of carnosine, an endogenous dipeptide biomolecule, in the brain regional (cerebral cortex, hippocampus, hypothalamus and pons-medulla) serotonergic system during aging. Results showed an aging-induced, brain-region-specific, significant (a) increase in the steady-state levels of Trp (except in the cerebral cortex) and 5-HIAA, together with an increase in 5-HIAA accumulation and declination, and (b) decrease in both the 5-HT steady-state level and 5-HT accumulation (except in the cerebral cortex). A significant decrease in the brain regional 5-HT/Trp ratio (except in the cerebral cortex) and an increase in the 5-HIAA/5-HT ratio were also observed during aging. Carnosine at lower dosages (0.5-1.0 μg/kg/day, i.t. for 21 consecutive days) did not produce any significant response in any of the brain regions, but higher dosages (2.0-2.5 μg/kg/day, i.t. for 21 consecutive days) showed a significant response on these aging-induced brain regional serotonergic parameters. Treatment with carnosine (2.0 μg/kg/day, i.t. for 21 consecutive days) attenuated these aging-induced changes in brain regional serotonergic parameters and restored them toward the basal levels observed in 4-month-old young control rats. These results suggest that carnosine attenuates the aging-induced down-regulation of the brain regional serotonergic system and restores it toward that observed in the brain regions of young rats. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Infants' brain responses to speech suggest analysis by synthesis.

    Science.gov (United States)

    Kuhl, Patricia K; Ramírez, Rey R; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-08-05

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Experiment 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.

  20. Children's Performance on Pseudoword Repetition Depends on Auditory Trace Quality: Evidence from Event-Related Potentials.

    Science.gov (United States)

    Ceponiene, Rita; Service, Elisabet; Kurjenluoma, Sanna; Cheour, Marie; Naatanen, Risto

    1999-01-01

    Compared the mismatch-negativity (MMN) component of auditory event-related brain potentials to explore the relationship between phonological short-term memory and auditory-sensory processing in 7- to 9-year-olds scoring the highest and lowest on a pseudoword repetition test. Found that high and low repeaters differed in MMN amplitude to speech…

  1. Regional homogeneity changes in prelingually deafened patients: a resting-state fMRI study

    Science.gov (United States)

    Li, Wenjing; He, Huiguang; Xian, Junfang; Lv, Bin; Li, Meng; Li, Yong; Liu, Zhaohui; Wang, Zhenchang

    2010-03-01

    Resting-state functional magnetic resonance imaging (fMRI) is a technique that measures the intrinsic function of the brain and has some advantages over task-induced fMRI. Regional homogeneity (ReHo) assesses the similarity of the time series of a given voxel with its nearest neighbors on a voxel-by-voxel basis, which reflects the temporal homogeneity of the regional BOLD signal. In the present study, we used resting-state fMRI data to investigate ReHo changes across the whole brain in prelingually deafened patients relative to normal controls. 18 deaf patients and 22 healthy subjects were scanned. Kendall's coefficient of concordance (KCC) was calculated to measure the degree of regional coherence of fMRI time courses. We found that regional coherence significantly decreased in the left frontal lobe, bilateral temporal lobes and right thalamus, and increased in the postcentral gyrus, cingulate gyrus, left temporal lobe, left thalamus and cerebellum in deaf patients compared with controls. These results show that prelingually deafened patients have a higher degree of regional coherence in the paleocortex and a lower degree in the neocortex. Since the neocortex plays an important role in the development of audition, this evidence may suggest that deaf persons reorganize the paleocortex to offset the loss of auditory input.
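
    The ReHo measure described above, Kendall's coefficient of concordance over a voxel and its nearest neighbors, can be written compactly as below. The function assumes a hypothetical (n_voxels, n_timepoints) array holding the time series of one voxel and its neighborhood, and omits the tie correction sometimes applied to KCC.

        import numpy as np
        from scipy.stats import rankdata

        def kendalls_w(ts):
            """Kendall's coefficient of concordance (KCC) for a voxel neighborhood.

            ts : array of shape (n_voxels, n_timepoints); returns a value in [0, 1],
                 where 1 means the time series are perfectly rank-concordant.
            """
            m, n = ts.shape                               # m voxels act as "raters", n time points
            ranks = np.apply_along_axis(rankdata, 1, ts)  # rank each voxel's time series over time
            r_sum = ranks.sum(axis=0)                     # summed ranks at each time point
            s = np.sum((r_sum - r_sum.mean()) ** 2)
            return 12.0 * s / (m ** 2 * (n ** 3 - n))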

  2. Neural biomarkers for dyslexia, ADHD and ADD in the auditory cortex of children

    OpenAIRE

    Bettina Serrallach; Christine Gross; Valdis Bernhofs; Dorte Engelmann; Jan Benner; Jan Benner; Nadine Gündert; Maria Blatow; Martina Wengenroth; Angelika Seitz; Monika Brunner; Stefan Seither; Stefan Seither; Richard Parncutt; Peter Schneider

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N=147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an ...

  3. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    Goycoolea, Marcos; Mena, Ismael; Neubauer, Sonia

    2004-01-01

    Objectives. 1. To determine which areas of the cerebral cortex are activated by stimulating the left ear with pure tones, and what type of stimulation (e.g. excitatory or inhibitory) occurs in these different areas. 2. To use this information as an initial step to develop a normal functional database for future studies. 3. To try to determine whether there is a biological substrate to the process of recalling previous auditory perceptions and, if possible, suggest a locus for auditory memory. Method. Brain perfusion single photon emission computerized tomography (SPECT) evaluation was conducted: 1-2) Using auditory stimulation with pure tones in 4 volunteers with normal hearing. 3) In a patient with bilateral profound hearing loss who had auditory perception of previous musical experiences, and who was injected with Tc-99m HMPAO while she was having the sensation of hearing a well-known melody. Results. Both in the patient with auditory hallucinations and in the normal controls stimulated with pure tones, there was a statistically significant increase in perfusion in Brodmann's area 39, more intense on the right side (right to left p < 0.05). With lesser intensity there was activation in the adjacent area 40, and there was intense activation also in the executive frontal cortex areas 6, 8, 9, and 10 of Brodmann. There was also activation of area 7 of Brodmann, an audio-visual association area, more marked on the right side in the patient and in the normal stimulated controls. In the subcortical structures there was also marked activation in the patient with hallucinations in both lentiform nuclei, thalamus and caudate nuclei, again more intense in the right hemisphere (5, 4.7 and 4.2 S.D. above the mean, respectively) and 5, 3.3, and 3 S.D. above the normal mean in the left hemisphere, respectively. Similar findings were observed in normal controls. Conclusions. After auditory stimulation with pure tones in the left ear of normal female volunteers, there is bilateral activation of area 39

  4. Topography of sound level representation in the FM sweep selective region of the pallid bat auditory cortex.

    Science.gov (United States)

    Measor, Kevin; Yarrow, Stuart; Razak, Khaleel A

    2018-05-26

    Sound level processing is a fundamental function of the auditory system. To determine how the cortex represents sound level, it is important to quantify how changes in level alter the spatiotemporal structure of cortical ensemble activity. This is particularly true for echolocating bats that have control over, and often rapidly adjust, call level to actively change echo level. To understand how cortical activity may change with sound level, here we mapped response rate and latency changes with sound level in the auditory cortex of the pallid bat. The pallid bat uses a 60-30 kHz downward frequency modulated (FM) sweep for echolocation. Neurons tuned to frequencies between 30 and 70 kHz in the auditory cortex are selective for the properties of FM sweeps used in echolocation forming the FM sweep selective region (FMSR). The FMSR is strongly selective for sound level between 30 and 50 dB SPL. Here we mapped the topography of level selectivity in the FMSR using downward FM sweeps and show that neurons with more monotonic rate level functions are located in caudomedial regions of the FMSR overlapping with high frequency (50-60 kHz) neurons. Non-monotonic neurons dominate the FMSR, and are distributed across the entire region, but there is no evidence for amplitopy. We also examined how first spike latency of FMSR neurons change with sound level. The majority of FMSR neurons exhibit paradoxical latency shift wherein the latency increases with sound level. Moreover, neurons with paradoxical latency shifts are more strongly level selective and are tuned to lower sound level than neurons in which latencies decrease with level. These data indicate a clustered arrangement of neurons according to monotonicity, with no strong evidence for finer scale topography, in the FMSR. The latency analysis suggests mechanisms for strong level selectivity that is based on relative timing of excitatory and inhibitory inputs. Taken together, these data suggest how the spatiotemporal

  5. rTMS Induced Tinnitus Relief Is Related to an Increase in Auditory Cortical Alpha Activity

    Science.gov (United States)

    Müller, Nadia; Lorenz, Isabel; Langguth, Berthold; Weisz, Nathan

    2013-01-01

    Chronic tinnitus, the continuous perception of a phantom sound, is a highly prevalent audiological symptom. A promising approach for the treatment of tinnitus is repetitive transcranial magnetic stimulation (rTMS) as this directly affects tinnitus-related brain activity. Several studies indeed show tinnitus relief after rTMS, however effects are moderate and vary strongly across patients. This may be due to a lack of knowledge regarding how rTMS affects oscillatory activity in tinnitus sufferers and which modulations are associated with tinnitus relief. In the present study we examined the effects of five different stimulation protocols (including sham) by measuring tinnitus loudness and tinnitus-related brain activity with Magnetoencephalography before and after rTMS. Changes in oscillatory activity were analysed for the stimulated auditory cortex as well as for the entire brain regarding certain frequency bands of interest (delta, theta, alpha, gamma). In line with the literature the effects of rTMS on tinnitus loudness varied strongly across patients. This variability was also reflected in the rTMS effects on oscillatory activity. Importantly, strong reductions in tinnitus loudness were associated with increases in alpha power in the stimulated auditory cortex, while an unspecific decrease in gamma and alpha power, particularly in left frontal regions, was linked to an increase in tinnitus loudness. The identification of alpha power increase as main correlate for tinnitus reduction sheds further light on the pathophysiology of tinnitus. This will hopefully stimulate the development of more effective therapy approaches. PMID:23390539
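
    The key outcome above, a change in auditory cortical alpha power accompanying tinnitus relief, is in essence a band-power comparison before and after stimulation. A minimal sketch using Welch's method on a hypothetical sensor or source time series is shown below; the band edges and parameters are illustrative, not the study's analysis settings.

        import numpy as np
        from scipy.signal import welch

        def alpha_power(signal, fs, band=(8.0, 12.0)):
            """Integrated alpha-band power of a single time series, via Welch's method."""
            freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return np.trapz(psd[mask], freqs[mask])

        # A reduction in tinnitus loudness after rTMS would be expected to go along with
        # alpha_power(post_stim, fs) > alpha_power(pre_stim, fs) over auditory cortex.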

  6. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration.

    Science.gov (United States)

    Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo

    2015-05-01

    The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although the brain network is supposed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception with counterpart irrelevant noise. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework through hierarchical clustering, such as single linkage distance, to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. For speech perception, bimodal (audio-visual speech cue) or unimodal speech cues with counterpart irrelevant noise (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception by congruent audiovisual stimuli, tighter coupling of the left anterior temporal gyrus-anterior insula component and of the right premotor-visual component was observed than in the auditory or visual speech cue conditions, respectively. Interestingly, visual speech perceived under white noise was characterized by tight negative coupling among the left inferior frontal region, right anterior cingulate, left anterior insula, and bilateral visual regions, including right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, and can reflect efficient or effortful processes during natural audiovisual integration or lip-reading, respectively, in speech perception.
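
    The network-filtration idea above can be sketched compactly: treat one minus correlation as a distance, build a single-linkage hierarchy, and track how connected components merge as the threshold grows (the zeroth-dimensional persistent homology of the network). The inputs below are hypothetical region-wise time series, not the study's fMRI pipeline.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        def filtration_components(ts, thresholds):
            """ts: array of shape (n_regions, n_samples).
            Returns the number of connected components at each distance threshold
            of the single-linkage filtration of the correlation-based network."""
            dist = 1.0 - np.corrcoef(ts)                     # correlation -> distance
            np.fill_diagonal(dist, 0.0)
            z = linkage(squareform(dist, checks=False), method="single")
            return [int(np.unique(fcluster(z, t, criterion="distance")).size)
                    for t in thresholds]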

  7. Visual attention modulates brain activation to angry voices.

    Science.gov (United States)

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  8. Regional distribution of serotonin transporter protein in postmortem human brain

    International Nuclear Information System (INIS)

    Kish, Stephen J.; Furukawa, Yoshiaki; Chang Lijan; Tong Junchao; Ginovart, Nathalie; Wilson, Alan; Houle, Sylvain; Meyer, Jeffrey H.

    2005-01-01

    Introduction: The primary approach in assessing the status of brain serotonin neurons in human conditions such as major depression and exposure to the illicit drug ecstasy has been the use of neuroimaging procedures involving radiotracers that bind to the serotonin transporter (SERT). However, there has been no consistency in the selection of a 'SERT-free' reference region for the estimation of free and nonspecific binding, as occipital cortex, cerebellum and white matter have all been employed. Objective and Methods: To identify areas of human brain that might have very low SERT levels, we measured, by a semiquantitative Western blotting procedure, SERT protein immunoreactivity throughout the postmortem brain of seven normal adult subjects. Results: Serotonin transporter could be quantitated in all examined brain areas. However, the SERT concentration in cerebellar cortex and white matter were only at trace values, being approximately 20% of average cerebral cortex and 5% of average striatum values. Conclusion: Although none of the examined brain areas are completely free of SERT, human cerebellar cortex has low SERT binding as compared to other examined brain regions, with the exception of white matter. Since the cerebellar cortical SERT binding is not zero, this region will not be a suitable reference region for SERT radioligands with very low free and nonspecific binding. For SERT radioligands with reasonably high free and nonspecific binding, the cerebellar cortex should be a useful reference region, provided other necessary radioligand assumptions are met

  9. Regional distribution of serotonin transporter protein in postmortem human brain

    Energy Technology Data Exchange (ETDEWEB)

    Kish, Stephen J. [Human Neurochemical Pathology Laboratory, Centre for Addiction and Mental Health, Toronto, ON, M5T 1R8 (Canada)]. E-mail: Stephen_Kish@CAMH.net; Furukawa, Yoshiaki [Human Neurochemical Pathology Laboratory, Centre for Addiction and Mental Health, Toronto, ON, M5T 1R8 (Canada); Chang Lijan [Human Neurochemical Pathology Laboratory, Centre for Addiction and Mental Health, Toronto, ON, M5T 1R8 (Canada); Tong Junchao [Human Neurochemical Pathology Laboratory, Centre for Addiction and Mental Health, Toronto, ON, M5T 1R8 (Canada); Ginovart, Nathalie [PET Centre, Centre for Addiction and Mental Health, Toronto, ON, M5T 1R8 (Canada); Wilson, Alan [PET Centre, Centre for Addiction and Mental Health, Toronto, ON, M5T 1R8 (Canada); Houle, Sylvain [PET Centre, Centre for Addiction and Mental Health, Toronto, ON, M5T 1R8 (Canada); Meyer, Jeffrey H. [PET Centre, Centre for Addiction and Mental Health, Toronto, ON, M5T 1R8 (Canada)

    2005-02-01

    Introduction: The primary approach in assessing the status of brain serotonin neurons in human conditions such as major depression and exposure to the illicit drug ecstasy has been the use of neuroimaging procedures involving radiotracers that bind to the serotonin transporter (SERT). However, there has been no consistency in the selection of a 'SERT-free' reference region for the estimation of free and nonspecific binding, as occipital cortex, cerebellum and white matter have all been employed. Objective and Methods: To identify areas of human brain that might have very low SERT levels, we measured, by a semiquantitative Western blotting procedure, SERT protein immunoreactivity throughout the postmortem brain of seven normal adult subjects. Results: Serotonin transporter could be quantitated in all examined brain areas. However, the SERT concentrations in cerebellar cortex and white matter were only at trace values, at approximately 20% of average cerebral cortex and 5% of average striatum values. Conclusion: Although none of the examined brain areas are completely free of SERT, human cerebellar cortex has low SERT binding as compared to other examined brain regions, with the exception of white matter. Since the cerebellar cortical SERT binding is not zero, this region will not be a suitable reference region for SERT radioligands with very low free and nonspecific binding. For SERT radioligands with reasonably high free and nonspecific binding, the cerebellar cortex should be a useful reference region, provided other necessary radioligand assumptions are met.

  10. Listening to humans walking together activates the social brain circuitry.

    Science.gov (United States)

    Saarela, Miiamaaria V; Hari, Riitta

    2008-01-01

    Human footsteps carry a vast amount of social information, which is often unconsciously noted. Using functional magnetic resonance imaging, we analyzed brain networks activated by footstep sounds of one or two persons walking. Listening to two persons walking together activated brain areas previously associated with affective states and social interaction, such as the subcallosal gyrus bilaterally, the right temporal pole, and the right amygdala. These areas seem to be involved in the analysis of persons' identity and complex social stimuli on the basis of auditory cues. Single footsteps activated only the biological motion area in the posterior STS region. Thus, hearing two persons walking together involved a more widespread brain network than did hearing footsteps from a single person.

  11. Acoustic Trauma Changes the Parvalbumin-Positive Neurons in Rat Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Congli Liu

    2018-01-01

    Full Text Available Acoustic trauma has been reported to damage the auditory periphery and central auditory system, and compromised cortical inhibition is implicated in auditory disorders such as hyperacusis and tinnitus. Parvalbumin-containing neurons (PV neurons), a subset of GABAergic neurons, greatly shape and synchronize neural network activities. However, how PV neurons change following acoustic trauma remains to be elucidated. The present study investigated how auditory cortical PV neurons change following unilateral 1-hour noise exposure (left ear, one-octave band noise centered at 16 kHz, 116 dB SPL). Noise exposure elevated the auditory brainstem response threshold of the exposed ear when examined 7 days later. More detectable PV neurons were observed in both sides of the auditory cortex of noise-exposed rats compared with controls. The detectable PV neurons of the left auditory cortex (ipsilateral to the exposed ear) outnumbered those of the right auditory cortex (contralateral to the exposed ear). Quantification of Western blotted bands revealed a higher expression level of PV protein in the left cortex. These findings of more active PV neurons in noise-exposed rats suggest that a compensatory mechanism might be initiated to maintain a stable state of the brain.

  12. Intracerebral neural stem cell transplantation improved the auditory of mice with presbycusis.

    Science.gov (United States)

    Ren, Hongmiao; Chen, Jichuan; Wang, Yinan; Zhang, Shichang; Zhang, Bo

    2013-01-01

    Stem cell-based regenerative therapy is a potential cellular therapeutic strategy for patients with incurable brain diseases. Embryonic neural stem cells (NSCs) represent an attractive cell source for regenerative medicine strategies aimed at treating diseased brains. Here, we assess the capability of intracerebral embryonic NSC transplantation in C57BL/6J mice with presbycusis in vivo. Morphological analyses revealed that the rate of neuronal apoptosis was lower in the aged group (10 months of age) but not in the young group (2 months of age) after NSC transplantation, while the electrophysiological data showed that the auditory brainstem response (ABR) threshold was significantly decreased in the aged group at 2 and 3 weeks after transplantation. By contrast, there was no difference in the aged group at 4 weeks post-transplantation or in the young group at any time post-transplantation. Furthermore, immunofluorescence experiments showed that NSCs differentiated into neurons that engrafted and migrated within the brain, even to sites of lesions. Together, our results demonstrate that NSC transplantation improves the auditory function of C57BL/6J mice with presbycusis.

  13. Resource allocation models of auditory working memory.

    Science.gov (United States)

    Joseph, Sabine; Teki, Sundeep; Kumar, Sukhbinder; Husain, Masud; Griffiths, Timothy D

    2016-06-01

    Auditory working memory (WM) is the cognitive faculty that allows us to actively hold and manipulate sounds in mind over short periods of time. We develop here a particular perspective on WM for non-verbal, auditory objects as well as for time based on the consideration of possible parallels to visual WM. In vision, there has been a vigorous debate on whether WM capacity is limited to a fixed number of items or whether it represents a limited resource that can be allocated flexibly across items. Resource allocation models predict that the precision with which an item is represented decreases as a function of total number of items maintained in WM because a limited resource is shared among stored objects. We consider here auditory work on sequentially presented objects of different pitch as well as time intervals from the perspective of dynamic resource allocation. We consider whether the working memory resource might be determined by perceptual features such as pitch or timbre, or bound objects comprising multiple features, and we speculate on brain substrates for these behavioural models. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2016 Elsevier B.V. All rights reserved.
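    The core prediction of the resource-allocation account discussed above can be made concrete with a toy calculation. In the minimal sketch below, a fixed encoding resource is shared equally across the items held in auditory working memory, so per-item precision falls and recall-error spread grows with set size, whereas a simple slot model keeps precision fixed up to an item limit; the resource and slot values are arbitrary assumptions chosen purely for illustration.

```python
import numpy as np

def recall_error_sd(n_items, total_resource=10.0, slots=4, model="resource"):
    """Toy predictions for the spread of recall error (e.g., of a remembered pitch).

    resource: a fixed resource is divided among items, so precision (1/variance)
              per item falls as set size grows.
    slots:    precision is constant up to the slot limit; items beyond it are not
              stored (error is effectively unbounded, i.e., guessing).
    """
    if model == "resource":
        precision = total_resource / n_items        # equal sharing across items
        return 1.0 / np.sqrt(precision)
    if model == "slots":
        if n_items <= slots:
            return 1.0 / np.sqrt(total_resource / slots)
        return np.inf                               # unstored item -> guessing
    raise ValueError("model must be 'resource' or 'slots'")

for n in (1, 2, 4, 8):
    print(n, "items:",
          "resource SD = %.2f" % recall_error_sd(n, model="resource"),
          "| slot SD =", recall_error_sd(n, model="slots"))
```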

  14. The brain stem function in patients with brain bladder

    International Nuclear Information System (INIS)

    Takahashi, Toshihiro

    1990-01-01

    A syndrome of detrusor-sphincter dyssynergia (DSD) is occasionally found in patients with brain bladder. To evaluate brain stem function in cases of brain bladder, urodynamic study, dynamic CT scanning of the brain stem (DCT) and auditory brainstem response (ABR) were performed. The region of interest for DCT was the posterolateral portion of the pons. The results were analysed with respect to the presence of DSD on urodynamic study. DCT studies were performed in 13 cases with various brain diseases and 5 control cases without neurological disease. Abnormal patterns of the time-density curve consisted of a low peak value, prolongation of filling time and a low rapid-washout ratio (low clearance ratio) of the contrast medium. Four of 6 cases with DSD showed at least one of the abnormal patterns of the time-density curve bilaterally. Of the 7 cases without DSD, none showed bilateral abnormality of the curve, and in 2 of the 7 only unilateral abnormality was found. ABR was performed in 8 patients with brain diseases. The interpeak latency of waves I-V (I-V IPL) was considered to be prolonged in 2 cases with DSD compared with that of 4 without DSD. In 2 cases with DSD who had normal DCT findings, measurement of the I-V IPL was impossible due to an abnormal pattern of the ABR waveform. These results suggest the presence of a functional disturbance of the posterolateral portion of the pons in cases of brain bladder with DSD. (author)

  15. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  16. Sensitivity of human auditory cortex to rapid frequency modulation revealed by multivariate representational similarity analysis.

    Science.gov (United States)

    Joanisse, Marc F; DeSouza, Diedre D

    2014-01-01

    Functional Magnetic Resonance Imaging (fMRI) was used to investigate the extent, magnitude, and pattern of brain activity in response to rapid frequency-modulated sounds. We examined this by manipulating the direction (rise vs. fall) and the rate (fast vs. slow) of the apparent pitch of iterated rippled noise (IRN) bursts. Acoustic parameters were selected to capture features used in phoneme contrasts; however, the stimuli themselves were not perceived as speech per se. Participants were scanned as they passively listened to sounds in an event-related paradigm. Univariate analyses revealed a greater level and extent of activation in bilateral auditory cortex in response to frequency-modulated sweeps compared to steady-state sounds. This effect was stronger in the left hemisphere. However, no regions showed selectivity for either rate or direction of frequency modulation. In contrast, multivoxel pattern analysis (MVPA) revealed feature-specific encoding for direction of modulation in auditory cortex bilaterally. Moreover, this effect was strongest when analyses were restricted to anatomical regions lying outside Heschl's gyrus. We found no support for feature-specific encoding of frequency modulation rate. Differential findings of modulation rate and direction of modulation are discussed with respect to their relevance to phonetic discrimination.
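    The multivoxel pattern analysis reported above amounts to training a classifier on the spatial pattern of responses within an auditory region and testing whether it decodes sweep direction better than chance. The sketch below illustrates that logic on synthetic data with scikit-learn; the trial counts, voxel counts, and effect size are invented for demonstration and do not reproduce the study's pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for single-trial response patterns in an auditory ROI:
# 80 trials x 200 voxels, with a weak multivariate difference between
# rising (label 0) and falling (label 1) frequency sweeps.
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.4          # small direction-specific signal

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
scores = cross_val_score(clf, patterns, labels, cv=5)   # 5-fold cross-validation
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```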

  17. Analog and digital filtering of the brain stem auditory evoked response.

    Science.gov (United States)

    Kavanagh, K T; Franks, R

    1989-07-01

    This study compared the filtering effects on the auditory evoked potential of zero and standard phase shift digital filters (the former was a mathematical approximation of a standard Butterworth filter). Conventional filters were found to decrease the height of the evoked response in the majority of waveforms compared to zero phase shift filters. A 36-dB/octave zero phase shift high pass filter with a cutoff frequency of 100 Hz produced a 16% reduction in wave amplitude compared to the unfiltered control. A 36-dB/octave, 100-Hz standard phase shift high pass filter produced a 41% reduction, and a 12-dB/octave, 150-Hz standard phase shift high pass filter produced a 38% reduction in wave amplitude compared to the unfiltered control. A decrease in the mean along with an increase in the variability of wave IV/V latency was also noted with conventional compared to zero phase shift filters. The increase in the variability of the latency measurement was due to the difficulty in waveform identification caused by the phase shift distortion of the conventional filter along with the variable decrease in wave latency caused by phase shifting responses with different spectral content. Our results indicated that a zero phase shift high pass filter of 100 Hz was the most desirable filter studied for the mitigation of spontaneous brain activity and random muscle artifact.
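    The contrast drawn above between conventional and zero phase shift filtering is easy to demonstrate in software: applying the same high-pass filter forward and backward cancels the phase shift that otherwise distorts peak amplitude and latency. The sketch below is only an illustration on a toy waveform; the sampling rate, filter order, and waveform shape are assumptions, not the study's recording parameters.

```python
import numpy as np
from scipy import signal

fs = 20000.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 0.010, 1 / fs)                 # 10 ms epoch
abr = np.exp(-((t - 0.0056) / 0.0004) ** 2)     # toy wave V-like peak at 5.6 ms

# 100 Hz Butterworth high pass (order 6, ~36 dB/octave). sosfiltfilt runs the
# filter forward and backward, cancelling the phase shift; sosfilt is the
# conventional causal, phase-shifting application.
sos = signal.butter(6, 100, btype="highpass", fs=fs, output="sos")
causal = signal.sosfilt(sos, abr)               # standard phase-shift filtering
zero_phase = signal.sosfiltfilt(sos, abr)       # zero-phase filtering

for name, y in (("causal", causal), ("zero-phase", zero_phase)):
    print("%-10s peak %.3f at %.2f ms" % (name, y.max(), 1000 * t[np.argmax(y)]))
```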

  18. Study of tonotopic brain changes with functional MRI and FDG-PET in a patient with unilateral objective cochlear tinnitus.

    Science.gov (United States)

    Guinchard, A-C; Ghazaleh, Naghmeh; Saenz, M; Fornari, E; Prior, J O; Maeder, P; Adib, S; Maire, R

    2016-11-01

    We studied possible brain changes with functional MRI (fMRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) in a patient with a rare, high-intensity "objective tinnitus" (high-level SOAEs) of 10 years' duration in the left ear, with no associated hearing loss. This is the first case of objective cochlear tinnitus to be investigated with functional neuroimaging. The objective cochlear tinnitus was measured with spontaneous otoacoustic emission (SOAE) equipment (frequency 9689 Hz, intensity 57 dB SPL) and was clearly audible to anyone standing near the patient. Functional modifications in primary auditory areas and other brain regions were evaluated using 3T and 7T fMRI and FDG-PET. In the fMRI evaluations, a saturation of the auditory cortex at the tinnitus frequency was observed, but the global cortical tonotopic organization remained intact when compared to fMRI results from healthy subjects. The FDG-PET showed no evidence of an increase or decrease of activity in the auditory cortices or in the limbic system as compared to normal subjects. In this patient with high-intensity objective cochlear tinnitus, fMRI and FDG-PET showed no significant brain reorganization in auditory areas or the limbic system of the kind reported in the literature for patients with chronic subjective tinnitus. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging.

    Science.gov (United States)

    Hugdahl, Kenneth

    2017-12-01

    In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing "hearing voices". An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in the emotional

  20. Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Kenneth Hugdahl

    2017-12-01

    Full Text Available In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing "hearing voices". An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in

  1. Multiple determinants of whole and regional brain volume among terrestrial carnivorans.

    Directory of Open Access Journals (Sweden)

    Eli M Swanson

    Full Text Available Mammalian brain volumes vary considerably, even after controlling for body size. Although several hypotheses have been proposed to explain this variation, most research in mammals on the evolution of encephalization has focused on primates, leaving the generality of these explanations uncertain. Furthermore, much research still addresses only one hypothesis at a time, despite the demonstrated importance of considering multiple factors simultaneously. We used phylogenetic comparative methods to investigate simultaneously the importance of several factors previously hypothesized to be important in neural evolution among mammalian carnivores, including social complexity, forelimb use, home range size, diet, life history, phylogeny, and recent evolutionary changes in body size. We also tested hypotheses suggesting roles for these variables in determining the relative volume of four brain regions measured using computed tomography. Our data suggest that, in contrast to brain size in primates, carnivoran brain size may lag behind body size over evolutionary time. Moreover, carnivore species that primarily consume vertebrates have the largest brains. Although we found no support for a role of social complexity in overall encephalization, relative cerebrum volume correlated positively with sociality. Finally, our results support negative relationships among different brain regions after accounting for overall endocranial volume, suggesting that increased size of one brain region is often accompanied by reduced size in other regions rather than overall brain expansion.

  2. Congenital amusia persists in the developing brain after daily music listening.

    Directory of Open Access Journals (Sweden)

    Geneviève Mignault Goulet

    Full Text Available Congenital amusia is a neurodevelopmental disorder that affects about 3% of the adult population. Adults experiencing this musical disorder in the absence of macroscopically visible brain injury are described as cases of congenital amusia under the assumption that the musical deficits have been present from birth. Here, we show that this disorder can be expressed in the developing brain. We found that 10- to 13-year-old children exhibit a marked deficit in the detection of fine-grained pitch differences in both musical and acoustical contexts in comparison to their normally developing peers comparable in age and general intelligence. This behavioral deficit could be traced to their abnormal P300 brain responses to the detection of subtle pitch changes. The altered pattern of electrical activity does not seem to arise from anomalous functioning of the auditory cortex, because all early components of the brain potentials, the N100, the MMN, and the P200, appear normal. Rather, the brain and behavioral measures point to disrupted information propagation from the auditory cortex to other cortical regions. Furthermore, the behavioral and neural manifestations of the disorder remained unchanged after 4 weeks of daily music listening. These results show that congenital amusia can be detected in childhood despite regular musical exposure and normal intellectual functioning.

  3. Regional brain distribution of toluene in rats and in a human autopsy

    Energy Technology Data Exchange (ETDEWEB)

    Ameno, Kiyoshi; Kiriu, Takahiro; Fuke, Chiaki; Ameno, Setsuko; Shinohara, Toyohiko; Ijiri, Iwao (Kagawa Medical School (Japan). Dept. of Forensic Medicine)

    1992-02-01

    Toluene concentrations in 9 brain regions of acutely exposed rats and in 11 brain regions of a human case who inhaled toluene prior to death are described. After exposure to toluene by inhalation (2000 or 10 000 ppm) for 0.5 h or by oral dosing (400 mg/kg), rats were killed by decapitation 0.5 and 4 h after onset of inhalation and 2 and 10 h after oral ingestion. Under each experimental condition, the highest range of brain region/blood toluene concentration ratios (BBCR) was in the brain stem regions (2.85-3.22), such as the pons and medulla oblongata, the middle range (1.77-2.12) in the midbrain, thalamus, caudate-putamen, hypothalamus and cerebellum, and the lowest range (1.22-1.64) in the hippocampus and cerebral cortex. These distribution patterns were quite constant. Toluene concentrations in the various brain regions were unevenly distributed and directly related to blood levels. In the human case, who had inhaled toluene vapor, the distribution among brain regions was relatively similar to that in rats, the highest concentration ratio being in the corpus callosum (BBCR: 2.66) and the lowest in the hippocampus (BBCR: 1.47). (orig.)

  4. AUTOMATED CLASSIFICATION AND SEGREGATION OF BRAIN MRI IMAGES INTO IMAGES CAPTURED WITH RESPECT TO VENTRICULAR REGION AND EYE-BALL REGION

    Directory of Open Access Journals (Sweden)

    C. Arunkumar

    2014-05-01

    Full Text Available Magnetic Resonance Imaging (MRI) images of the brain are used for the detection of various brain diseases, including tumors. In such cases, classification of MRI images captured with respect to the ventricular and eye-ball regions helps in the automated localization and classification of such diseases. The methods employed in the paper segregate the given brain MRI images into those captured with respect to the ventricular region and those captured with respect to the eye-ball region. First, the given brain MRI image is segmented using the Particle Swarm Optimization (PSO) algorithm, an optimized algorithm for MRI image segmentation. The algorithm proposed in the paper is then applied to the segmented image. It detects whether the image contains a ventricular region or an eye-ball region and classifies it accordingly.

  5. Early access to lexical-level phonological representations of Mandarin word-forms : evidence from auditory N1 habituation

    NARCIS (Netherlands)

    Yue, Jinxing; Alter, Kai; Howard, David; Bastiaanse, Roelien

    2017-01-01

    An auditory habituation design was used to investigate whether lexical-level phonological representations in the brain can be rapidly accessed after the onset of a spoken word. We studied the N1 component of the auditory event-related electrical potential, and measured the amplitude decrements of N1

  6. Effect of background music on auditory-verbal memory performance

    Directory of Open Access Journals (Sweden)

    Sona Matloubi

    2014-12-01

    Full Text Available Background and Aim: Music exists in all cultures; many scientists are seeking to understand how music affects cognitive development, such as comprehension, memory, and reading skills. More recently, a considerable number of neuroscience studies on music have been conducted. This study aimed to investigate the effects of null and positive background music, in comparison with silence, on auditory-verbal memory performance. Methods: Forty young adults (male and female) with normal hearing, aged between 18 and 26, participated in this comparative-analysis study. An auditory and speech evaluation was conducted in order to investigate the effects of background music on working memory. Subsequently, the Rey auditory-verbal learning test was performed in three conditions: silence, positive music, and null music. Results: The mean score of the Rey auditory-verbal learning test in the silence condition was higher than in the positive music condition (p=0.003) and the null music condition (p=0.01). The test results did not reveal any gender differences. Conclusion: It seems that the presence of competing music (positive and null music) and the orientation of auditory attention have negative effects on the performance of verbal working memory, possibly owing to the interference of music with verbal information processing in the brain.

  7. An auditory multiclass brain-computer interface with natural stimuli: Usability evaluation with healthy participants and a motor impaired end user.

    Science.gov (United States)

    Simon, Nadine; Käthner, Ivo; Ruf, Carolin A; Pasqualotto, Emanuele; Kübler, Andrea; Halder, Sebastian

    2014-01-01

    Brain-computer interfaces (BCIs) can serve as muscle-independent communication aids. Persons who are unable to control their eye muscles (e.g., in the completely locked-in state) or who have severe visual impairments for other reasons need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli were suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on 2 consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. In the first session, healthy participants spelled with an average accuracy of 76% (3.29 bits/min) that increased to 90% (4.23 bits/min) on the second day. Spelling accuracy by the participant with ALS was 20% in the first and 47% in the second session. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in both the first and second sessions, accuracies for the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracies. The study demonstrated the feasibility of the auditory BCI with healthy users and stresses the importance of training with auditory multiclass BCIs, especially for potential end-users of BCIs affected by disease.
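    The bits/min figures quoted in this abstract are information transfer rates. A common way to compute such rates for a BCI speller is Wolpaw's formula, which combines the number of selectable classes, the selection accuracy, and the selection speed. The sketch below shows that calculation; the class count and selection time in the example are hypothetical and are not taken from the study.

```python
import math

def wolpaw_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw information transfer rate in bits per selection."""
    if accuracy >= 1.0:
        return math.log2(n_classes)
    if accuracy <= 0.0:
        return 0.0
    return (math.log2(n_classes)
            + accuracy * math.log2(accuracy)
            + (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1)))

def bits_per_minute(n_classes, accuracy, seconds_per_selection):
    return wolpaw_bits_per_selection(n_classes, accuracy) * 60.0 / seconds_per_selection

# Hypothetical example: a 25-class speller (5 x 5 letter matrix) operated at
# 76% selection accuracy with one selection per minute.
print(round(bits_per_minute(25, 0.76, 60.0), 2), "bits/min")
```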

  8. An auditory multiclass brain-computer interface with natural stimuli: usability evaluation with healthy participants and a motor impaired end user

    Directory of Open Access Journals (Sweden)

    Nadine Simon

    2015-01-01

    Full Text Available Brain-computer interfaces (BCIs) can serve as muscle-independent communication aids. Persons who are unable to control their eye muscles (e.g. in the completely locked-in state) or who have severe visual impairments for other reasons need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli were suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on two consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. In the first session, healthy participants spelled with an average accuracy of 76% (3.29 bits/min) that increased to 90% (4.23 bits/min) on the second day. Spelling accuracy by the participant with ALS was 20% in the first and 47% in the second session. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in both the first and second sessions, accuracies for the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracies. The study demonstrated the feasibility of the auditory BCI with healthy users and stresses the importance of training with auditory multiclass BCIs, especially for potential end-users of BCIs affected by disease.

  9. Comparison of auditory and visual oddball fMRI in schizophrenia.

    Science.gov (United States)

    Collier, Azurii K; Wolf, Daniel H; Valdez, Jeffrey N; Turetsky, Bruce I; Elliott, Mark A; Gur, Raquel E; Gur, Ruben C

    2014-09-01

    Individuals with schizophrenia often suffer from attentional deficits, both in focusing on task-relevant targets and in inhibiting responses to distractors. Schizophrenia also has a differential impact on attention depending on modality: auditory or visual. However, it remains unclear how abnormal activation of attentional circuitry differs between auditory and visual modalities, as these two modalities have not been directly compared in the same individuals with schizophrenia. We utilized event-related functional magnetic resonance imaging (fMRI) to compare patterns of brain activation during an auditory and visual oddball task in order to identify modality-specific attentional impairment. Healthy controls (n=22) and patients with schizophrenia (n=20) completed auditory and visual oddball tasks in separate sessions. For responses to targets, the auditory modality yielded greater activation than the visual modality (A-V) in auditory cortex, insula, and parietal operculum, but visual activation was greater than auditory (V-A) in visual cortex. For responses to novels, A-V differences were found in auditory cortex, insula, and supramarginal gyrus; and V-A differences in the visual cortex, inferior temporal gyrus, and superior parietal lobule. Group differences in modality-specific activation were found only for novel stimuli; controls showed larger A-V differences than patients in prefrontal cortex and the putamen. Furthermore, for patients, greater severity of negative symptoms was associated with greater divergence of A-V novel activation in the visual cortex. Our results demonstrate that patients have more pronounced activation abnormalities in auditory compared to visual attention, and link modality specific abnormalities to negative symptom severity. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Analysis of the influence of memory content of auditory stimuli on the memory content of EEG signal.

    Science.gov (United States)

    Namazi, Hamidreza; Khosrowabadi, Reza; Hussaini, Jamal; Habibi, Shaghayegh; Farid, Ali Akhavan; Kulish, Vladimir V

    2016-08-30

    One of the major challenges in brain research is to relate the structural features of an auditory stimulus to structural features of the Electroencephalogram (EEG) signal. Memory content is an important feature of the EEG signal and, accordingly, of the brain. Memory content can also be considered for the stimulus itself. Despite the body of work analysing the effects of stimuli on the human EEG and brain memory, no study has addressed the memory of the stimulus, or the relationship that may exist between the memory content of the stimulus and the memory content of the EEG signal. For this purpose we consider the Hurst exponent as the measure of memory. This study reveals the plasticity of human EEG signals in relation to auditory stimuli. For the first time we demonstrate that the memory content of an EEG signal shifts towards the memory content of the auditory stimulus used. The results of this analysis showed that an auditory stimulus with higher memory content causes a larger increment in the memory content of the EEG signal. To verify this result, we use approximate entropy as an indicator of time-series randomness. The capability observed in this research can be further investigated in relation to human memory.
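    The memory measure used in this abstract, the Hurst exponent, quantifies long-range dependence in a time series: values near 0.5 indicate uncorrelated (memoryless) fluctuations, while values approaching 1 indicate persistent, long-memory behaviour. The sketch below is a minimal rescaled-range (R/S) estimator applied to surrogate data; it is a generic illustration of the measure, not the preprocessing or estimation pipeline used in the study.

```python
import numpy as np

def hurst_rs(x, min_window=8, n_scales=12):
    """Estimate the Hurst exponent with simple rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.floor(np.logspace(np.log10(min_window),
                                           np.log10(n // 4), n_scales)).astype(int))
    log_rs, log_w = [], []
    for w in sizes:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # cumulative mean-adjusted series
            r = dev.max() - dev.min()           # range of the cumulative deviation
            s = seg.std(ddof=1)                 # standard deviation of the segment
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_rs.append(np.log(np.mean(rs_vals)))
            log_w.append(np.log(w))
    # Hurst exponent = slope of log(R/S) versus log(window size)
    return np.polyfit(log_w, log_rs, 1)[0]

rng = np.random.default_rng(1)
print("H(white noise) ~ %.2f" % hurst_rs(rng.normal(size=4096)))            # ~0.5
print("H(random walk) ~ %.2f" % hurst_rs(np.cumsum(rng.normal(size=4096))))  # ~1
```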

  11. Brain-Computer Interface application: auditory serial interface to control a two-class motor-imagery-based wheelchair.

    Science.gov (United States)

    Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier

    2017-05-30

    Certain diseases affect brain areas that control the movements of the patients' bodies, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or a mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to low control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a mean accuracy level above 0.83 in the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.

  12. Same Genes, Different Brains: Neuroanatomical Differences Between Monozygotic Twins Discordant for Musical Training.

    Science.gov (United States)

    de Manzano, Örjan; Ullén, Fredrik

    2018-01-01

    Numerous cross-sectional and observational longitudinal studies show associations between expertise and regional brain anatomy. However, since these designs confound training with genetic predisposition, the causal role of training remains unclear. Here, we use a discordant monozygotic (identical) twin design to study expertise-dependent effects on neuroanatomy using musical training as model behavior, while essentially controlling for genetic factors and shared environment of upbringing. From a larger cohort of monozygotic twins, we were able to recruit 18 individuals (9 pairs) who were highly discordant for piano practice. We used structural and diffusion magnetic resonance imaging to analyze the auditory-motor network and within-pair differences in cortical thickness, cerebellar regional volumes and white-matter microstructure/fractional anisotropy. The analyses revealed that the musically active twins had greater cortical thickness in the auditory-motor network of the left hemisphere and more developed white matter microstructure in relevant tracts in both hemispheres and the corpus callosum. Furthermore, the volume of gray matter in the left cerebellar region of interest comprising lobules I-IV + V was greater in the playing group. These findings provide the first clear support for the idea that a significant portion of the differences in brain anatomy between experts and nonexperts depends on causal effects of training. © The Author 2017. Published by Oxford University Press.

  13. Mapping the brain's orchestration during speech comprehension: task-specific facilitation of regional synchrony in neural networks

    Directory of Open Access Journals (Sweden)

    Keil Andreas

    2004-10-01

    Full Text Available Abstract Background How does the brain convert sounds and phonemes into comprehensible speech? In the present magnetoencephalographic study we examined the hypothesis that the coherence of electromagnetic oscillatory activity within and across brain areas indicates neurophysiological processes linked to speech comprehension. Results Amplitude-modulated (sinusoidal, 41.5 Hz) auditory verbal and nonverbal stimuli served to drive steady-state oscillations in neural networks involved in speech comprehension. Stimuli were presented to 12 subjects in the following conditions: (a) an incomprehensible string of words, (b) the same string of words after being introduced as a comprehensible sentence by proper articulation, and (c) nonverbal stimulations that included a 600-Hz tone, a scale, and a melody. Coherence, defined as correlated activation of magnetic steady-state fields across brain areas and measured as simultaneous activation of current dipoles in source space (Minimum-Norm Estimates), increased within left temporal-posterior areas when the sound string was perceived as a comprehensible sentence. Intra-hemispheric coherence was larger within the left than the right hemisphere for the sentence (condition (b)) relative to all other conditions, and tended to be larger within the right than the left hemisphere for nonverbal stimuli (condition (c), tone and melody) relative to the other conditions, leading to a more pronounced hemispheric asymmetry for nonverbal than verbal material. Conclusions We conclude that coherent neuronal network activity may index encoding of verbal information on the sentence level and can be used as a tool to investigate auditory speech comprehension.

  14. Altered Brain Functional Activity in Infants with Congenital Bilateral Severe Sensorineural Hearing Loss: A Resting-State Functional MRI Study under Sedation

    Directory of Open Access Journals (Sweden)

    Shuang Xia

    2017-01-01

    Full Text Available Early hearing deprivation can affect the development of auditory, language, and vision abilities, since insufficient or absent stimulation of the auditory cortex during the sensitive periods of plasticity may alter the development of hearing, language, and vision function. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age- and sex-matched normal-hearing subjects were recruited. The amplitude of low frequency fluctuations (ALFF) and regional homogeneity (ReHo) of the auditory, language, and vision related brain areas were compared between deaf infants and normal subjects. Compared with normal-hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex, and increased ALFF and ReHo were observed in vision-related cortex, suggesting that hearing and language function were impaired and vision function was enhanced due to the loss of hearing. ALFF of left Brodmann area 45 (BA45) was negatively correlated with deafness duration in infants with CSSHL, whereas ALFF of right BA39 was positively correlated with deafness duration. In conclusion, ALFF and ReHo can reflect abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and vision processing function is affected by congenital severe sensorineural hearing loss before 4 years of age.

  15. Central region morphometry in a child brain; Age and gender ...

    African Journals Online (AJOL)

    2013-10-10

    Background: Data on central region morphometry of a child brain is important not only in terms of ... brain volume reaches the peak at the age of 14.5 in men ... child and adolescent brain and effects of genetic variation.

  16. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    Directory of Open Access Journals (Sweden)

    Maria Herrojo Ruiz

    2014-09-01

    Full Text Available Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations are the first to demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN
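    Long-range temporal correlations of the kind discussed in this record are commonly quantified with detrended fluctuation analysis (DFA): an exponent near 0.5 indicates uncorrelated timing fluctuations, while values approaching 1 indicate long-range dependence. The sketch below is a generic first-order DFA on surrogate inter-onset deviations; the scale range and the surrogate data are assumptions for illustration and do not reflect the study's analysis settings.

```python
import numpy as np

def dfa_exponent(x, n_scales=12, min_scale=8):
    """First-order detrended fluctuation analysis (DFA) of a 1-D series."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                  # integrated (profile) series
    n = len(y)
    scales = np.unique(np.floor(np.logspace(np.log10(min_scale),
                                            np.log10(n // 4), n_scales)).astype(int))
    flucts = []
    for s in scales:
        n_seg = n // s
        segments = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # DFA exponent alpha = slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(2)
deviations = rng.normal(size=2048)               # surrogate, uncorrelated deviations
print("alpha ~ %.2f (white noise, ~0.5 expected)" % dfa_exponent(deviations))
```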

  17. Functional Connectivity of Multiple Brain Regions Required for the Consolidation of Social Recognition Memory.

    Science.gov (United States)

    Tanimizu, Toshiyuki; Kenney, Justin W; Okano, Emiko; Kadoma, Kazune; Frankland, Paul W; Kida, Satoshi

    2017-04-12

    Social recognition memory is an essential and basic component of social behavior that is used to discriminate familiar and novel animals/humans. Previous studies have shown the importance of several brain regions for social recognition memories; however, the mechanisms underlying the consolidation of social recognition memory at the molecular and anatomic levels remain unknown. Here, we show a brain network necessary for the generation of social recognition memory in mice. A mouse genetic study showed that cAMP-responsive element-binding protein (CREB)-mediated transcription is required for the formation of social recognition memory. Importantly, significant inductions of the CREB target immediate-early genes c-fos and Arc were observed in the hippocampus (CA1 and CA3 regions), medial prefrontal cortex (mPFC), anterior cingulate cortex (ACC), and amygdala (basolateral region) when social recognition memory was generated. Pharmacological experiments using a microinfusion of the protein synthesis inhibitor anisomycin showed that protein synthesis in these brain regions is required for the consolidation of social recognition memory. These findings suggested that social recognition memory is consolidated through the activation of CREB-mediated gene expression in the hippocampus/mPFC/ACC/amygdala. Network analyses suggested that these four brain regions show functional connectivity with other brain regions and, more importantly, that the hippocampus functions as a hub to integrate brain networks and generate social recognition memory, whereas the ACC and amygdala are important for coordinating brain activity when social interaction is initiated by connecting with other brain regions. We have found that a brain network composed of the hippocampus/mPFC/ACC/amygdala is required for the consolidation of social recognition memory. SIGNIFICANCE STATEMENT Here, we identify brain networks composed of multiple brain regions for the consolidation of social recognition memory. We

  18. Auditory and cognitive performance in elderly musicians and nonmusicians.

    Directory of Open Access Journals (Sweden)

    Massimo Grassi

    Full Text Available Musicians represent a model for examining brain and behavioral plasticity in terms of cognitive and auditory profile, but few studies have investigated whether elderly musicians have better auditory and cognitive abilities than nonmusicians. The aim of the present study was to examine whether being a professional musician attenuates the normal age-related changes in hearing and cognition. Elderly musicians still active in their profession were compared with nonmusicians on auditory performance (absolute threshold; frequency, intensity, duration and spectral-shape discrimination; gap and sinusoidal amplitude-modulation detection) and on simple (short-term memory) and more complex, higher-order (working memory [WM] and visuospatial abilities) cognitive tasks. The sample consisted of adults at least 65 years of age. The results showed that older musicians had similar absolute thresholds but better supra-threshold discrimination abilities than nonmusicians in four of the six auditory tasks administered. They also had better WM performance and stronger visuospatial abilities than nonmusicians. No differences were found between the two groups' short-term memory. Frequency discrimination and gap detection among the auditory measures, and WM complex span tasks and one of the visuospatial tasks among the cognitive ones, proved to be very good classifiers of the musicians. These findings suggest that life-long music training may be associated with enhanced auditory and cognitive performance, including complex cognitive skills, in advanced age. However, whether this music training represents a protective factor or not needs further investigation.

  19. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  20. Preattentive extraction of abstract feature conjunctions from auditory stimulation as reflected by the mismatch negativity (MMN).

    Science.gov (United States)

    Paavilainen, P; Simola, J; Jaramillo, M; Näätänen, R; Winkler, I

    2001-03-01

    Brain mechanisms extracting invariant information from varying auditory inputs were studied using the mismatch-negativity (MMN) brain response. We wished to determine whether the preattentive sound-analysis mechanisms, reflected by MMN, are capable of extracting invariant relationships based on abstract conjunctions between two sound features. The standard stimuli varied over a large range in frequency and intensity dimensions following the rule that the higher the frequency, the louder the intensity. The occasional deviant stimuli violated this frequency-intensity relationship and elicited an MMN. The results demonstrate that preattentive processing of auditory stimuli extends to unexpectedly complex relationships between the stimulus features.
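    The standard/deviant logic described above, in which the invariant is an abstract relation between two features rather than a fixed value, can be illustrated with a small stimulus-generation sketch: standards draw intensity from a mapping in which louder levels accompany higher frequencies, and rare deviants invert that mapping. The frequency range, level range, and deviant probability below are arbitrary illustrative choices, not the study's stimulus parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_sequence(n_tones=200, p_deviant=0.1,
                  f_range=(500.0, 2000.0), db_range=(60.0, 80.0)):
    """Generate (frequency, level, label) triples following the rule
    'the higher the frequency, the louder the intensity' for standards;
    deviants violate the rule by using the inverted frequency-level mapping."""
    tones = []
    for _ in range(n_tones):
        f = rng.uniform(*f_range)
        rel = (f - f_range[0]) / (f_range[1] - f_range[0])            # 0..1 position
        if rng.random() < p_deviant:
            level = db_range[1] - rel * (db_range[1] - db_range[0])   # rule violated
            label = "deviant"
        else:
            level = db_range[0] + rel * (db_range[1] - db_range[0])   # rule followed
            label = "standard"
        tones.append((round(f, 1), round(level, 1), label))
    return tones

for f, level, label in make_sequence(8):
    print("%7.1f Hz  %5.1f dB  %s" % (f, level, label))
```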

  1. Mapping of functional activity in brain with 18F-fluoro-deoxyglucose

    International Nuclear Information System (INIS)

    Alavi, A.; Reivich, M.; Greenberg, J.

    1981-01-01

    The efficacy of using 18F-fluoro-deoxyglucose (18F-DG) for measuring regional cerebral glucose utilization in man during functional activation is demonstrated. Normal male volunteers subjected to sensory stimuli (visual, auditory, tactile) exhibited focal increases in glucose metabolism in response to the stimulus. Unilateral visual hemifield stimulation caused the contralateral striate cortex to become more active metabolically than the striate cortex ipsilateral to the stimulated hemifield. Similarly, stroking of the fingers and hand of one arm with a brush produced an increase in metabolism in the contralateral postcentral gyrus compared to the homologous ipsilateral region. The auditory stimulus, which consisted of monaural listening to either a meaningful or nonmeaningful story, caused an increase in glucose metabolism in the right temporal cortex independent of which ear was stimulated. These results demonstrate that the 18F-DG technique is capable of providing functional maps in vivo in the human brain.

  2. Pitch-Responsive Cortical Regions in Congenital Amusia.

    Science.gov (United States)

    Norman-Haignere, Sam V; Albouy, Philippe; Caclin, Anne; McDermott, Josh H; Kanwisher, Nancy G; Tillmann, Barbara

    2016-03-09

    Congenital amusia is a lifelong deficit in music perception thought to reflect an underlying impairment in the perception and memory of pitch. The neural basis of amusic impairments is actively debated. Some prior studies have suggested that amusia stems from impaired connectivity between auditory and frontal cortex. However, it remains possible that impairments in pitch coding within auditory cortex also contribute to the disorder, in part because prior studies have not measured responses from the cortical regions most implicated in pitch perception in normal individuals. We addressed this question by measuring fMRI responses in 11 subjects with amusia and 11 age- and education-matched controls to a stimulus contrast that reliably identifies pitch-responsive regions in normal individuals: harmonic tones versus frequency-matched noise. Our findings demonstrate that amusic individuals with a substantial pitch perception deficit exhibit clusters of pitch-responsive voxels that are comparable in extent, selectivity, and anatomical location to those of control participants. We discuss possible explanations for why amusics might be impaired at perceiving pitch relations despite exhibiting normal fMRI responses to pitch in their auditory cortex: (1) individual neurons within the pitch-responsive region might exhibit abnormal tuning or temporal coding not detectable with fMRI, (2) anatomical tracts that link pitch-responsive regions to other brain areas (e.g., frontal cortex) might be altered, and (3) cortical regions outside of pitch-responsive cortex might be abnormal. The ability to identify pitch-responsive regions in individual amusic subjects will make it possible to ask more precise questions about their role in amusia in future work. Copyright © 2016 the authors 0270-6474/16/362986-09$15.00/0.

  3. Auditory Reserve and the Legacy of Auditory Experience

    Directory of Open Access Journals (Sweden)

    Erika Skoe

    2014-11-01

    Full Text Available Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function.

  4. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    Science.gov (United States)

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. Rapid effects of hearing song on catecholaminergic activity in the songbird auditory pathway.

    Directory of Open Access Journals (Sweden)

    Lisa L Matragrano

    Full Text Available Catecholaminergic (CA) neurons innervate sensory areas and affect the processing of sensory signals. For example, in birds, CA fibers innervate the auditory pathway at each level, including the midbrain, thalamus, and forebrain. We have shown previously that in female European starlings, CA activity in the auditory forebrain can be enhanced by exposure to attractive male song for one week. It is not known, however, whether hearing song can initiate that activity more rapidly. Here, we exposed estrogen-primed, female white-throated sparrows to conspecific male song and looked for evidence of rapid synthesis of catecholamines in auditory areas. In one hemisphere of the brain, we used immunohistochemistry to detect the phosphorylation of tyrosine hydroxylase (TH), a rate-limiting enzyme in the CA synthetic pathway. We found that immunoreactivity for TH phosphorylated at serine 40 increased dramatically in the auditory forebrain, but not the auditory thalamus and midbrain, after 15 min of song exposure. In the other hemisphere, we used high-pressure liquid chromatography to measure catecholamines and their metabolites. We found that two dopamine metabolites, dihydroxyphenylacetic acid and homovanillic acid, increased in the auditory forebrain but not the auditory midbrain after 30 min of exposure to conspecific song. Our results are consistent with the hypothesis that exposure to a behaviorally relevant auditory stimulus rapidly induces CA activity, which may play a role in auditory responses.

  6. A Multimodal Approach for Determining Brain Networks by Jointly Modeling Functional and Structural Connectivity

    Directory of Open Access Journals (Sweden)

    Wenqiong eXue

    2015-02-01

    Full Text Available Recent innovations in neuroimaging technology have provided opportunities for researchers to investigate connectivity in the human brain by examining the anatomical circuitry as well as functional relationships between brain regions. Existing statistical approaches for connectivity generally examine resting-state or task-related functional connectivity (FC) between brain regions or separately examine structural linkages. As a means to determine brain networks, we present a unified Bayesian framework for analyzing FC utilizing the knowledge of associated structural connections, which extends an approach by Patel et al. (2006a) that considers only functional data. We introduce an FC measure that rests upon assessments of functional coherence between regional brain activity identified from functional magnetic resonance imaging (fMRI) data. Our structural connectivity (SC) information is drawn from diffusion tensor imaging (DTI) data, which is used to quantify probabilities of SC between brain regions. We formulate a prior distribution for FC that depends upon the probability of SC between brain regions, with this dependence adhering to structural-functional links revealed by our fMRI and DTI data. We further characterize the functional hierarchy of functionally connected brain regions by defining an ascendancy measure that compares the marginal probabilities of elevated activity between regions. In addition, we describe topological properties of the network, which is composed of connected region pairs, by performing graph theoretic analyses. We demonstrate the use of our Bayesian model using fMRI and DTI data from a study of auditory processing. We further illustrate the advantages of our method by comparisons to methods that only incorporate functional information.
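    As a rough illustration of the modeling idea above (not the authors' code), the sketch below lets the prior probability that two regions are functionally connected increase with the DTI-derived probability of a structural connection, and combines that prior with a crude functional "likelihood" based on the observed fMRI correlation. The function names, the linear prior, and the logistic likelihood are assumptions made only for this example.

```python
# Hypothetical sketch of an SC-informed prior on functional connectivity (FC).
# The linear prior and logistic likelihood are illustrative assumptions.
import numpy as np

def fc_prior_from_sc(sc_prob, base=0.2, gain=0.6):
    """Prior probability of an FC edge, increasing with the DTI-based
    probability of a structural connection between the two regions."""
    return base + gain * sc_prob

def posterior_fc(correlation, sc_prob, threshold=0.3):
    """Toy posterior probability of an FC edge, combining the SC-informed
    prior with a crude 'likelihood' built from the fMRI correlation."""
    prior = fc_prior_from_sc(sc_prob)
    like_conn = 1.0 / (1.0 + np.exp(-10.0 * (np.abs(correlation) - threshold)))
    like_null = 1.0 - like_conn
    return prior * like_conn / (prior * like_conn + (1.0 - prior) * like_null)

# Same fMRI correlation, different structural evidence:
print(posterior_fc(0.35, sc_prob=0.9))   # structurally supported pair
print(posterior_fc(0.35, sc_prob=0.1))   # structurally unsupported pair
```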

  7. Behavioral lifetime of human auditory sensory memory predicted by physiological measures.

    Science.gov (United States)

    Lu, Z L; Williamson, S J; Kaufman, L

    1992-12-04

    Noninvasive magnetoencephalography makes it possible to identify the cortical area in the human brain whose activity reflects the decay of passive sensory storage of information about auditory stimuli (echoic memory). The lifetime for decay of the neuronal activation trace in primary auditory cortex was found to predict the psychophysically determined duration of memory for the loudness of a tone. Although memory for the loudness of a specific tone is lost, the remembered loudness decays toward the global mean of all of the loudnesses to which a subject is exposed in a series of trials.
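    A minimal way to picture the decay described above is an exponential drift of the remembered loudness toward the global mean of recently heard loudnesses. The exponential form and the lifetime used in this sketch are assumptions chosen for illustration, not parameters reported in the study.

```python
# Toy model (assumed exponential form) of remembered loudness decaying
# toward the global mean; tau and the loudness values are made up.
import numpy as np

def remembered_loudness(initial, global_mean, t, tau=1.5):
    """Memory trace at time t (seconds): drifts from the tone's loudness
    toward the running mean of all presented loudnesses with lifetime tau."""
    return global_mean + (initial - global_mean) * np.exp(-t / tau)

for t in (0.0, 1.0, 3.0, 10.0):
    print(f"t = {t:4.1f} s -> remembered level {remembered_loudness(80.0, 70.0, t):.1f} dB")
```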

  8. Modeling the Developmental Patterns of Auditory Evoked Magnetic Fields in Children

    OpenAIRE

    Kotecha, Rupesh; Pardos, Maria; Wang, Yingying; Wu, Ting; Horn, Paul; Brown, David; Rose, Douglas; deGrauw, Ton; Xiang, Jing

    2009-01-01

    BACKGROUND: As magnetoencephalography (MEG) is of increasing utility in the assessment of deficits and developmental delays in pediatric brain disorders, it becomes imperative to fully understand the functional development of the brain in children. METHODOLOGY: The present study was designed to characterize the developmental patterns of auditory evoked magnetic responses with respect to age and gender. Sixty children and twenty adults were studied with a 275-channel MEG system. CONCLUSIONS:...

  9. Brain functional connectivity during the experience of thought blocks in schizophrenic patients with persistent auditory verbal hallucinations: an EEG study.

    Science.gov (United States)

    Angelopoulos, Elias; Koutsoukos, Elias; Maillis, Antonis; Papadimitriou, George N; Stefanis, Costas

    2014-03-01

    Thought blocks (TBs) are characterized by regular interruptions in the stream of thought. Outward signs are abrupt and repeated interruptions in the flow of conversation or actions, while the subjective experience is that of a total and uncontrollable emptying of the mind. In the very limited literature regarding TB, the phenomenon is conceptualized as a disturbance of consciousness that can be attributed to stoppages of continuous information processing due to an increase in the volume of information to be processed. In an attempt to investigate potential expressions of the phenomenon in the functional properties of electroencephalographic (EEG) activity, an EEG study was conducted in schizophrenic patients with persisting auditory verbal hallucinations (AVHs) who additionally exhibited TBs. In this case, we hypothesized that the persistent and dense AVHs could serve the role of an increased information flow that the brain is unable to process, a condition that is perceived by the person as TB. Phase synchronization analyses performed on EEG segments during the experience of TBs showed that synchrony values exhibited a long-range common mode of coupling (grouped behavior) among the left temporal area and the remaining central and frontal brain areas. These common synchrony-fluctuation schemes were observed for 0.5 to 2 s and were detected in a 4-s window following the estimated initiation of the phenomenon. The observation was frequency specific and detected in the broad alpha band region (6-12 Hz). The introduction of synchrony entropy (SE) analysis applied to the cumulative synchrony distribution showed that TB states were characterized by an explicit preference of the system to function at low values of synchrony, while synchrony values were broadly distributed during the recovery state. Our results indicate that during TB states, the phase locking of several brain areas converged uniformly in a narrow band of low synchrony values and in a
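    The two measures used above can be sketched as follows: a phase-locking value between two channels in the broad alpha band, and a Shannon entropy computed over the distribution of synchrony values. Band edges, filter order, and bin count in this sketch are generic choices, not the study's settings.

```python
# Hedged sketch of pairwise phase synchrony (phase-locking value) and an
# entropy of the synchrony distribution; parameters are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(6.0, 12.0)):
    """Phase-locking value between two EEG channels in the given band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

def synchrony_entropy(plv_values, bins=20):
    """Shannon entropy of the distribution of synchrony values."""
    hist, _ = np.histogram(plv_values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

fs = 250
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)
print("PLV:", round(plv(x, y, fs), 2))
print("entropy of a synthetic synchrony distribution:",
      round(synchrony_entropy(np.random.beta(2, 5, 500)), 2))
```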

  10. Opposite patterns of hemisphere dominance for early auditory processing of lexical tones and consonants

    OpenAIRE

    Luo, Hao; Ni, Jing-Tian; Li, Zhi-Hao; Li, Xiao-Ou; Zhang, Da-Ren; Zeng, Fan-Gang; Chen, Lin

    2006-01-01

    In tonal languages such as Mandarin Chinese, a lexical tone carries semantic information and is preferentially processed in the left brain hemisphere of native speakers, as revealed by functional MRI or positron emission tomography studies, which likely measure temporally aggregated neural events, including those at an attentive stage of auditory processing. Here, we demonstrate that early auditory processing of a lexical tone at a preattentive stage is actually ...

  11. Acute treatment with fluvoxamine elevates rat brain serotonin synthesis in some terminal regions: An autoradiographic study

    International Nuclear Information System (INIS)

    Muck-Seler, Dorotea; Pivac, Nela; Diksic, Mirko

    2012-01-01

    Introduction: A considerable body of evidence indicates the involvement of the neurotransmitter serotonin (5-HT) in the pathogenesis and treatment of depression. Methods: The acute effect of fluvoxamine on 5-HT synthesis rates was investigated in rat brain regions, using α-[14C]methyl-L-tryptophan as a tracer. Fluvoxamine (25 mg/kg) and saline (control) were injected intraperitoneally, one hour before the injection of the tracer (30 μCi). Results: There was no significant effect of fluvoxamine on plasma free tryptophan. After Benjamini–Hochberg False Discovery Rate correction, a significant decrease in the 5-HT synthesis rate in the fluvoxamine-treated rats was found in the raphe magnus (−32%), but not in the median (−14%) and dorsal (−3%) raphe nuclei. In the regions with serotonergic axon terminals, significant increases in synthesis rates were observed in the dorsal (+41%) and ventral (+43%) hippocampus, visual (+38%), auditory (+65%) and parietal (+37%) cortex, and the substantia nigra pars compacta (+56%). There were no significant changes in the 5-HT synthesis rates in the median (+11%) and lateral (+24%) part of the caudate-putamen, nucleus accumbens (+5%), VTA (+16%) or frontal cortex (+6%). Conclusions: The data show that the acute administration of fluvoxamine affects 5-HT synthesis rates in a regionally specific pattern, with a general elevation of the synthesis in the terminal regions and a reduction in some cell body structures. The reasons for the region-specific effect of fluvoxamine on 5-HT synthesis are unclear, but may be mediated by the presynaptic serotonergic autoreceptors.

  12. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Formisano, E.; Pepino, A.; Bracale, M.; Di Salle, F.; Lanfermann, H.; Zanella, F.E.

    1998-01-01

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors)

  13. Coding space-time stimulus dynamics in auditory brain maps

    Directory of Open Access Journals (Sweden)

    Yunyan eWang

    2014-04-01

    Full Text Available Sensory maps are often distorted representations of the environment, where ethologically-important ranges are magnified. The implication of a biased representation extends beyond increased acuity for having more neurons dedicated to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of high-order features that rely on comparison across areas of the map. Among these features are time-dependent changes of the auditory scene generated by moving objects. How sensory representation affects high order processing can be approached in the map of auditory space of the owl’s midbrain, where locations in the front are over-represented. In this map, neurons are selective not only to location but also to location over time. The tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving into the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely observed in other sensory maps and may be relevant for behavior.

  14. A basic study on universal design of auditory signals in automobiles.

    Science.gov (United States)

    Yamauchi, Katsuya; Choi, Jong-dae; Maiguma, Ryo; Takada, Masayuki; Iwamiya, Shin-ichiro

    2004-11-01

    In this paper, the impressions of various kinds of auditory signals currently used in automobiles, together with a comprehensive evaluation, were measured by a semantic differential method. The desirable acoustic characteristics were examined for each type of auditory signal. Sharp sounds with dominant high-frequency components were not suitable for auditory signals in automobiles. This tendency also favors older listeners, whose auditory sensitivity in the high-frequency region is lower. When intermittent sounds were used, a longer OFF time was suitable. Generally, "dull (not sharp)" and "calm" sounds were appropriate for auditory signals. Furthermore, a comparison between the frequency spectrum of interior noise in automobiles and that of the sounds judged suitable for various auditory signals indicates that the suitable sounds are not easily masked. Selecting suitable auditory signals for each purpose is thus a good solution from the viewpoint of universal design.

  15. Auditory, Tactile, and Audiotactile Information Processing Following Visual Deprivation

    Science.gov (United States)

    Occelli, Valeria; Spence, Charles; Zampini, Massimiliano

    2013-01-01

    We highlight the results of those studies that have investigated the plastic reorganization processes that occur within the human brain as a consequence of visual deprivation, as well as how these processes give rise to behaviorally observable changes in the perceptual processing of auditory and tactile information. We review the evidence showing…

  16. Is the auditory evoked P2 response a biomarker of learning?

    Directory of Open Access Journals (Sweden)

    Kelly eTremblay

    2014-02-01

    Full Text Available Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography and magnetoencephalography have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEPs), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared them to participants, matched in time and stimulus exposure, who did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre-voicing contrast.

  17. The auditory scene: an fMRI study on melody and accompaniment in professional pianists.

    Science.gov (United States)

    Spada, Danilo; Verga, Laura; Iadanza, Antonella; Tettamanti, Marco; Perani, Daniela

    2014-11-15

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Partially Overlapping Brain Networks for Singing and Cello Playing

    Directory of Open Access Journals (Sweden)

    Melanie Segado

    2018-05-01

    Full Text Available This research uses an MR-compatible cello to compare functional brain activation during singing and cello playing within the same individuals to determine the extent to which arbitrary auditory-motor associations, like those required to play the cello, co-opt functional brain networks that evolved for singing. Musical instrument playing and singing both require highly specific associations between sounds and movements. Because these are both used to produce musical sounds, it is often assumed in the literature that their neural underpinnings are highly similar. However, singing is an evolutionarily old human trait, and the auditory-motor associations used for singing are also used for speech and non-speech vocalizations. This sets it apart from the arbitrary auditory-motor associations required to play musical instruments. The pitch range of the cello is similar to that of the human voice, but cello playing is completely independent of the vocal apparatus, and can therefore be used to dissociate the auditory-vocal network from that of the auditory-motor network. While in the MR scanner, 11 expert cellists listened to and subsequently produced individual tones either by singing or cello playing. All participants were able to sing and play the target tones in tune (<50 cents deviation from target). We found that brain activity during cello playing directly overlaps with brain activity during singing in many areas within the auditory-vocal network. These include primary motor, dorsal pre-motor, and supplementary motor cortices (M1, dPMC, SMA), the primary and periprimary auditory cortices within the superior temporal gyrus (STG) including Heschl's gyrus, anterior insula (aINS), anterior cingulate cortex (ACC), and intraparietal sulcus (IPS), and the cerebellum but, notably, exclude the periaqueductal gray (PAG) and basal ganglia (putamen). Second, we found that activity within the overlapping areas is positively correlated with, and therefore likely contributing to

  19. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Full Text Available Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data was obtained from 44 typically developing (TD) children (mean age 10.4±2.4 years) and 95 children with ASD (mean age 10.2±2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  20. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria eMedalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  1. How do auditory cortex neurons represent communication sounds?

    Science.gov (United States)

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Abnormal synchrony and effective connectivity in patients with schizophrenia and auditory hallucinations

    Science.gov (United States)

    de la Iglesia-Vaya, Maria; Escartí, Maria José; Molina-Mateo, Jose; Martí-Bonmatí, Luis; Gadea, Marien; Castellanos, Francisco Xavier; Aguilar García-Iturrospe, Eduardo J.; Robles, Montserrat; Biswal, Bharat B.; Sanjuan, Julio

    2014-01-01

    Auditory hallucinations (AH) are the most frequent positive symptoms in patients with schizophrenia. Hallucinations have been related to emotional processing disturbances, altered functional connectivity and effective connectivity deficits. Previously, we observed that, compared to healthy controls, the limbic network responses of patients with auditory hallucinations differed when the subjects were listening to emotionally charged words. We aimed to compare the synchrony patterns and effective connectivity of task-related networks between schizophrenia patients with and without AH and healthy controls. Schizophrenia patients with AH (n = 27) and without AH (n = 14) were compared with healthy participants (n = 31). We examined functional connectivity by analyzing correlations and cross-correlations among previously detected independent component analysis time courses. Granger causality was used to infer the information flow direction in the brain regions. The results demonstrate that the patterns of cortico-cortical functional synchrony differentiated the patients with AH from the patients without AH and from the healthy participants. Additionally, Granger-causal relationships between the networks clearly differentiated the groups. In the patients with AH, the principal causal source was an occipital–cerebellar component, versus a temporal component in the patients without AH and the healthy controls. These data indicate that an anomalous process of neural connectivity exists when patients with AH process emotional auditory stimuli. Additionally, a central role is suggested for the cerebellum in processing emotional stimuli in patients with persistent AH. PMID:25379429
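    For readers unfamiliar with the Granger-causality step mentioned above, the sketch below runs a standard pairwise test on synthetic time courses in which one signal drives the other. It uses the grangercausalitytests routine from statsmodels and is only an illustration of the method, not the study's analysis of ICA time courses.

```python
# Minimal Granger-causality sketch on synthetic data (x drives y).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()   # y depends on past x

# grangercausalitytests checks whether the 2nd column Granger-causes the 1st
data = np.column_stack([y, x])
result = grangercausalitytests(data, maxlag=2, verbose=False)
f_stat, p_value, _, _ = result[1][0]["ssr_ftest"]
print(f"lag-1 F = {f_stat:.1f}, p = {p_value:.4f}")
```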

  3. Theta-alpha EEG phase distributions in the frontal area for dissociation of visual and auditory working memory.

    Science.gov (United States)

    Akiyama, Masakazu; Tero, Atsushi; Kawasaki, Masahiro; Nishiura, Yasumasa; Yamaguchi, Yoko

    2017-03-07

    Working memory (WM) is known to be associated with synchronization of the theta and alpha bands observed in electroencephalograms (EEGs). Although frontal-posterior global theta synchronization appears in modality-specific WM, local theta synchronization in frontal regions has been found in modality-independent WM. How frontal theta oscillations separately synchronize with task-relevant sensory brain areas remains an open question. Here, we focused on theta-alpha phase relationships in frontal areas using EEG, and then verified their functional roles with mathematical models. EEG data showed that the relationship between theta (6 Hz) and alpha (12 Hz) phases in the frontal areas was about 1:2 during both auditory and visual WM, and that the phase distributions differed between auditory and visual WM. Next, we used the differences in phase distributions to construct FitzHugh-Nagumo type mathematical models. The results replicated the modality-specific branching through the orthogonality of the trigonometric functions for theta and alpha oscillations. Furthermore, mathematical and experimental results were consistent with regard to the phase relationships and amplitudes observed in frontal and sensory areas. These results indicate the important role that different phase distributions of theta and alpha oscillations have in modality-specific dissociation in the brain.
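    The 1:2 theta-alpha phase relationship reported above can be quantified with an n:m phase-locking index. The sketch below computes that index for a synthetic signal whose alpha component runs at exactly twice the theta frequency; the filter bands and the index itself are generic choices rather than the authors' exact pipeline.

```python
# Hedged sketch of a 1:2 (theta:alpha) cross-frequency phase-locking index.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(x, fs, lo, hi):
    """Instantaneous phase of x band-passed between lo and hi Hz."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

def theta_alpha_locking(x, fs, n=1, m=2):
    """n:m phase locking between theta (~6 Hz) and alpha (~12 Hz) phases."""
    theta = band_phase(x, fs, 4, 8)
    alpha = band_phase(x, fs, 10, 14)
    return np.abs(np.mean(np.exp(1j * (m * theta - n * alpha))))

fs = 500
t = np.arange(0, 10, 1 / fs)
signal = (np.sin(2 * np.pi * 6 * t)            # theta at 6 Hz
          + 0.8 * np.sin(2 * np.pi * 12 * t)   # alpha locked at twice the frequency
          + 0.3 * np.random.randn(t.size))
print("1:2 locking index:", round(theta_alpha_locking(signal, fs), 2))
```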

  4. From Vivaldi to Beatles and back: predicting lateralized brain responses to music.

    Science.gov (United States)

    Alluri, Vinoo; Toiviainen, Petri; Lund, Torben E; Wallentin, Mikkel; Vuust, Peter; Nandi, Asoke K; Ristaniemi, Tapani; Brattico, Elvira

    2013-12-01

    We aimed at predicting the temporal evolution of brain activity in naturalistic music listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional Magnetic Resonance Imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations which were then tested in a cross-validation setting in order to evaluate the robustness of the hence created models across stimuli. To further assess the generalizability of the models we extended the cross-validation procedure by including another dataset, which comprised continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several areas in the brain belonging to the auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, relevant for self-referential appraisal and aesthetic judgments, could be predicted successfully. Cross-validation across musical stimuli and participant pools helped identify a region of the right superior temporal gyrus, encompassing the planum polare and the Heschl's gyrus, as the core structure that processed complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices, parietal, somatosensory, and left hemispheric primary and supplementary motor areas. The presence of lyrics on the other hand weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supportive evidence for the hemispheric specialization for categorical sounds with realistic stimuli. We herewith introduce
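    The modelling approach described above amounts to regressing each voxel's time course on stimulus-derived acoustic features and then testing the fitted model on a held-out musical stimulus. The sketch below illustrates that logic with ridge regression on synthetic data; the feature count, weights, and noise level are assumptions made only for the example.

```python
# Illustrative encoding-model sketch: predict one voxel's activity from
# acoustic features of one stimulus and evaluate on a second stimulus.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Acoustic features per fMRI volume (e.g., brightness, pulse clarity, energy)
X_train = rng.normal(size=(300, 3))    # medley used for model building
X_test = rng.normal(size=(200, 3))     # held-out stimulus for cross-validation

true_w = np.array([0.8, -0.3, 0.5])    # synthetic "voxel" weights
y_train = X_train @ true_w + 0.5 * rng.normal(size=300)
y_test = X_test @ true_w + 0.5 * rng.normal(size=200)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", round(r2_score(y_test, model.predict(X_test)), 3))
```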

  5. Auditory-somatosensory bimodal stimulation desynchronizes brain circuitry to reduce tinnitus in guinea pigs and humans.

    Science.gov (United States)

    Marks, Kendra L; Martel, David T; Wu, Calvin; Basura, Gregory J; Roberts, Larry E; Schvartz-Leyzac, Kara C; Shore, Susan E

    2018-01-03

    The dorsal cochlear nucleus is the first site of multisensory convergence in mammalian auditory pathways. Principal output neurons, the fusiform cells, integrate auditory nerve inputs from the cochlea with somatosensory inputs from the head and neck. In previous work, we developed a guinea pig model of tinnitus induced by noise exposure and showed that the fusiform cells in these animals exhibited increased spontaneous activity and cross-unit synchrony, which are physiological correlates of tinnitus. We delivered repeated bimodal auditory-somatosensory stimulation to the dorsal cochlear nucleus of guinea pigs with tinnitus, choosing a stimulus interval known to induce long-term depression (LTD). Twenty minutes per day of LTD-inducing bimodal (but not unimodal) stimulation reduced physiological and behavioral evidence of tinnitus in the guinea pigs after 25 days. Next, we applied the same bimodal treatment to 20 human subjects with tinnitus using a double-blinded, sham-controlled, crossover study. Twenty-eight days of LTD-inducing bimodal stimulation reduced tinnitus loudness and intrusiveness. Unimodal auditory stimulation did not deliver either benefit. Bimodal auditory-somatosensory stimulation that induces LTD in the dorsal cochlear nucleus may hold promise for suppressing chronic tinnitus, which reduces quality of life for millions of tinnitus sufferers worldwide. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  6. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    Science.gov (United States)

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  7. Carnosine: effect on aging-induced increase in brain regional monoamine oxidase-A activity.

    Science.gov (United States)

    Banerjee, Soumyabrata; Poddar, Mrinal K

    2015-03-01

    Aging is a natural biological process associated with several neurological disorders along with biochemical changes in the brain. The aim of the present investigation was to study the effect of carnosine (0.5-2.5 μg/kg/day, i.t. for 21 consecutive days) on aging-induced changes in brain regional (cerebral cortex, hippocampus, hypothalamus and pons-medulla) mitochondrial monoamine oxidase-A (MAO-A) activity and its kinetic parameters. The results of the present study are: (1) brain regional mitochondrial MAO-A activity and its kinetic parameters (except the Km of pons-medulla) increased significantly with age (4-24 months); (2) the aging-induced increase of brain regional MAO-A activity, including its Vmax, was attenuated by the higher dosages of carnosine (1.0-2.5 μg/kg/day) and restored toward the activity observed in young rats, whereas the lower dosage (0.5 μg/kg/day) was ineffective; (3) in young rats, unlike aged rats, carnosine at the higher dosages significantly inhibited MAO-A activity in all brain regions by reducing only Vmax, except in the cerebral cortex, where Km was also significantly increased. These results suggest that carnosine attenuated the aging-induced increase of brain regional MAO-A activity by attenuating its kinetic parameters and restored it toward the MAO-A activity observed in the corresponding brain regions of young rats. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
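    For context, the Km and Vmax values discussed above are the parameters of the Michaelis-Menten rate law; the short sketch below simply evaluates that law with invented numbers to show how a reduction in Vmax at constant Km lowers activity across substrate concentrations.

```python
# Background sketch of Michaelis-Menten kinetics; all numbers are invented.
import numpy as np

def mao_a_rate(substrate, vmax, km):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * substrate / (km + substrate)

s = np.array([0.5, 1.0, 2.0, 5.0])            # substrate levels (arbitrary units)
print(mao_a_rate(s, vmax=10.0, km=1.0))       # higher-Vmax profile (e.g., aged, assumed)
print(mao_a_rate(s, vmax=7.0, km=1.0))        # lower-Vmax profile (e.g., treated, assumed)
```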

  8. Antioxidant mediated response of Scoparia dulcis in noise-induced redox imbalance and immunohistochemical changes in rat brain.

    Science.gov (United States)

    Wankhar, Wankupar; Srinivasan, Sakthivel; Rajan, Ravindran; Sheeladevi, Rathinasamy

    2017-01-19

    Noise has been regarded as an environmental/occupational stressor that causes damage to both auditory and non-auditory organs. Prolonged exposure to such mediators of stress often has detrimental effects, in which oxidative/nitrosative stress plays a major role. Hence, it is appropriate to examine the possible role of free radicals in discrete brain regions and the antioxidant-mediated response of S. dulcis. Animals were subjected to noise stress for 15 days (100 dB/4 hours/day), and estimations of endogenous free radicals and antioxidant activity were carried out in discrete brain regions (the cerebral cortex, cerebellum, brainstem, striatum, hippocampus and hypothalamus). The results showed that exposure to noise elevated endogenous free radical generation and altered antioxidant status in discrete brain regions when compared with the control groups. This elevated free radical generation (H2O2 and NO) is well supported by upregulated protein expression of both iNOS and nNOS on immunohistochemistry of the cerebral cortex after exposure to noise stress. These findings suggest that increased free radical generation and altered antioxidative status can cause redox imbalance in discrete brain regions. However, the free radical scavenging activity of the plant was evident, as the noise-exposed group treated with S. dulcis [200 mg/(kg·bw)] displayed a therapeutic effect, decreasing free radical levels and restoring the antioxidative status to that of control animals. Hence, it can be concluded that the efficacy of S. dulcis can be attributed to its free radical scavenging activity and antioxidative property.

  9. Auditory Neural Prostheses – A Window to the Future

    Directory of Open Access Journals (Sweden)

    Mohan Kameshwaran

    2015-06-01

    Full Text Available Hearing loss is one of the commonest congenital anomalies to affect children world-over. The incidence of congenital hearing loss is more pronounced in developing countries like the Indian sub-continent, especially with the problems of consanguinity. Hearing loss is a double tragedy, as it leads to not only deafness but also language deprivation. However, hearing loss is the only truly remediable handicap, due to remarkable advances in biomedical engineering and surgical techniques. Auditory neural prostheses help to augment or restore hearing by integration of an external circuitry with the peripheral hearing apparatus and the central circuitry of the brain. A cochlear implant (CI) is a surgically implantable device that helps restore hearing in patients with severe-profound hearing loss, unresponsive to amplification by conventional hearing aids. CIs are electronic devices designed to detect mechanical sound energy and convert it into electrical signals that can be delivered to the cochlear nerve, bypassing the damaged hair cells of the cochlea. The only true prerequisite is an intact auditory nerve. The emphasis is on implantation as early as possible to maximize speech understanding and perception. Bilateral CI has significant benefits which include improved speech perception in noisy environments and improved sound localization. Presently, the indications for CI have widened and these expanded indications for implantation are related to age, additional handicaps, residual hearing, and special etiologies of deafness. Combined electric and acoustic stimulation (EAS) / hybrid device is designed for individuals with binaural low-frequency hearing and severe-to-profound high-frequency hearing loss. Auditory brainstem implantation (ABI) is a safe and effective means of hearing rehabilitation in patients with retrocochlear disorders, such as neurofibromatosis type 2 (NF2) or congenital cochlear nerve aplasia, wherein the cochlear nerve is damaged

  10. Decoding the auditory brain with canonical component analysis

    DEFF Research Database (Denmark)

    de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M

    2018-01-01

    The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) ... higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response.
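    A minimal sketch of the canonical-analysis idea, using scikit-learn's CCA on synthetic data: it finds paired projections of a stimulus representation and a multichannel response that are maximally correlated, which is the sense in which variance unrelated to the stimulus is stripped away. This is an illustration, not the authors' implementation.

```python
# Hedged CCA sketch: recover a shared stimulus-driven component from
# synthetic "stimulus features" and "EEG channels".
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 2000
latent = rng.normal(size=(n, 1))                      # shared stimulus-driven signal

stimulus = np.hstack([latent, rng.normal(size=(n, 4))])            # 5 stimulus features
eeg = np.hstack([0.7 * latent + 0.5 * rng.normal(size=(n, 1)),     # 1 informative channel
                 rng.normal(size=(n, 15))])                        # 15 noise channels

cca = CCA(n_components=1).fit(stimulus, eeg)
u, v = cca.transform(stimulus, eeg)
print("first canonical correlation:", round(np.corrcoef(u[:, 0], v[:, 0])[0, 1], 2))
```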

  11. Segmentation of brain parenchymal regions into gray matter and white matter with Alzheimer's disease

    International Nuclear Information System (INIS)

    Tokunaga, Chiaki; Yoshiura, Takashi; Yamashita, Yasuo; Magome, Taiki; Honda, Hiroshi; Arimura, Hidetaka; Toyofuku, Fukai; Ohki, Masafumi

    2010-01-01

    It is very difficult and time-consuming for neuroradiologists to estimate the degree of cerebral atrophy based on the volumes of cortical regions. The purpose of this study was to develop an automated segmentation of the brain parenchyma into gray and white matter regions in three-dimensional (3D) T1-weighted MR images of patients with Alzheimer's disease (AD). Our proposed method consisted of extraction of the brain parenchymal region based on brain model matching, followed by segmentation of the brain parenchyma into gray and white matter regions based on a fuzzy c-means (FCM) algorithm. We applied the proposed method to whole-brain MR images obtained from 9 cases, including 4 clinically diagnosed AD cases and 5 control cases. The mean volume percentage of the cortical region relative to the brain parenchymal region in AD patients (41.7%) was smaller than that in the control subjects (45.2%) (p=0.000462). (author)
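    The fuzzy c-means step can be illustrated with a toy one-dimensional version that clusters voxel intensities into two classes with soft memberships. The intensity distributions and fuzziness exponent below are invented; a real pipeline operates on full 3D images and handles bias fields and partial-volume effects.

```python
# Toy 1-D fuzzy c-means (FCM) clustering of synthetic T1 intensities into
# "gray matter" and "white matter" classes; for illustration only.
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(n_clusters), size=x.size)    # soft memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)               # weighted class means
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)             # membership update
    return centers, u

rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(0.45, 0.05, 500),   # gray-matter-like voxels
                              rng.normal(0.75, 0.05, 500)])  # white-matter-like voxels
centers, memberships = fuzzy_c_means(intensities)
print("class centers:", np.round(np.sort(centers), 2))
```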

  12. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  13. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Directory of Open Access Journals (Sweden)

    Yuko Hattori

    Full Text Available Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  14. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with the auditory skill areas most commonly addressed in…

  15. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  16. Salicylate-Induced Auditory Perceptual Disorders and Plastic Changes in Nonclassical Auditory Centers in Rats

    Directory of Open Access Journals (Sweden)

    Guang-Di Chen

    2014-01-01

    Full Text Available Previous studies have shown that sodium salicylate (SS) activates not only central auditory structures, but also nonauditory regions associated with emotion and memory. To identify electrophysiological changes in the nonauditory regions, we recorded sound-evoked local field potentials and multiunit discharges from the striatum, amygdala, hippocampus, and cingulate cortex after SS-treatment. The SS-treatment produced behavioral evidence of tinnitus and hyperacusis. Physiologically, the treatment significantly enhanced sound-evoked neural activity in the striatum, amygdala, and hippocampus, but not in the cingulate. The enhanced sound-evoked response could be linked to the hyperacusis-like behavior. Further analysis showed that the enhancement of sound-evoked activity occurred predominantly at the midfrequencies, likely reflecting shifts of neurons towards the midfrequency range after SS-treatment as observed in our previous studies in the auditory cortex and amygdala. The increased number of midfrequency neurons would lead to a relatively higher number of total spontaneous discharges in the midfrequency region, even though the mean discharge rate of each neuron may not increase. The tonotopical overactivity in the midfrequency region in quiet may potentially lead to tonal sensation of midfrequency (the tinnitus). The neural changes in the amygdala and hippocampus may also contribute to the negative effect that patients associate with their tinnitus.

  17. The role of the temporal pole in modulating primitive auditory memory.

    Science.gov (United States)

    Liu, Zhiliang; Wang, Qian; You, Yu; Yin, Peng; Ding, Hu; Bao, Xiaohan; Yang, Pengcheng; Lu, Hao; Gao, Yayue; Li, Liang

    2016-04-21

    Primitive auditory memory (PAM), which is recognized as the early point in the chain of the transient auditory memory system, faithfully maintains raw acoustic fine-structure signals for up to 20-30 milliseconds. The neural mechanisms underlying PAM have not been reported in the literature. Previous anatomical, brain-imaging, and neurophysiological studies have suggested that the temporal pole (TP), part of the parahippocampal region in the transitional area between perirhinal cortex and superior/inferior temporal gyri, is involved in auditory memories. This study investigated whether the TP plays a role in mediating/modulating PAM. The longest interaural interval (the interaural-delay threshold) for detecting a break in interaural correlation (BIC) embedded in interaurally correlated wideband noises was used to indicate the temporal preservation of PAM and examined in both healthy listeners and patients receiving unilateral anterior temporal lobectomy (ATL, centered on the TP) for treating their temporal lobe epilepsy (TLE). The results showed that patients with ATL were still able to detect the BIC even when an interaural interval was introduced, regardless of which ear was the leading one. However, in patient participants, the group-mean interaural-delay threshold for detecting the BIC under the contralateral-ear-leading (relative to the side of ATL) condition was significantly shorter than that under the ipsilateral-ear-leading condition. The results suggest that although the TP is not essential for integrating binaural signals and mediating the PAM, it plays a role in top-down modulating the PAM of raw acoustic fine-structure signals from the contralateral ear. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Auditory evoked potential measurements in elasmobranchs

    Science.gov (United States)

    Casper, Brandon; Mann, David

    2005-04-01

    Auditory evoked potentials (AEP) were first used to examine hearing in elasmobranchs by Corwin and Bullock in the late 1970s and early 1980s, marking the first time AEPs had been measured in fishes. Results of these experiments identified the regions of the ear and brain in which sound is processed, though no actual hearing thresholds were measured. Those initial experiments provided the ground work for future AEP experiments to measure fish hearing abilities in a manner that is much faster and more convenient than classical conditioning. Data will be presented on recent experiments in which AEPs were used to measure the hearing thresholds of two species of elasmobranchs: the nurse shark, Ginglymostoma cirratum, and the yellow stingray, Urobatis jamaicencis. Audiograms were analyzed and compared to previously published audiograms obtained using classical conditioning with results indicating that hearing thresholds were similar for the two methods. These data suggest that AEP testing is a viable option when measuring hearing in elasmobranchs and can increase the speed in which future hearing measurements can be obtained.

  19. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians were found to be superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant

  20. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Formisano, E; Pepino, A; Bracale, M [Department of Electronic Engineering, Biomedical Unit, Universita di Napoli Federico II, Via Claudio 21, 80125 Napoli (Italy); Di Salle, F [Department of Biomorphological and Functional Sciences, Radiological Unit, Universita di Napoli Federico II, Via Claudio 21, 80125 Napoli (Italy); Lanfermann, H; Zanella, F E [Department of Neuroradiology, J.W. Goethe Universität, Frankfurt/M. (Germany)]

    1999-12-31

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors) 17 refs., 4 figs.

  1. Auditory and visual interhemispheric communication in musicians and non-musicians.

    Directory of Open Access Journals (Sweden)

    Rebecca Woelfle

    Full Text Available The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer.

  2. Auditory and visual interhemispheric communication in musicians and non-musicians.

    Science.gov (United States)

    Woelfle, Rebecca; Grahn, Jessica A

    2013-01-01

    The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer.
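
    The crossed-uncrossed difference described in the two records above is a simple summary statistic; a minimal sketch with hypothetical reaction times (values and variable names are illustrative, not the study's data):

      import numpy as np

      # Hypothetical simple reaction times (ms) for one participant.
      crossed_rt = np.array([312, 305, 298, 321, 310])    # stimulus and responding hand on opposite sides
      uncrossed_rt = np.array([301, 296, 290, 315, 300])  # stimulus and responding hand on the same side

      # The crossed-uncrossed difference (CUD) estimates interhemispheric transfer time.
      cud = crossed_rt.mean() - uncrossed_rt.mean()
      print(f"CUD = {cud:.1f} ms")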

  3. Sex differences in the refractory period of the 100 ms auditory evoked magnetic field.

    Science.gov (United States)

    Rojas, D C; Teale, P; Sheeder, J; Reite, M

    1999-11-08

    The 100 ms latency auditory evoked magnetic response (M100) has been implicated in the earliest stage of acoustic memory encoding in the brain. Sex differences in this response have been found in its location within the brain and its functional properties. We recorded the M100 in 25 adults in response to changes in interstimulus interval of an auditory stimulus. Response amplitudes of the M100 were used to compute a measure of the M100 refractory period, which has been proposed to index the decay time constant of echoic memory. This time constant was significantly longer in both hemispheres of the female participants when compared to the male participants. Possible implications of this for behavioral sex differences in human memory performance are discussed.
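
    The abstract relates the M100 refractory period to the decay time constant of echoic memory. One common way to estimate such a constant is to fit a saturating-exponential recovery of response amplitude against interstimulus interval; the sketch below uses this generic model with made-up values and is not the authors' exact procedure.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical M100 amplitudes (fT) at several interstimulus intervals (s).
      isi = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
      amp = np.array([35.0, 55.0, 80.0, 105.0, 118.0, 122.0])

      def recovery(isi, a_max, tau):
          # Saturating-exponential recovery; tau is the putative echoic-memory time constant.
          return a_max * (1.0 - np.exp(-isi / tau))

      (a_max, tau), _ = curve_fit(recovery, isi, amp, p0=(120.0, 2.0))
      print(f"estimated recovery time constant: {tau:.2f} s")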

  4. Noninvasive fMRI investigation of interaural level difference processing in the rat auditory subcortex.

    Directory of Open Access Journals (Sweden)

    Condon Lau

    Full Text Available OBJECTIVE: Interaural level difference (ILD) is the difference in sound pressure level (SPL) between the two ears and is one of the key physical cues used by the auditory system in sound localization. Our current understanding of ILD encoding has come primarily from invasive studies of individual structures, which have implicated subcortical structures such as the cochlear nucleus (CN), superior olivary complex (SOC), lateral lemniscus (LL), and inferior colliculus (IC). Noninvasive brain imaging enables studying ILD processing in multiple structures simultaneously. METHODS: In this study, blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI) is used for the first time to measure changes in the hemodynamic responses in the adult Sprague-Dawley rat subcortex during binaural stimulation with different ILDs. RESULTS AND SIGNIFICANCE: Consistent responses are observed in the CN, SOC, LL, and IC in both hemispheres. Voxel-by-voxel analysis of the change of the response amplitude with ILD indicates statistically significant ILD dependence in the dorsal LL, the IC, and a region containing parts of the SOC and LL. For all three regions, the larger-amplitude response is located in the hemisphere contralateral to the higher-SPL stimulus. These findings are supported by region of interest analysis. fMRI shows that ILD dependence occurs in both hemispheres and at multiple subcortical levels of the auditory system. This study is the first step towards future studies examining subcortical binaural processing and sound localization in animal models of hearing.
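
    A minimal sketch of the ILD cue itself, using a simple RMS-based definition in dB; the study's actual SPL calibration is not given in the abstract, so the function and values below are assumptions for illustration only.

      import numpy as np

      def ild_db(left, right):
          """Interaural level difference in dB (positive when the left ear is louder)."""
          rms_l = np.sqrt(np.mean(left ** 2))
          rms_r = np.sqrt(np.mean(right ** 2))
          return 20.0 * np.log10(rms_l / rms_r)

      rng = np.random.default_rng(0)
      noise = rng.standard_normal(48000)
      print(f"ILD = {ild_db(noise * 2.0, noise):.1f} dB")   # doubling amplitude gives ~ +6 dB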

  5. The Neurophysiology of Auditory Hallucinations – A Historic and Contemporary Review

    Directory of Open Access Journals (Sweden)

    Remko van Lutterveld

    2011-05-01

    Full Text Available Electroencephalography (EEG) and magnetoencephalography (MEG) are two techniques that distinguish themselves from other neuroimaging methodologies through their ability to directly measure brain-related activity and their high temporal resolution. A large body of research has applied these techniques to study auditory hallucinations. Across a variety of approaches, the left superior temporal cortex is consistently reported to be involved in this symptom. Moreover, there is increasing evidence that a failure in corollary discharge, i.e. a neural signal originating in frontal speech areas that indicates to sensory areas that forthcoming thought is self-generated, may underlie the experience of auditory hallucinations.

  6. Regional cerebral blood flow in psychiatry: The resting and activated brains of schizophrenic patients

    International Nuclear Information System (INIS)

    Gur, R.E.

    1984-01-01

    The investigation of regional brain functioning in schizophrenia has been based on behavioral techniques. Although results are sometimes inconsistent, the behavioral observations suggest left hemispheric dysfunction and left hemispheric overreaction. Recent developments in neuroimaging technology make possible major refinements in assessing regional brain function. Both anatomical and physiological information can now be used to study regional brain development in psychiatric disorders. This chapter describes the application of one method - the xenon-133 technique for measuring regional cerebral blood flow (rCBF) - in studying the resting and activated brains of schizophrenic patients.

  7. Auditory training and challenges associated with participation and compliance.

    Science.gov (United States)

    Sweetow, Robert W; Sabes, Jennifer Henderson

    2010-10-01

    When individuals have hearing loss, physiological changes in their brain interact with relearning of sound patterns. Some individuals utilize compensatory strategies that may result in successful hearing aid use. Others, however, are not so fortunate. Modern hearing aids can provide audibility but may not rectify spectral and temporal resolution, susceptibility to noise interference, or degradation of cognitive skills, such as declining auditory memory and slower speed of processing associated with aging. Frequently, these deficits are not identified during a typical "hearing aid evaluation." Aural rehabilitation has long been advocated to enhance communication but has not been considered time or cost-effective. Home-based, interactive adaptive computer therapy programs are available that are designed to engage the adult hearing-impaired listener in the hearing aid fitting process, provide listening strategies, build confidence, and address cognitive changes. Despite the availability of these programs, many patients and professionals are reluctant to engage in and complete therapy. The purposes of this article are to discuss the need for identifying auditory and nonauditory factors that may adversely affect the overall audiological rehabilitation process, to discuss important features that should be incorporated into training, and to examine reasons for the lack of compliance with therapeutic options. Possible solutions to maximizing compliance are explored. Only a small portion of audiologists (fewer than 10%) offer auditory training to patients with hearing impairment, even though auditory training appears to lower the rate of hearing aid returns for credit. Patients to whom auditory training programs are recommended often do not complete the training, however. Compliance for a cohort of home-based auditory therapy trainees was less than 30%. Activities to increase patient compliance to auditory training protocols are proposed. American Academy of Audiology.

  8. Maturation of the auditory system in clinically normal puppies as reflected by the brain stem auditory-evoked potential wave V latency-intensity curve and rarefaction-condensation differential potentials.

    Science.gov (United States)

    Poncelet, L C; Coppens, A G; Meuris, S I; Deltenre, P F

    2000-11-01

    To evaluate auditory maturation in puppies. Ten clinically normal Beagle puppies. Puppies were examined repeatedly from days 11 to 36 after birth (8 measurements). Click-evoked brain stem auditory-evoked potentials (BAEP) were obtained in response to rarefaction and condensation click stimuli from 90 dB normal hearing level to wave V threshold, using steps of 10 dB. Responses were added, providing an equivalent to alternate-polarity clicks, and subtracted, providing the rarefaction-condensation differential potential (RCDP). Steps of 5 dB were used to determine thresholds of RCDP and wave V. The slope of the low-intensity segment of the wave V latency-intensity curve was calculated. The intensity range over which RCDP could not be recorded (i.e., the pre-RCDP range) was calculated by subtracting the threshold of wave V from the threshold of RCDP. RESULTS: The slope of the low-intensity segment of the wave V latency-intensity curve evolved with age, changing from (mean +/- SD) -90.8 +/- 41.6 to -27.8 +/- 4.1 micros/dB. Similar results were obtained from days 23 through 36. The pre-RCDP range diminished as puppies became older, decreasing from 40.0 +/- 7.5 to 20.5 +/- 6.4 dB. Changes in the slope of the latency-intensity curve with age suggest enlargement of the audible range of frequencies toward high frequencies up to the third week after birth. The decrease in the pre-RCDP range may indicate an increase of the audible range of frequencies toward low frequencies. Age-related reference values will assist clinicians in detecting hearing loss in puppies.
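
    The two derived measures in this record are straightforward to compute; a minimal sketch with invented latencies and thresholds (not the study's data), assuming the low-intensity segment is fit with a straight line:

      import numpy as np

      # Hypothetical wave V latencies (microseconds) over the low-intensity segment.
      intensity = np.array([20, 30, 40, 50])           # dB normal hearing level
      latency = np.array([7400, 7100, 6850, 6550])     # microseconds

      # Slope of the low-intensity segment of the latency-intensity curve (microseconds/dB).
      slope, intercept = np.polyfit(intensity, latency, 1)
      print(f"slope = {slope:.1f} microseconds/dB")

      # Pre-RCDP range: intensity span over which wave V is present but RCDP is not.
      wave_v_threshold = 20    # dB, hypothetical
      rcdp_threshold = 45      # dB, hypothetical
      pre_rcdp_range = rcdp_threshold - wave_v_threshold
      print(f"pre-RCDP range = {pre_rcdp_range} dB")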

  9. Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing.

    Science.gov (United States)

    Guéguin, Marie; Le Bouquin-Jeannès, Régine; Faucon, Gérard; Chauvel, Patrick; Liégeois-Chauvel, Catherine

    2007-02-01

    The human auditory cortex includes several interconnected areas. A better understanding of the mechanisms involved in auditory cortical functions requires a detailed knowledge of neuronal connectivity between functional cortical regions. In humans, it is difficult to track neuronal connectivity in vivo. We investigated inter-area connections in vivo in the auditory cortex using a method of directed coherence (DCOH) applied to depth auditory evoked potentials (AEPs). This paper presents simultaneous AEP recordings from the insular gyrus (IG), primary and secondary cortices (Heschl's gyrus and planum temporale), and associative areas (Brodmann area [BA] 22) with multilead intracerebral electrodes in response to sinusoidally modulated white noise in 4 epileptic patients who underwent invasive monitoring with depth electrodes for epilepsy surgery. DCOH allowed estimation of the causality between 2 signals recorded from different cortical sites. The results showed 1) a predominant auditory stream within the primary auditory cortex from the most medial region to the most lateral one, regardless of the modulation frequency, 2) a unidirectional functional connection from the primary to the secondary auditory cortex, 3) a major auditory propagation from the posterior areas to the anterior ones, particularly at 8, 16, and 32 Hz, and 4) a particular role of Heschl's sulcus in dispatching information to the different auditory areas. These findings suggest that cortical processing of auditory information is performed in serial and parallel streams. Our data showed that the auditory propagation could not be attributed to a unidirectional traveling wave but rather to a constant interaction between these areas, which could reflect the large adaptive and plastic capacities of auditory cortex. The role of the IG is discussed.
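
    Directed coherence itself requires a directional (e.g., multivariate autoregressive) model, which is beyond a short sketch. As a simplified, non-directed stand-in, ordinary magnitude-squared coherence between two recorded signals can be computed as below; the signals, sampling rate, and the 8 Hz shared component are invented for illustration and do not reproduce the authors' DCOH analysis.

      import numpy as np
      from scipy.signal import coherence

      fs = 1000  # Hz, assumed sampling rate
      t = np.arange(0, 10, 1 / fs)
      rng = np.random.default_rng(1)

      # Two hypothetical depth-electrode signals sharing an 8 Hz modulation component.
      shared = np.sin(2 * np.pi * 8 * t)
      heschl = shared + 0.5 * rng.standard_normal(t.size)
      planum = shared + 0.5 * rng.standard_normal(t.size)

      f, cxy = coherence(heschl, planum, fs=fs, nperseg=1024)
      print(f"coherence near 8 Hz: {cxy[np.argmin(np.abs(f - 8))]:.2f}")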

  10. Data mining a functional neuroimaging database for functional segregation in brain regions

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup; Balslev, Daniela; Hansen, Lars Kai

    2006-01-01

    We describe a specialized neuroinformatic data mining technique in connection with a meta-analytic functional neuroimaging database: We mine for functional segregation within brain regions by identifying journal articles that report brain activations within the regions and clustering the abstract...

  11. Data mining a functional neuroimaging database for functional segregation in brain regions

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup

    2006-01-01

    We describe a specialized neuroinformatic data mining technique in connection with a meta-analytic functional neuroimaging database: We mine for functional segregation within brain regions by identifying journal articles that report brain activations within the regions and clustering the abstract...

  12. Abnormal brain function in neuromyelitis optica: A fMRI investigation of mPASAT.

    Science.gov (United States)

    Wang, Fei; Liu, Yaou; Li, Jianjun; Sondag, Matthew; Law, Meng; Zee, Chi-Shing; Dong, Huiqing; Li, Kuncheng

    2017-10-01

    Cognitive impairment in patients with neuromyelitis optica (NMO) is debated. The present study examined patterns of brain activation in NMO patients using a pair of task-related fMRI tasks. We studied 20 patients with NMO and 20 control subjects matched for age, gender, education and handedness. All patients with NMO met the 2006 Wingerchuk diagnostic criteria. The fMRI paradigm included an auditory attention monitoring task and a modified version of the Paced Auditory Serial Addition Task (mPASAT). Both tasks were temporally and spatially balanced, with the exception of task difficulty. In the mPASAT, activation regions in control subjects included bilateral superior temporal gyri (BA22), left inferior frontal gyrus (BA45), bilateral inferior parietal lobule (BA7), left cingulate gyrus (BA32), left insula (BA13), and cerebellum. Activation regions in NMO patients included bilateral superior temporal gyri (BA22), left inferior frontal gyrus (BA9), right cingulate gyrus (BA32), right inferior parietal gyrus (BA40), left insula (BA13) and cerebellum. Activation in some dispersed cognition-related regions was greater in the patients. The present study showed altered cerebral activation during the mPASAT in patients with NMO relative to healthy controls. These results may provide further evidence of brain plasticity in patients with NMO. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Acute auditory agnosia as the presenting hearing disorder in MELAS.

    Science.gov (United States)

    Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella

    2008-12-01

    MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.

  14. Decreased Cerebellar-Orbitofrontal Connectivity Correlates with Stuttering Severity: Whole-Brain Functional and Structural Connectivity Associations with Persistent Developmental Stuttering.

    Science.gov (United States)

    Sitek, Kevin R; Cai, Shanqing; Beal, Deryk S; Perkell, Joseph S; Guenther, Frank H; Ghosh, Satrajit S

    2016-01-01

    Persistent developmental stuttering is characterized by speech production disfluency and affects 1% of adults. The degree of impairment varies widely across individuals and the neural mechanisms underlying the disorder and this variability remain poorly understood. Here we elucidate compensatory mechanisms related to this variability in impairment using whole-brain functional and white matter connectivity analyses in persistent developmental stuttering. We found that people who stutter had stronger functional connectivity between cerebellum and thalamus than people with fluent speech, while stutterers with the least severe symptoms had greater functional connectivity between left cerebellum and left orbitofrontal cortex (OFC). Additionally, people who stutter had decreased functional and white matter connectivity among the perisylvian auditory, motor, and speech planning regions compared to typical speakers, but greater functional connectivity between the right basal ganglia and bilateral temporal auditory regions. Structurally, disfluency ratings were negatively correlated with white matter connections to left perisylvian regions and to the brain stem. Overall, we found increased connectivity among subcortical and reward network structures in people who stutter compared to controls. These connections were negatively correlated with stuttering severity, suggesting the involvement of cerebellum and OFC may underlie successful compensatory mechanisms by more fluent stutterers.

  15. Decreased Cerebellar-Orbitofrontal Connectivity Correlates with Stuttering Severity: Whole-Brain Functional and Structural Connectivity Associations with Persistent Developmental Stuttering

    Science.gov (United States)

    Sitek, Kevin R.; Cai, Shanqing; Beal, Deryk S.; Perkell, Joseph S.; Guenther, Frank H.; Ghosh, Satrajit S.

    2016-01-01

    Persistent developmental stuttering is characterized by speech production disfluency and affects 1% of adults. The degree of impairment varies widely across individuals and the neural mechanisms underlying the disorder and this variability remain poorly understood. Here we elucidate compensatory mechanisms related to this variability in impairment using whole-brain functional and white matter connectivity analyses in persistent developmental stuttering. We found that people who stutter had stronger functional connectivity between cerebellum and thalamus than people with fluent speech, while stutterers with the least severe symptoms had greater functional connectivity between left cerebellum and left orbitofrontal cortex (OFC). Additionally, people who stutter had decreased functional and white matter connectivity among the perisylvian auditory, motor, and speech planning regions compared to typical speakers, but greater functional connectivity between the right basal ganglia and bilateral temporal auditory regions. Structurally, disfluency ratings were negatively correlated with white matter connections to left perisylvian regions and to the brain stem. Overall, we found increased connectivity among subcortical and reward network structures in people who stutter compared to controls. These connections were negatively correlated with stuttering severity, suggesting the involvement of cerebellum and OFC may underlie successful compensatory mechanisms by more fluent stutterers. PMID:27199712

  16. Decreased cerebellar-orbitofrontal connectivity correlates with stuttering severity: Whole-brain functional and structural connectivity associations with persistent developmental stuttering

    Directory of Open Access Journals (Sweden)

    Kevin Richard Sitek

    2016-05-01

    Full Text Available Persistent developmental stuttering is characterized by speech production disfluency and affects 1% of adults. The degree of impairment varies widely across individuals and the neural mechanisms underlying the disorder and this variability remain poorly understood. Here, we elucidate compensatory mechanisms related to this variability in impairment using whole-brain functional and white matter connectivity analyses in persistent developmental stuttering. We found that people who stutter had stronger functional connectivity between cerebellum and thalamus than people with fluent speech, while stutterers with the least severe symptoms had greater functional connectivity between left cerebellum and left orbitofrontal cortex. Additionally, people who stutter had decreased functional and white matter connectivity among the perisylvian auditory, motor, and speech planning regions compared to typical speakers, but greater functional connectivity between the right basal ganglia and bilateral temporal auditory regions. Structurally, disfluency ratings were negatively correlated with white matter connections to left perisylvian regions and to the brain stem. Overall, we found increased connectivity among subcortical and reward network structures in people who stutter compared to controls. These connections were negatively correlated with stuttering severity, suggesting the involvement of cerebellum and orbitofrontal cortex may underlie successful compensatory mechanisms by more fluent stutterers.

  17. Neural correlates of auditory recognition memory in the primate dorsal temporal pole

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany

    2013-01-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324

  18. Neural correlates of auditory recognition memory in the primate dorsal temporal pole.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2014-02-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects.

  19. Cell-type specific short-term plasticity at auditory nerve synapses controls feed-forward inhibition in the dorsal cochlear nucleus

    Directory of Open Access Journals (Sweden)

    Miloslav eSedlacek

    2014-07-01

    Full Text Available Feed-forward inhibition represents a powerful mechanism by which control of the timing and fidelity of action potentials in local synaptic circuits of various brain regions is achieved. In the cochlear nucleus, the auditory nerve provides excitation to both principal neurons and inhibitory interneurons. Here, we investigated the synaptic circuit associated with fusiform cells (FCs), principal neurons of the dorsal cochlear nucleus (DCN) that receive excitation from auditory nerve fibers and inhibition from tuberculoventral cells (TVCs) on their basal dendrites in the deep layer of the DCN. Despite the importance of these inputs in regulating fusiform cell firing behavior, the mechanisms determining the balance of excitation and feed-forward inhibition in this circuit are not well understood. Therefore, we examined the timing and plasticity of auditory nerve driven feed-forward inhibition (FFI) onto FCs. We find that in some FCs, excitatory and inhibitory components of feed-forward inhibition had the same stimulation thresholds, indicating they could be triggered by activation of the same fibers. In other FCs, excitation and inhibition exhibit different stimulus thresholds, suggesting FCs and TVCs might be activated by different sets of fibers. In addition, we find that during repetitive activation, synapses formed by the auditory nerve onto TVCs and FCs exhibit distinct modes of short-term plasticity. Feed-forward inhibitory post-synaptic currents (IPSCs) in FCs exhibit short-term depression because of prominent synaptic depression at the auditory nerve-TVC synapse. Depression of this feed-forward inhibitory input causes a shift in the balance of fusiform cell synaptic input towards greater excitation and suggests that fusiform cell spike output will be enhanced by physiological patterns of auditory nerve activity.

  20. Cell-type specific short-term plasticity at auditory nerve synapses controls feed-forward inhibition in the dorsal cochlear nucleus.

    Science.gov (United States)

    Sedlacek, Miloslav; Brenowitz, Stephan D

    2014-01-01

    Feed-forward inhibition (FFI) represents a powerful mechanism by which control of the timing and fidelity of action potentials in local synaptic circuits of various brain regions is achieved. In the cochlear nucleus, the auditory nerve provides excitation to both principal neurons and inhibitory interneurons. Here, we investigated the synaptic circuit associated with fusiform cells (FCs), principal neurons of the dorsal cochlear nucleus (DCN) that receive excitation from auditory nerve fibers and inhibition from tuberculoventral cells (TVCs) on their basal dendrites in the deep layer of DCN. Despite the importance of these inputs in regulating fusiform cell firing behavior, the mechanisms determining the balance of excitation and FFI in this circuit are not well understood. Therefore, we examined the timing and plasticity of auditory nerve driven FFI onto FCs. We find that in some FCs, excitatory and inhibitory components of FFI had the same stimulation thresholds indicating they could be triggered by activation of the same fibers. In other FCs, excitation and inhibition exhibit different stimulus thresholds, suggesting FCs and TVCs might be activated by different sets of fibers. In addition, we find that during repetitive activation, synapses formed by the auditory nerve onto TVCs and FCs exhibit distinct modes of short-term plasticity. Feed-forward inhibitory post-synaptic currents (IPSCs) in FCs exhibit short-term depression because of prominent synaptic depression at the auditory nerve-TVC synapse. Depression of this feedforward inhibitory input causes a shift in the balance of fusiform cell synaptic input towards greater excitation and suggests that fusiform cell spike output will be enhanced by physiological patterns of auditory nerve activity.
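
    The short-term depression of feed-forward IPSCs described above can be illustrated with a generic resource-depletion (Tsodyks-Markram-style) model; the parameters and model choice below are assumptions for illustration, not the authors' measurements or fitted values.

      import numpy as np

      def depressing_train(n_stim=10, rate_hz=100.0, u=0.4, tau_rec=0.5):
          """Relative synaptic response amplitudes across a stimulus train under a
          simple resource-depletion model (illustrative parameters)."""
          dt = 1.0 / rate_hz          # inter-stimulus interval (s)
          x = 1.0                     # fraction of available synaptic resources
          amps = []
          for _ in range(n_stim):
              amps.append(u * x)                              # amplitude ~ released resources
              x -= u * x                                      # deplete after release
              x += (1.0 - x) * (1.0 - np.exp(-dt / tau_rec))  # partial recovery toward 1
          return np.array(amps)

      # Normalized amplitude profile: values below 1 reflect short-term depression.
      print(depressing_train() / depressing_train()[0])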

  1. Brain region specific mitophagy capacity could contribute to selective neuronal vulnerability in Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Zabel Claus

    2011-09-01

    Full Text Available Parkinson's disease (PD) is histologically well defined by its characteristic degeneration of dopaminergic neurons in the substantia nigra pars compacta. Remarkably, divergent PD-related mutations can generate comparable brain region specific pathologies. This indicates that some intrinsic region-specificity respecting differential neuron vulnerability exists, which codetermines the disease progression. To gain insight into the pathomechanism of PD, we investigated protein expression and protein oxidation patterns of three different brain regions in a PD mouse model, the PINK1 knockout mice (PINK1-KO), in comparison to wild type control mice. The dysfunction of PINK1 presumably affects mitochondrial turnover by disturbing mitochondrial autophagic pathways. The three brain regions investigated are the midbrain, which is the location of the substantia nigra; the striatum, the major efferent region of the substantia nigra; and the cerebral cortex, which is more distal to PD pathology. In all three regions, mitochondrial proteins responsible for energy metabolism and membrane potential were significantly altered in the PINK1-KO mice, but with very different region specific accents in terms of up/down-regulation. This suggests that disturbed mitophagy presumably induced by PINK1 knockout has heterogeneous impacts on different brain regions. Specifically, the midbrain tissue seems to be most severely hit by defective mitochondrial turnover, whereas cortex and striatum could compensate for mitophagy nonfunction by feedback stimulation of other catabolic programs. In addition, cerebral cortex tissues showed the mildest level of protein oxidation in both PINK1-KO and wild type mice, indicating either better oxidative protection or less reactive oxygen species (ROS) pressure in this brain region. Ultra-structural histological examination in normal mouse brain revealed higher incidences of mitophagy vacuoles in cerebral cortex than in striatum and substantia

  2. [Correlation of auditory-verbal skills in patients with cochlear implants and their evaluation in positron emission tomography (PET)].

    Science.gov (United States)

    Łukaszewicz, Zuzanna; Soluch, Paweł; Niemczyk, Kazimierz; Lachowska, Magdalena

    2010-06-01

    It was assumed that, in the central nervous system (CNS) of patients above 15 years of age, mechanisms of neuronal change are still possible. Such changes would allow the reconstruction or formation of the natural activation pattern of the brain structures responsible for auditory speech processing. The aim of the study was to observe whether there are dynamic functional changes in the central nervous system and how they correlate with the auditory-verbal skills of the patients. Nine right-handed patients between 15 and 36 years of age were examined, 6 females and 3 males. All of them were treated with cochlear implantation and are in frequent follow-up in the Department of Otolaryngology at the Medical University of Warsaw due to profound sensorineural hearing loss. In the present study the patients were examined within 24 hours after the first fitting of the speech processor of the cochlear implant, and again 1 and 2 years later. The examinations consisted of positron emission tomography (PET) of the brain and audiological tests including speech assessment. In the group of patients, 4 were postlingually deaf and 5 were prelingually deaf. Postlingually deaf patients achieved great improvement in hearing and speech understanding. In their first PET examination, very intense activation of visual cortex V1 and V2 (BA17 and 18) was observed. There was no significant activation in the dominant (left) hemisphere of the brain. In the PET examinations performed 1 and 2 years after cochlear implantation, the V1 and V2 activation was no longer observed; instead, particular regions of the left hemisphere became activated. In prelingually deaf patients, no significant changes in the central nervous system were noticeable in either PET or speech assessment, although their hearing abilities improved. A positive correlation was observed between the level of speech understanding, linguistic skills, and the activation of appropriate areas of the left hemisphere of the brain.

  3. Contrasting effects of vocabulary knowledge on temporal and parietal brain structure across lifespan.

    Science.gov (United States)

    Richardson, Fiona M; Thomas, Michael S C; Filippi, Roberto; Harth, Helen; Price, Cathy J

    2010-05-01

    Using behavioral, structural, and functional imaging techniques, we demonstrate contrasting effects of vocabulary knowledge on temporal and parietal brain structure in 47 healthy volunteers who ranged in age from 7 to 73 years. In the left posterior supramarginal gyrus, vocabulary knowledge was positively correlated with gray matter density in teenagers but not adults. This region was not activated during auditory or visual sentence processing, and activation was unrelated to vocabulary skills. Its gray matter density may reflect the use of an explicit learning strategy that links new words to lexical or conceptual equivalents, as used in formal education and second language acquisition. By contrast, in left posterior temporal regions, gray matter as well as auditory and visual sentence activation correlated with vocabulary knowledge throughout lifespan. We propose that these effects reflect the acquisition of vocabulary through context, when new words are learnt within the context of semantically and syntactically related words.

  4. Influence of ketamine on regional brain glucose use

    International Nuclear Information System (INIS)

    Davis, D.W.; Mans, A.M.; Biebuyck, J.F.; Hawkins, R.A.

    1988-01-01

    The purpose of this study was to determine the effect of different doses of ketamine on cerebral function at the level of individual brain structures as reflected by glucose use. Rats received either 5 or 30 mg/kg ketamine intravenously as a loading dose, followed by an infusion to maintain a steady-state level of the drug. An additional group received 30 mg/kg as a single injection only, and was studied 20 min later, by which time they were recovering consciousness (withdrawal group). Regional brain energy metabolism was evaluated with [6-14C]glucose and quantitative autoradiography during a 5-min experimental period. A subhypnotic, steady-state dose (5 mg/kg) of ketamine caused a stimulation of glucose use in most brain areas, with an average increase of 20%. At the larger steady-state dose (30 mg/kg, which is sufficient to cause anesthesia), there was no significant effect on most brain regions; some sensory nuclei were depressed (inferior colliculus, -29%; cerebellar dentate nucleus, -18%; vestibular nucleus, -16%), but glucose use in the ventral posterior hippocampus was increased by 33%. In contrast, during withdrawal from a 30-mg/kg bolus, there was a stimulation of glucose use throughout the brain (21-78%), at a time when plasma ketamine levels were similar to the levels in the 5 mg/kg group. At each steady-state dose, as well as during withdrawal, ketamine caused a notable stimulation of glucose use by the hippocampus.

  5. [Some electrophysiological and hemodynamic characteristics of auditory selective attention in norm and schizophrenia].

    Science.gov (United States)

    Lebedeva, I S; Akhadov, T A; Petriaĭkin, A V; Kaleda, V G; Barkhatova, A N; Golubev, S A; Rumiantseva, E E; Vdovenko, A M; Fufaeva, E A; Semenova, N A

    2011-01-01

    Six patients in remission after a first episode of juvenile schizophrenia and seven sex- and age-matched mentally healthy subjects were examined with fMRI and ERP methods. The auditory oddball paradigm was applied. Differences in P300 parameters did not reach the level of significance; however, a significantly higher hemodynamic response to target stimuli was found in patients bilaterally in the supramarginal gyrus and in the right medial frontal gyrus, which points to pathology of these brain areas in supporting auditory selective attention.

  6. Binaural beats increase interhemispheric alpha-band coherence between auditory cortices.

    Science.gov (United States)

    Solcà, Marco; Mottaz, Anaïs; Guggisberg, Adrian G

    2016-02-01

    Binaural beats (BBs) are an auditory illusion occurring when two tones of slightly different frequency are presented separately to each ear. BBs have been suggested to alter physiological and cognitive processes through synchronization of the brain hemispheres. To test this, we recorded electroencephalograms (EEG) at rest and while participants listened to BBs or a monaural control condition during which both tones were presented to both ears. We calculated for each condition the interhemispheric coherence, which expressed the synchrony between neural oscillations of both hemispheres. Compared to monaural beats and resting state, BBs enhanced interhemispheric coherence between the auditory cortices. Beat frequencies in the alpha (10 Hz) and theta (4 Hz) frequency range both increased interhemispheric coherence selectively at alpha frequencies. In a second experiment, we evaluated whether this coherence increase has a behavioral aftereffect on binaural listening. No effects were observed in a dichotic digit task performed immediately after BBs presentation. Our results suggest that BBs enhance alpha-band oscillation synchrony between the auditory cortices during auditory stimulation. This effect seems to reflect binaural integration rather than entrainment. Copyright © 2015 Elsevier B.V. All rights reserved.
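
    A minimal sketch of the stimulus class studied above: two pure tones of slightly different frequency, one per ear, whose offset sets the beat rate. The carrier frequency, duration, and sampling rate are illustrative assumptions rather than the study's exact stimulus parameters.

      import numpy as np

      def binaural_beat(f_left=250.0, beat_hz=10.0, dur=5.0, fs=44100):
          """Stereo binaural-beat stimulus: a 10 Hz offset yields an alpha-range beat."""
          t = np.arange(int(fs * dur)) / fs
          left = np.sin(2 * np.pi * f_left * t)
          right = np.sin(2 * np.pi * (f_left + beat_hz) * t)
          return np.stack([left, right], axis=1)   # shape (samples, 2) stereo buffer

      stimulus = binaural_beat(beat_hz=10.0)   # alpha-frequency beat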

  7. Auditory hallucinations: A review of the ERC "VOICE" project.

    Science.gov (United States)

    Hugdahl, Kenneth

    2015-06-22

    In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the "VOICE" ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas in the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions; a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit, or ignore the voices, and an emotional dimension, experienced as the "voices" having primarily a negative, or sinister, emotional tone. I will review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward, and which is not inhibited due to frontal lobe hypo-activation. It is further suggested that this is maintained through abnormal glutamate and possibly gamma-amino-butyric-acid transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucination that goes beyond standard interview questionnaires, by suggesting an iPhone/iPod app.

  8. Prestimulus subsequent memory effects for auditory and visual events.

    Science.gov (United States)

    Otten, Leun J; Quayle, Angela H; Puvaneswaran, Bhamini

    2010-06-01

    It has been assumed that the effective encoding of information into memory primarily depends on neural activity elicited when an event is initially encountered. Recently, it has been shown that memory formation also relies on neural activity just before an event. The precise role of such activity in memory is currently unknown. Here, we address whether prestimulus activity affects the encoding of auditory and visual events, is set up on a trial-by-trial basis, and varies as a function of the type of recognition judgment an item later receives. Electrical brain activity was recorded from the scalps of 24 healthy young adults while they made semantic judgments on randomly intermixed series of visual and auditory words. Each word was preceded by a cue signaling the modality of the upcoming word. Auditory words were preceded by auditory cues and visual words by visual cues. A recognition memory test with remember/know judgments followed after a delay of about 45 min. As observed previously, a negative-going, frontally distributed modulation just before visual word onset predicted later recollection of the word. Crucially, the same effect was found for auditory words and observed on stay as well as switch trials. These findings emphasize the flexibility and general role of prestimulus activity in memory formation, and support a functional interpretation of the activity in terms of semantic preparation. At least with an unpredictable trial sequence, the activity is set up anew on each trial.

  9. Encoding of Sucrose's Palatability in the Nucleus Accumbens Shell and Its Modulation by Exteroceptive Auditory Cues

    Directory of Open Access Journals (Sweden)

    Miguel Villavicencio

    2018-05-01

    Full Text Available Although the palatability of sucrose is the primary reason why it is over-consumed, it is not well understood how palatability is encoded in the nucleus accumbens shell (NAcSh), a brain region involved in reward, feeding, and sensory/motor transformations. Similarly untouched are issues regarding how an external auditory stimulus affects sucrose palatability and, in the NAcSh, the neuronal correlates of this behavior. To address these questions in behaving rats, we investigated how food-related auditory cues modulate sucrose's palatability. The goals were to determine whether NAcSh neuronal responses would track sucrose's palatability (as measured by the increase in hedonically positive oromotor responses, i.e., lick rate) and sucrose concentration, and how the NAcSh processes auditory information. Using brief-access tests, we found that sucrose's palatability was enhanced by exteroceptive auditory cues that signal the start and the end of a reward epoch. With only the start cue, the rejection of water was accelerated and the sucrose/water ratio was enhanced, indicating greater palatability. However, the start cue also fragmented licking patterns and decreased caloric intake. In the presence of both start and stop cues, the animals fed continuously and increased their caloric intake. Analysis of the licking microstructure confirmed that auditory cues (either signaling the start alone or the start/stop) enhanced sucrose's oromotor-palatability responses. Recordings of extracellular single-unit activity identified several distinct populations of NAcSh responses that tracked either the sucrose palatability responses or the sucrose concentrations by increasing or decreasing their activity. Another neural population fired synchronously with licking and exhibited an enhancement in their coherence with increasing sucrose concentrations. The populations of NAcSh Palatability-related and Lick-Inactive neurons were the most important for decoding sucrose's palatability. Only the Lick

  10. Regional apparent diffusion coefficient values in 3rd trimester fetal brain

    International Nuclear Information System (INIS)

    Hoffmann, Chen; Weisz, Boaz; Lipitz, Shlomo; Katorza, Eldad; Yaniv, Gal; Bergman, Dafi; Biegon, Anat

    2014-01-01

    Apparent diffusion coefficient (ADC) values in the developing fetus can be used in the diagnosis and prognosis of prenatal brain pathologies. To this end, we measured regional ADC in a relatively large cohort of normal fetal brains in utero. Diffusion-weighted imaging (DWI) was performed in 48 non-sedated 3rd trimester fetuses with normal structural MR imaging results. ADC was measured in white matter (frontal, parietal, temporal, and occipital lobes), basal ganglia, thalamus, pons, and cerebellum. Regional ADC values were compared by one-way ANOVA with gestational age as covariate. Regression analysis was used to examine gestational age-related changes in regional ADC. Four additional cases of fetal cytomegalovirus (CMV) infection were also examined. Median gestational age was 32 weeks (range, 26-33 weeks). There was a highly significant effect of region on ADC, whereby ADC values were highest in white matter, with significantly lower values in basal ganglia and cerebellum and the lowest values in thalamus and pons. ADC did not significantly change with gestational age in any of the regions tested. In the four cases with fetal CMV infection, ADC values were globally decreased. ADC values in normal fetal brain are relatively stable during the third trimester, show consistent regional variation, and can make an important contribution to the early diagnosis and possibly prognosis of fetal brain pathologies. (orig.)

  11. Regional apparent diffusion coefficient values in 3rd trimester fetal brain

    Energy Technology Data Exchange (ETDEWEB)

    Hoffmann, Chen [Tel Aviv University, Department of Radiology, Sheba Medical Center, Tel Hashomer (affiliated to the Sackler School of Medicine), Tel Aviv (Israel); Sheba Medical Center, Diagnostic Imaging, 52621, Tel Hashomer (Israel); Weisz, Boaz; Lipitz, Shlomo; Katorza, Eldad [Tel Aviv University, Department of Obstetrics and Gynecology, Sheba Medical Center, Tel Hashomer (affiliated to the Sackler School of Medicine), Tel Aviv (Israel); Yaniv, Gal; Bergman, Dafi [Tel Aviv University, Department of Radiology, Sheba Medical Center, Tel Hashomer (affiliated to the Sackler School of Medicine), Tel Aviv (Israel); Biegon, Anat [Stony Brook University School of Medicine, Department of Neurology, Stony Brook, NY (United States)

    2014-07-15

    Apparent diffusion coefficient (ADC) values in the developing fetus can be used in the diagnosis and prognosis of prenatal brain pathologies. To this end, we measured regional ADC in a relatively large cohort of normal fetal brains in utero. Diffusion-weighted imaging (DWI) was performed in 48 non-sedated 3rd trimester fetuses with normal structural MR imaging results. ADC was measured in white matter (frontal, parietal, temporal, and occipital lobes), basal ganglia, thalamus, pons, and cerebellum. Regional ADC values were compared by one-way ANOVA with gestational age as covariate. Regression analysis was used to examine gestational age-related changes in regional ADC. Four additional cases of fetal cytomegalovirus (CMV) infection were also examined. Median gestational age was 32 weeks (range, 26-33 weeks). There was a highly significant effect of region on ADC, whereby ADC values were highest in white matter, with significantly lower values in basal ganglia and cerebellum and the lowest values in thalamus and pons. ADC did not significantly change with gestational age in any of the regions tested. In the four cases with fetal CMV infection, ADC values were globally decreased. ADC values in normal fetal brain are relatively stable during the third trimester, show consistent regional variation, and can make an important contribution to the early diagnosis and possibly prognosis of fetal brain pathologies. (orig.)
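
    The analysis named in both records above (one-way ANOVA on region with gestational age as covariate) corresponds to a standard ANCOVA; a minimal sketch with a small invented long-format table (values, region labels, and column names are assumptions, not the study's data):

      import pandas as pd
      import statsmodels.formula.api as smf
      from statsmodels.stats.anova import anova_lm

      # Hypothetical long-format table: one ADC value per fetus and region.
      df = pd.DataFrame({
          "adc": [1.85, 1.80, 1.35, 1.30, 1.10, 1.12, 1.82, 1.78, 1.33, 1.28, 1.08, 1.11],
          "region": ["frontal_wm", "frontal_wm", "basal_ganglia", "basal_ganglia",
                     "pons", "pons"] * 2,
          "ga_weeks": [28, 29, 28, 29, 28, 29, 32, 33, 32, 33, 32, 33],
      })

      # One-way ANOVA on region with gestational age as covariate (ANCOVA).
      model = smf.ols("adc ~ C(region) + ga_weeks", data=df).fit()
      print(anova_lm(model, typ=2))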

  12. Effects of Temporal Congruity Between Auditory and Visual Stimuli Using Rapid Audio-Visual Serial Presentation.

    Science.gov (United States)

    An, Xingwei; Tang, Jiabei; Liu, Shuang; He, Feng; Qi, Hongzhi; Wan, Baikun; Ming, Dong

    2016-10-01

    Combining visual and auditory stimuli in event-related potential (ERP)-based spellers has gained more attention in recent years. Few of these studies have examined the differences in ERP components and system efficiency caused by shifting the visual and auditory onsets. Here, we aim to study the effect of temporal congruity of auditory and visual stimulus onsets on a bimodal brain-computer interface (BCI) speller. We designed five combined visual and auditory paradigms with different visual-to-auditory delays (-33 to +100 ms). Eleven participants took part in this study. ERPs were acquired and aligned to visual and auditory stimulus onsets, respectively. ERPs at the Fz, Cz, and PO7 channels were examined through statistical analysis of the different conditions for both visual-aligned and audio-aligned ERPs. Based on the visual-aligned ERPs, classification accuracy was also analyzed to assess the effects of the visual-to-auditory delays. The latencies of the ERP components depended mainly on the visual stimulus onset. Auditory stimulus onsets influenced mainly the accuracies of the early components, whereas the visual stimulus onset determined the later component accuracies. The latter, however, played a dominant role in overall classification. This study is important for further studies to achieve better explanations and ultimately determine the way to optimize the bimodal BCI application.
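
    Aligning the same continuous recording to visual versus auditory onsets, given a known visual-to-auditory delay, is a simple epoching operation; the sketch below uses invented data, channel counts, and window limits and is not the authors' processing pipeline.

      import numpy as np

      def extract_epochs(eeg, onsets, fs, tmin=-0.1, tmax=0.6):
          """Cut fixed-length epochs around event onsets (in samples) from a
          channels-by-time EEG array; window limits are illustrative."""
          pre, post = int(-tmin * fs), int(tmax * fs)
          return np.stack([eeg[:, s - pre:s + post] for s in onsets])

      fs = 1000
      rng = np.random.default_rng(0)
      eeg = rng.standard_normal((3, 60 * fs))                  # 3 channels, 60 s of fake EEG
      visual_onsets = np.arange(1 * fs, 55 * fs, fs)           # hypothetical visual onsets

      # A +100 ms visual-to-auditory delay: auditory onsets lag the visual ones.
      delay_ms = 100
      auditory_onsets = visual_onsets + int(fs * delay_ms / 1000)

      visual_locked = extract_epochs(eeg, visual_onsets, fs)     # visual-aligned ERPs
      auditory_locked = extract_epochs(eeg, auditory_onsets, fs) # audio-aligned ERPs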

  13. Delayed Mismatch Field Latencies in Autism Spectrum Disorder with Abnormal Auditory Sensitivity: A Magnetoencephalographic Study.

    Science.gov (United States)

    Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Sugata, Hisato; Hanaie, Ryuzo; Nagatani, Fumiyo; Yamamoto, Tomoka; Tachibana, Masaya; Tominaga, Koji; Hirata, Masayuki; Mohri, Ikuko; Taniike, Masako

    2017-01-01

    Although abnormal auditory sensitivity is the most common sensory impairment associated with autism spectrum disorder (ASD), the neurophysiological mechanisms remain unknown. In previous studies, we reported that this abnormal sensitivity in patients with ASD is associated with delayed and prolonged responses in the auditory cortex. In the present study, we investigated alterations in residual M100 and MMFs in children with ASD who experience abnormal auditory sensitivity. We used magnetoencephalography (MEG) to measure MMF elicited by an auditory oddball paradigm (standard tones: 300 Hz, deviant tones: 700 Hz) in 20 boys with ASD (11 with abnormal auditory sensitivity: mean age, 9.62 ± 1.82 years, 9 without: mean age, 9.07 ± 1.31 years) and 13 typically developing boys (mean age, 9.45 ± 1.51 years). We found that temporal and frontal residual M100/MMF latencies were significantly longer only in children with ASD who have abnormal auditory sensitivity. In addition, prolonged residual M100/MMF latencies were correlated with the severity of abnormal auditory sensitivity in temporal and frontal areas of both hemispheres. Therefore, our findings suggest that children with ASD and abnormal auditory sensitivity may have atypical neural networks in the primary auditory area, as well as in brain areas associated with attention switching and inhibitory control processing. This is the first report of an MEG study demonstrating altered MMFs to an auditory oddball paradigm in patients with ASD and abnormal auditory sensitivity. These findings contribute to knowledge of the mechanisms for abnormal auditory sensitivity in ASD, and may therefore facilitate development of novel clinical interventions.
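
    The oddball paradigm named above (standard 300 Hz, deviant 700 Hz) can be represented as a randomized tone sequence; the trial count and deviant probability below are assumptions, only the two tone frequencies come from the abstract.

      import numpy as np

      def oddball_sequence(n_trials=200, p_deviant=0.15, standard_hz=300, deviant_hz=700, seed=0):
          """Randomized tone frequencies for an auditory oddball paradigm."""
          rng = np.random.default_rng(seed)
          is_deviant = rng.random(n_trials) < p_deviant
          return np.where(is_deviant, deviant_hz, standard_hz)

      print(oddball_sequence()[:20])   # first 20 trials of the sequence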

  14. Subcortical pathways: Towards a better understanding of auditory disorders.

    Science.gov (United States)

    Felix, Richard A; Gourévitch, Boris; Portfors, Christine V

    2018-05-01

    Hearing loss is a significant problem that affects at least 15% of the population. This percentage, however, is likely significantly higher because of a variety of auditory disorders that are not identifiable through traditional tests of peripheral hearing ability. In these disorders, individuals have difficulty understanding speech, particularly in noisy environments, even though the sounds are loud enough to hear. The underlying mechanisms leading to such deficits are not well understood. To enable the development of suitable treatments to alleviate or prevent such disorders, the affected processing pathways must be identified. Historically, mechanisms underlying speech processing have been thought to be a property of the auditory cortex and thus the study of auditory disorders has largely focused on cortical impairments and/or cognitive processes. As we review here, however, there is strong evidence to suggest that, in fact, deficits in subcortical pathways play a significant role in auditory disorders. In this review, we highlight the role of the auditory brainstem and midbrain in processing complex sounds and discuss how deficits in these regions may contribute to auditory dysfunction. We discuss current research with animal models of human hearing and then consider human studies that implicate impairments in subcortical processing that may contribute to auditory disorders. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    OpenAIRE

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user intentions are decoded from the brainwaves in realti...

  16. Effects of background music on objective and subjective performance measures in an auditory BCI

    Directory of Open Access Journals (Sweden)

    Sijie Zhou

    2016-10-01

    Full Text Available Several studies have explored brain-computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.

  17. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Tae, Woo Suk [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Yakunina, Natalia; Nam, Eui-Cheol [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Otolaryngology, School of Medicine, Chuncheon, Kangwon-do (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kim, Sam Soo [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Radiology, School of Medicine, Chuncheon (Korea, Republic of)

    2014-07-15

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)
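
    As a rough sketch of how the reported gray-matter/white-matter timing difference could be estimated, the code below cross-correlates two ROI time series at lags that are multiples of the 8-s TR; the run structure and time series are fabricated placeholders, not the study's data.

        # Sketch: estimate the lag between white-matter and gray-matter ROI responses
        # by cross-correlating their block-design time series (TR and data assumed).
        import numpy as np

        TR = 8.0                                   # sparse-sampling repetition time (s)
        n_vols = 72                                # assumed: 9 runs x 8 volumes
        rng = np.random.default_rng(2)

        block = np.tile(np.repeat([1.0, 0.0], 4), n_vols // 8)         # sound-on/off blocks
        gm_ts = block + 0.3 * rng.standard_normal(n_vols)              # gray-matter ROI
        wm_ts = np.roll(block, 1) + 0.3 * rng.standard_normal(n_vols)  # white matter, ~1 TR later

        def lagged_corr(x, y, lag):
            """Correlation between x[t] and y[t + lag]."""
            if lag >= 0:
                a, b = x[:len(x) - lag], y[lag:]
            else:
                a, b = x[-lag:], y[:lag]
            return np.corrcoef(a, b)[0, 1]

        lags = list(range(-3, 4))
        corrs = [lagged_corr(gm_ts, wm_ts, lag) for lag in lags]
        best_lag_s = lags[int(np.argmax(corrs))] * TR
        print(f"estimated WM delay relative to GM: {best_lag_s:.0f} s")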

  18. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    Science.gov (United States)

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Tae, Woo Suk; Yakunina, Natalia; Nam, Eui-Cheol; Kim, Tae Su; Kim, Sam Soo

    2014-01-01

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)

  20. Age- and brain region-dependent α-synuclein oligomerization is attributed to alterations in intrinsic enzymes regulating α-synuclein phosphorylation in aging monkey brains.

    Science.gov (United States)

    Chen, Min; Yang, Weiwei; Li, Xin; Li, Xuran; Wang, Peng; Yue, Feng; Yang, Hui; Chan, Piu; Yu, Shun

    2016-02-23

    We previously reported that the levels of α-syn oligomers, which play pivotal pathogenic roles in age-related Parkinson's disease (PD) and dementia with Lewy bodies, increase heterogeneously in the aging brain. Here, we show that exogenous α-syn incubated with brain extracts from older cynomolgus monkeys and in Lewy body pathology (LBP)-susceptible brain regions (striatum and hippocampus) forms higher amounts of phosphorylated and oligomeric α-syn than in extracts from younger monkeys and LBP-insusceptible brain regions (cerebellum and occipital cortex). The increased α-syn phosphorylation and oligomerization in the brain extracts from older monkeys and in LBP-susceptible brain regions were associated with higher levels of polo-like kinase 2 (PLK2), an enzyme promoting α-syn phosphorylation, and lower activity of protein phosphatase 2A (PP2A), an enzyme inhibiting α-syn phosphorylation, in these brain extracts. Further, the extent of the age- and brain region-dependent increase in α-syn phosphorylation and oligomerization was reduced by inhibition of PLK2 and activation of PP2A. Inversely, phosphorylated α-syn oligomers reduced the activity of PP2A and showed potent cytotoxicity. In addition, the activity of glucocerebrosidase (GCase) and the levels of ceramide, a product of GCase shown to activate PP2A, were lower in brain extracts from older monkeys and in LBP-susceptible brain regions. Our results suggest a role for altered intrinsic metabolic enzymes in the age- and brain region-dependent α-syn oligomerization in aging brains.

  1. Brain correlates of automatic visual change detection.

    Science.gov (United States)

    Cléry, H; Andersson, F; Fonlupt, P; Gomot, M

    2013-07-15

    A number of studies support the presence of visual automatic detection of change, but little is known about the brain generators involved in such processing and about the modulation of brain activity according to the salience of the stimulus. The study presented here was designed to locate the brain activity elicited by unattended visual deviant and novel stimuli using fMRI. Seventeen adult participants were presented with a passive visual oddball sequence while performing a concurrent visual task. Variations in BOLD signal were observed in the modality-specific sensory cortex, but also in non-specific areas involved in preattentional processing of changing events. A degree-of-deviance effect was observed, since novel stimuli elicited more activity in the sensory occipital regions and at the medial frontal site than small changes. These findings could be compared to those obtained in the auditory modality and might suggest a "general" change detection process operating in several sensory modalities. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    Science.gov (United States)

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation (acoustic frequency) might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although
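
    A hedged sketch of how a best-frequency (tonotopic) map and the concordance between attention-driven and sensory-driven maps might be computed from per-band response estimates; the frequency bands, voxel count, and response values below are assumptions, not the study's data or methods.

        # Sketch: assign each voxel the frequency band giving its largest response,
        # then compare attention-driven and sensory-driven maps (placeholder data).
        import numpy as np
        from scipy.stats import spearmanr

        bands_hz = np.array([200, 400, 800, 1600, 3200, 6400])  # assumed bands
        n_voxels = 5000
        rng = np.random.default_rng(3)

        betas_sensory = rng.standard_normal((n_voxels, bands_hz.size))
        betas_attention = betas_sensory + 0.5 * rng.standard_normal((n_voxels, bands_hz.size))

        best_freq_sensory = bands_hz[np.argmax(betas_sensory, axis=1)]
        best_freq_attention = bands_hz[np.argmax(betas_attention, axis=1)]

        # Concordance: rank correlation of the two best-frequency maps across voxels.
        rho, p = spearmanr(best_freq_sensory, best_freq_attention)
        print(f"map concordance: rho = {rho:.2f}")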

  3. Large-Scale Brain Networks Supporting Divided Attention across Spatial Locations and Sensory Modalities.

    Science.gov (United States)

    Santangelo, Valerio

    2018-01-01

    Higher-order cognitive processes were shown to rely on the interplay between large-scale neural networks. However, the brain networks involved in the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of the lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality highlighted a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlighted a dissociation among brain networks
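
    As an illustration only, the sketch below runs a pairwise Granger-causality test between two simulated component time courses using statsmodels; the component names, lag order, and data are assumptions, and the original analysis may differ in implementation.

        # Sketch: does a prefrontal component time course Granger-cause a parietal one?
        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(4)
        n_scans = 300
        prefrontal = rng.standard_normal(n_scans)
        parietal = np.zeros(n_scans)
        for t in range(1, n_scans):               # parietal driven by lagged prefrontal
            parietal[t] = 0.5 * prefrontal[t - 1] + 0.5 * rng.standard_normal()

        # Column order is [effect, cause]: the test asks whether the second column
        # improves prediction of the first beyond its own past.
        res = grangercausalitytests(np.column_stack([parietal, prefrontal]), maxlag=2)
        f_stat, p_value = res[1][0]["ssr_ftest"][:2]
        print(f"prefrontal -> parietal: F = {f_stat:.2f}, p = {p_value:.3g}")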

  4. Large-Scale Brain Networks Supporting Divided Attention across Spatial Locations and Sensory Modalities

    Directory of Open Access Journals (Sweden)

    Valerio Santangelo

    2018-02-01

    Full Text Available Higher-order cognitive processes were shown to rely on the interplay between large-scale neural networks. However, the brain networks involved in the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of the lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality highlighted a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlighted a dissociation among

  5. Modulation of electric brain responses evoked by pitch deviants through transcranial direct current stimulation.

    Science.gov (United States)

    Royal, Isabelle; Zendel, Benjamin Rich; Desjardins, Marie-Ève; Robitaille, Nicolas; Peretz, Isabelle

    2018-01-31

    Congenital amusia is a neurodevelopmental disorder, characterized by a difficulty detecting pitch deviation that is related to abnormal electrical brain responses. Abnormalities found along the right fronto-temporal pathway between the inferior frontal gyrus (IFG) and the auditory cortex (AC) are the likely neural mechanism responsible for amusia. To investigate the causal role of these regions during the detection of pitch deviants, we applied cathodal (inhibitory) transcranial direct current stimulation (tDCS) over right frontal and right temporal regions during separate testing sessions. We recorded participants' electrical brain activity (EEG) before and after tDCS stimulation while they performed a pitch change detection task. Relative to a sham condition, there was a decrease in P3 amplitude after cathodal stimulation over both frontal and temporal regions compared to pre-stimulation baseline. This decrease was associated with small pitch deviations (6.25 cents), but not large pitch deviations (200 cents). Overall, this demonstrates that using tDCS to disrupt regions around the IFG and AC can induce temporary changes in evoked brain activity when processing pitch deviants. These electrophysiological changes are similar to those observed in amusia and provide causal support for the connection between P3 and fronto-temporal brain regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Neuropeptide processing in regional brain slices: Effect of conformation and sequence

    Energy Technology Data Exchange (ETDEWEB)

    Li, Z.W.; Bijl, W.A.; van Nispen, J.W.; Brendel, K.; Davis, T.P. (Univ. of Arizona, Tucson (USA))

    1990-05-01

    The central enzymatic stability of des-enkephalin-gamma-endorphin and its synthetic analogs (cycloN alpha 6, C delta 11)beta-endorphin-(6-17) and (Pro7, Lys(Ac)9)-beta-endorphin(6-17) was studied in vitro using a newly developed, regionally dissected rat brain slice, time course incubation procedure. Tissue slice viability was estimated as the ability of the brain slice to take up or release gamma-(3H)aminobutyric acid after high K+ stimulation. Results demonstrated stability of uptake/release up to 5 hr of incubation, suggesting tissue viability over this period. The estimated half-lives of the peptides, based on the results obtained in our incubation protocol, suggest that the peptides studied are metabolized at different rates in the individual brain regions tested. A good correlation exists between the high enzyme activity of neutral endopeptidase and the rapid degradation of des-enkephalin-gamma-endorphin and (cycloN alpha 6, C delta 11)beta-endorphin-(6-17) in caudate putamen. Proline substitution combined with lysine acetylation appears to improve resistance to enzymatic metabolism in caudate putamen and hypothalamus. However, cyclization of des-enkephalin-gamma-endorphin forming an amide bond between the alpha-NH2 of the N-terminal threonine and the gamma-COOH of glutamic acid did not improve peptide stability in any brain region tested. The present study has shown that the brain slice technique is a valid and unique approach to study neuropeptide metabolism in small, discrete regions of rat brain where peptides, peptidases and receptors are colocalized, and that specific structural modifications can improve peptide stability.

  7. Big Cat Coalitions: A Comparative Analysis of Regional Brain Volumes in Felidae.

    Science.gov (United States)

    Sakai, Sharleen T; Arsznov, Bradley M; Hristova, Ani E; Yoon, Elise J; Lundrigan, Barbara L

    2016-01-01

    Broad-based species comparisons across mammalian orders suggest a number of factors that might influence the evolution of large brains. However, the relationship between these factors and total and regional brain size remains unclear. This study investigated the relationship between relative brain size and regional brain volumes and sociality in 13 felid species in hopes of revealing relationships that are not detected in more inclusive comparative studies. In addition, a more detailed analysis was conducted of four focal species: lions ( Panthera leo ), leopards ( Panthera pardus ), cougars ( Puma concolor ), and cheetahs ( Acinonyx jubatus ). These species differ markedly in sociality and behavioral flexibility, factors hypothesized to contribute to increased relative brain size and/or frontal cortex size. Lions are the only truly social species, living in prides. Although cheetahs are largely solitary, males often form small groups. Both leopards and cougars are solitary. Of the four species, leopards exhibit the most behavioral flexibility, readily adapting to changing circumstances. Regional brain volumes were analyzed using computed tomography. Skulls ( n = 75) were scanned to create three-dimensional virtual endocasts, and regional brain volumes were measured using either sulcal or bony landmarks obtained from the endocasts or skulls. Phylogenetic least squares regression analyses found that sociality does not correspond with larger relative brain size in these species. However, the sociality/solitary variable significantly predicted anterior cerebrum (AC) volume, a region that includes frontal cortex. This latter finding is despite the fact that the two social species in our sample, lions and cheetahs, possess the largest and smallest relative AC volumes, respectively. Additionally, an ANOVA comparing regional brain volumes in four focal species revealed that lions and leopards, while not significantly different from one another, have relatively larger AC

  8. Big Cat Coalitions: A comparative analysis of regional brain volumes in Felidae

    Directory of Open Access Journals (Sweden)

    Sharleen T Sakai

    2016-10-01

    Full Text Available Broad-based species comparisons across mammalian orders suggest a number of factors that might influence the evolution of large brains. However, the relationship between these factors and total and regional brain size remains unclear. This study investigated the relationship between relative brain size and regional brain volumes and sociality in 13 felid species in hopes of revealing relationships that are not detected in more inclusive comparative studies. In addition, a more detailed analysis was conducted of 4 focal species: lions (Panthera leo), leopards (Panthera pardus), cougars (Puma concolor), and cheetahs (Acinonyx jubatus). These species differ markedly in sociality and behavioral flexibility, factors hypothesized to contribute to increased relative brain size and/or frontal cortex size. Lions are the only truly social species, living in prides. Although cheetahs are largely solitary, males often form small groups. Both leopards and cougars are solitary. Of the four species, leopards exhibit the most behavioral flexibility, readily adapting to changing circumstances. Regional brain volumes were analyzed using computed tomography (CT). Skulls (n=75) were scanned to create three-dimensional virtual endocasts, and regional brain volumes were measured using either sulcal or bony landmarks obtained from the endocasts or skulls. Phylogenetic least squares (PGLS) regression analyses found that sociality does not correspond with larger relative brain size in these species. However, the sociality/solitary variable significantly predicted anterior cerebrum (AC) volume, a region that includes frontal cortex. This latter finding is despite the fact that the two social species in our sample, lions and cheetahs, possess the largest and smallest relative AC volumes, respectively. Additionally, an ANOVA comparing regional brain volumes in 4 focal species revealed that lions and leopards, while not significantly different from one another, have relatively

  9. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory.

    Science.gov (United States)

    Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko

    2014-01-01

    In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.
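
    A minimal sketch of one common way to quantify the theta synchronization described above, the across-trial phase-locking value (PLV) between a frontal and a parietal channel; the filter settings, sampling rate, and data are placeholders, and this is not the authors' analysis code.

        # Sketch: fronto-parietal theta (4-8 Hz) phase-locking value across trials.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 500                                   # assumed sampling rate (Hz)
        n_trials, n_samples = 60, 2 * fs
        rng = np.random.default_rng(5)
        frontal = rng.standard_normal((n_trials, n_samples))    # placeholder epochs
        parietal = rng.standard_normal((n_trials, n_samples))

        b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
        phase_f = np.angle(hilbert(filtfilt(b, a, frontal, axis=1), axis=1))
        phase_p = np.angle(hilbert(filtfilt(b, a, parietal, axis=1), axis=1))

        # PLV: magnitude of the mean phase-difference vector across trials.
        plv = np.abs(np.mean(np.exp(1j * (phase_f - phase_p)), axis=0))
        print(f"mean theta PLV over the epoch: {plv.mean():.2f}")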

  10. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory

    Directory of Open Access Journals (Sweden)

    Masahiro eKawasaki

    2014-03-01

    Full Text Available In humans, theta phase (4–8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.

  11. Bioacoustic Signal Classification in Cat Auditory Cortex

    Science.gov (United States)

    1994-01-01

    [Abstract not recoverable: the record text consists of OCR fragments of the report's reference list, including Winer's anatomy of layer IV in cat primary auditory cortex and its projections to the medial geniculate body, and studies of language acquisition in apes (e.g., Patterson's gorilla-language work in Brain and Language, 1978).]

  12. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  13. A neural network model of normal and abnormal auditory information processing.

    Science.gov (United States)

    Du, X; Jansen, B H

    2011-08-01

    The ability of the brain to attenuate the response to irrelevant sensory stimulation is referred to as sensory gating. A gating deficiency has been reported in schizophrenia. To study the neural mechanisms underlying sensory gating, a neuroanatomically inspired model of auditory information processing has been developed. The mathematical model consists of lumped parameter modules representing the thalamus (TH), the thalamic reticular nucleus (TRN), auditory cortex (AC), and prefrontal cortex (PC). It was found that the membrane potential of the pyramidal cells in the PC module replicated auditory evoked potentials, recorded from the scalp of healthy individuals, in response to pure tones. Also, the model produced substantial attenuation of the response to the second of a pair of identical stimuli, just as seen in actual human experiments. We also tested the viewpoint that schizophrenia is associated with a deficit in prefrontal dopamine (DA) activity, which would lower the excitatory and inhibitory feedback gains in the AC and PC modules. Lowering these gains by less than 10% resulted in model behavior resembling the brain activity seen in schizophrenia patients, and replicated the reported gating deficits. The model suggests that the TRN plays a critical role in sensory gating, with the smaller response to a second tone arising from a reduction in inhibition of TH by the TRN. Copyright © 2011 Elsevier Ltd. All rights reserved.
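
    For orientation, the sketch below integrates a single generic lumped-parameter (Jansen-Rit-type) cortical module of the kind such models build on; it is not the authors' four-module TH/TRN/AC/PC circuit, and the parameters are standard textbook values rather than those of the study.

        # Sketch: one Jansen-Rit-type neural mass module producing a pseudo-EEG trace.
        import numpy as np

        A, B = 3.25, 22.0          # excitatory / inhibitory synaptic gains (mV)
        a, b = 100.0, 50.0         # inverse synaptic time constants (1/s)
        C = 135.0
        C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
        e0, v0, r = 2.5, 6.0, 0.56

        def sigm(v):
            """Population firing rate as a sigmoid of mean membrane potential."""
            return 2 * e0 / (1 + np.exp(r * (v0 - v)))

        dt, T = 1e-4, 2.0
        n = int(T / dt)
        y = np.zeros(6)            # y0..y2 and their derivatives y3..y5
        eeg = np.zeros(n)
        rng = np.random.default_rng(6)

        for i in range(n):
            p = 120 + 30 * rng.standard_normal()   # extrinsic (e.g. thalamic) input
            y0, y1, y2, y3, y4, y5 = y
            dy = np.array([
                y3, y4, y5,
                A * a * sigm(y1 - y2) - 2 * a * y3 - a**2 * y0,
                A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a**2 * y1,
                B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b**2 * y2,
            ])
            y = y + dt * dy                        # forward-Euler integration step
            eeg[i] = y1 - y2                       # pyramidal-cell membrane potential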

  14. Assessment of auditory cortical function in cochlear implant patients using 15O PET

    International Nuclear Information System (INIS)

    Young, J.P.; O'Sullivan, B.T.; Gibson, W.P.; Sefton, A.E.; Mitchell, T.E.; Sanli, H.; Cervantes, R.; Withall, A.; Royal Prince Alfred Hospital, Sydney,

    1998-01-01

    Full text: Cochlear implantation has been an extraordinarily successful method of restoring hearing and the potential for full language development in pre-lingually and post-lingually deaf individuals (Gibson 1996). Post-lingually deaf patients, who develop their hearing loss later in life, respond best to cochlear implantation within the first few years of their deafness, but are less responsive to implantation after several years of deafness (Gibson 1996). In pre-lingually deaf children, cochlear implantation is most effective in allowing the full development of language skills when performed within a critical period, in the first 8 years of life. These clinical observations suggest considerable neural plasticity of the human auditory cortex in acquiring and retaining language skills (Gibson 1996, Buchwald 1990). Currently, electrocochleography is used to determine the integrity of the auditory pathways to the auditory cortex. However, the functional integrity of the auditory cortex cannot be determined by this method. We have defined the extent of activation of the auditory cortex and auditory association cortex in 6 normal controls and 6 cochlear implant patients using 15O PET functional brain imaging methods. Preliminary results have indicated the potential clinical utility of 15O PET cortical mapping in the pre-surgical assessment and post-surgical follow-up of cochlear implant patients. Copyright (1998) Australian Neuroscience Society

  15. Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI

    Directory of Open Access Journals (Sweden)

    Martijn eSchreuder

    2011-10-01

    Full Text Available Representing an intuitive spelling interface for Brain-Computer Interfaces (BCI) in the auditory domain is not straightforward. In consequence, all existing approaches based on event-related potentials (ERP) rely at least partially on a visual representation of the interface. This online study introduces an auditory spelling interface that eliminates the necessity for such a visualization. In up to two sessions, a group of healthy subjects (N=21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multiclass Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 characters/minute (7.55 bits/minute) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). For the first time, the presented work shows that an auditory BCI can reach performances similar to state-of-the-art visual BCIs based on covert attention. These results represent an important step towards a purely auditory BCI.
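
    The bits/minute figures quoted above are of the kind usually obtained with the Wolpaw information-transfer-rate formula; the sketch below shows that calculation with assumed values for class count, accuracy, and selection time, and does not reproduce the study's numbers or its dynamic stopping rule.

        # Sketch: Wolpaw information transfer rate for an ERP speller (assumed inputs).
        import math

        def wolpaw_itr(n_classes, accuracy, seconds_per_selection):
            """Bits per minute for an n_classes speller at the given selection accuracy."""
            p, n = accuracy, n_classes
            bits = math.log2(n)
            if 0 < p < 1:
                bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
            return bits * 60.0 / seconds_per_selection

        print(f"{wolpaw_itr(n_classes=6, accuracy=0.9, seconds_per_selection=10):.2f} bits/min")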

  16. Functional-structural reorganisation of the neuronal network for auditory perception in subjects with unilateral hearing loss: Review of neuroimaging studies.

    Science.gov (United States)

    Heggdal, Peder O Laugen; Brännström, Jonas; Aarstad, Hans Jørgen; Vassbotn, Flemming S; Specht, Karsten

    2016-02-01

    This paper aims to provide a review of studies using neuroimaging to measure functional-structural reorganisation of the neuronal network for auditory perception after unilateral hearing loss. A literature search was performed in PubMed. Search criteria were peer-reviewed original research papers in English completed by 11 March 2015. Twelve studies were found to use neuroimaging in subjects with unilateral hearing loss. An additional five papers not identified by the literature search were provided by a reviewer. Thus, a total of 17 studies were included in the review. Four different neuroimaging methods were used in these studies: functional magnetic resonance imaging (fMRI) (n = 11), diffusion tensor imaging (DTI) (n = 4), T1/T2 volumetric images (n = 2), and magnetic resonance spectroscopy (MRS) (n = 1). One study utilized two imaging methods (fMRI and T1 volumetric images). Neuroimaging techniques could provide valuable information regarding the effects of unilateral hearing loss on both auditory and non-auditory performance. fMRI studies showing a bilateral BOLD response in patients with unilateral hearing loss have not yet been followed by DTI studies confirming their microstructural correlates. In addition, the review shows that an auditory modality-specific deficit could affect multi-modal brain regions and their connections. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Brain noise is task dependent and region specific.

    Science.gov (United States)

    Misić, Bratislav; Mills, Travis; Taylor, Margot J; McIntosh, Anthony R

    2010-11-01

    The emerging organization of anatomical and functional connections during human brain development is thought to facilitate global integration of information. Recent empirical and computational studies have shown that this enhanced capacity for information processing enables a diversified dynamic repertoire that manifests in neural activity as irregularity and noise. However, transient functional networks unfold over multiple time scales, and the embedding of a particular region depends not only on development, but also on the manner in which sensory and cognitive systems are engaged. Here we show that noise is a facet of neural activity that is also sensitive to the task context and is highly region specific. Children (6-16 yr) and adults (20-41 yr) performed a one-back face recognition task with inverted and upright faces. Neuromagnetic activity was estimated at several hundred sources in the brain by applying a beamforming technique to the magnetoencephalogram (MEG). During development, neural activity became more variable across the whole brain, with the most robust increases in medial parietal regions, such as the precuneus and posterior cingulate cortex. For young children and adults, activity evoked by upright faces was more variable and noisy compared with inverted faces, and this effect was reliable only in the right fusiform gyrus. These results are consistent with the notion that upright faces engender a variety of integrative neural computations, such as the relations among facial features and their holistic constitution. This study shows that transient changes in functional integration modulated by task demand are evident in the variability of regional neural activity.

  18. Resting-state brain activity in adult males who stutter.

    Directory of Open Access Journals (Sweden)

    Yun Xuan

    Full Text Available Although developmental stuttering has been extensively studied with structural and task-based functional magnetic resonance imaging (fMRI), few studies have focused on resting-state brain activity in this disorder. We investigated resting-state brain activity of stuttering subjects by analyzing the amplitude of low-frequency fluctuation (ALFF), region of interest (ROI)-based functional connectivity (FC) and independent component analysis (ICA)-based FC. Forty-four adult males with developmental stuttering and 46 age-matched fluent male controls were scanned using resting-state fMRI. ALFF, ROI-based FCs and ICA-based FCs were compared between male stuttering subjects and fluent controls in a voxel-wise manner. Compared with fluent controls, stuttering subjects showed increased ALFF in left brain areas related to speech motor and auditory functions and bilateral prefrontal cortices related to cognitive control. However, stuttering subjects showed decreased ALFF in the left posterior language reception area and bilateral non-speech motor areas. ROI-based FC analysis revealed decreased FC between the posterior language area involved in the perception and decoding of sensory information and the anterior brain area involved in the initiation of speech motor function, as well as increased FC within anterior or posterior speech- and language-associated areas and between the prefrontal areas and default-mode network (DMN) in stuttering subjects. ICA showed that stuttering subjects had decreased FC in the DMN and increased FC in the sensorimotor network. Our findings support the concept that stuttering subjects have deficits in multiple functional systems (motor, language, auditory and DMN) and in the connections between them.
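
    A hedged sketch of the ALFF computation referred to above: the amplitude spectrum of each voxel's resting-state time series is averaged over the 0.01-0.08 Hz band; the TR, array sizes, and data are placeholders, not the study's values or pipeline.

        # Sketch: voxel-wise ALFF as the mean spectral amplitude in 0.01-0.08 Hz.
        import numpy as np

        TR = 2.0                                       # assumed repetition time (s)
        n_vols, n_voxels = 240, 1000
        rng = np.random.default_rng(7)
        ts = rng.standard_normal((n_vols, n_voxels))   # time x voxels, after preprocessing

        freqs = np.fft.rfftfreq(n_vols, d=TR)
        amplitude = np.abs(np.fft.rfft(ts - ts.mean(axis=0), axis=0)) / n_vols

        low_freq = (freqs >= 0.01) & (freqs <= 0.08)
        alff = amplitude[low_freq].mean(axis=0)        # one ALFF value per voxel

        # Standardize across voxels before group comparison, as is common practice.
        alff_z = (alff - alff.mean()) / alff.std()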

  19. Resting-State Brain Activity in Adult Males Who Stutter

    Science.gov (United States)

    Zhu, Chaozhe; Wang, Liang; Yan, Qian; Lin, Chunlan; Yu, Chunshui

    2012-01-01

    Although developmental stuttering has been extensively studied with structural and task-based functional magnetic resonance imaging (fMRI), few studies have focused on resting-state brain activity in this disorder. We investigated resting-state brain activity of stuttering subjects by analyzing the amplitude of low-frequency fluctuation (ALFF), region of interest (ROI)-based functional connectivity (FC) and independent component analysis (ICA)-based FC. Forty-four adult males with developmental stuttering and 46 age-matched fluent male controls were scanned using resting-state fMRI. ALFF, ROI-based FCs and ICA-based FCs were compared between male stuttering subjects and fluent controls in a voxel-wise manner. Compared with fluent controls, stuttering subjects showed increased ALFF in left brain areas related to speech motor and auditory functions and bilateral prefrontal cortices related to cognitive control. However, stuttering subjects showed decreased ALFF in the left posterior language reception area and bilateral non-speech motor areas. ROI-based FC analysis revealed decreased FC between the posterior language area involved in the perception and decoding of sensory information and anterior brain area involved in the initiation of speech motor function, as well as increased FC within anterior or posterior speech- and language-associated areas and between the prefrontal areas and default-mode network (DMN) in stuttering subjects. ICA showed that stuttering subjects had decreased FC in the DMN and increased FC in the sensorimotor network. Our findings support the concept that stuttering subjects have deficits in multiple functional systems (motor, language, auditory and DMN) and in the connections between them. PMID:22276215

  20. The spectrotemporal filter mechanism of auditory selective attention

    Science.gov (United States)

    Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.

    2013-01-01

    While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126

  1. Exosomal biomarkers of brain insulin resistance associated with regional atrophy in Alzheimer's disease.

    Science.gov (United States)

    Mullins, Roger J; Mustapic, Maja; Goetzl, Edward J; Kapogiannis, Dimitrios

    2017-04-01

    Brain insulin resistance (IR), which depends on insulin-receptor-substrate-1 (IRS-1) phosphorylation, is characteristic of Alzheimer's disease (AD). Previously, we demonstrated higher pSer312-IRS-1 (ineffective insulin signaling) and lower p-panTyr-IRS-1 (effective insulin signaling) in neural origin-enriched plasma exosomes of AD patients vs. controls. Here, we hypothesized that these exosomal biomarkers associate with brain atrophy in AD. We studied 24 subjects with biomarker-supported probable AD (low CSF Aβ42). Exosomes were isolated from plasma, enriched for neural origin using immunoprecipitation for L1CAM, and measured for pSer312- and p-panTyr-IRS-1 phosphotypes. MPRAGE images were segmented by brain tissue type, and voxel-based morphometry (VBM) analysis for gray matter against pSer312- and p-panTyr-IRS-1 was conducted. Given the regionally variable brain expression of IRS-1, we used the Allen Brain Atlas to make spatial comparisons between VBM results and IRS-1 expression. Brain volume was positively associated with p-panTyr-IRS-1 and negatively associated with pSer312-IRS-1 in a strikingly similar regional pattern (bilateral parietal-occipital junction, right middle temporal gyrus). This volumetric association pattern was spatially correlated with Allen Human Brain Atlas normal-brain IRS-1 expression. Exosomal biomarkers of brain IR are thus associated with atrophy in AD, as could be expected from their pathophysiological roles, and do so in a pattern that reflects regional IRS-1 expression. Furthermore, neural-origin plasma exosomes may recover molecular signals from specific brain regions. Hum Brain Mapp 38:1933-1940, 2017. © 2017 Wiley Periodicals, Inc.

  2. Lifespan Differences in Nonlinear Dynamics during Rest and Auditory Oddball Performance

    Science.gov (United States)

    Muller, Viktor; Lindenberger, Ulman

    2012-01-01

    Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an…

  3. Spatial auditory attention is modulated by tactile priming.

    Science.gov (United States)

    Menning, Hans; Ackermann, Hermann; Hertrich, Ingo; Mathiak, Klaus

    2005-07-01

    Previous studies have shown that cross-modal processing affects perception at a variety of neuronal levels. In this study, event-related brain responses were recorded via whole-head magnetoencephalography (MEG). Spatial auditory attention was directed via tactile pre-cues (primes) to one of four locations in the peripersonal space (left and right hand versus face). Auditory stimuli were white noise bursts, convolved with head-related transfer functions, which ensured spatial perception of the four locations. Tactile primes (200-300 ms prior to acoustic onset) were applied randomly to one of these locations. Attentional load was controlled by three different visual distraction tasks. The auditory P50m (about 50 ms after stimulus onset) showed a significant "proximity" effect (larger responses to face stimulation) as well as a "contralaterality" effect between the side of stimulation and the hemisphere. The tactile primes essentially reduced both the P50m and N100m components. However, facial tactile pre-stimulation yielded an enhanced ipsilateral N100m. These results show that earlier responses are mainly governed by exogenous stimulus properties, whereas cross-sensory interaction is spatially selective at a later (endogenous) processing stage.

  4. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  5. Impaired pitch perception and memory in congenital amusia: the deficit starts in the auditory cortex.

    Science.gov (United States)

    Albouy, Philippe; Mattout, Jérémie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaëtan; Aguera, Pierre-Emmanuel; Daligault, Sébastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara

    2013-05-01

    Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a melodic contour task and an easier transposition task); they had to indicate whether sequences of six tones (presented in pairs) were the same or different. Behavioural data indicated that in comparison with control participants, amusics' short-term memory was impaired for the melodic contour task, but not for the transposition task. The major finding was that pitch processing and short-term memory deficits can be traced down to amusics' early brain responses during encoding of the melodic information. Temporal and frontal generators of the N100m evoked by each note of the melody were abnormally recruited in the amusic brain. Dynamic causal modelling of the N100m further revealed decreased intrinsic connectivity in both auditory cortices, increased lateral connectivity between auditory cortices as well as a decreased right fronto-temporal backward connectivity in amusics relative to control subjects. Abnormal functioning of this fronto-temporal network was also shown during the retention interval and the retrieval of melodic information. In particular, induced gamma oscillations in right frontal areas were decreased in amusics during the retention interval. Using voxel-based morphometry, we confirmed morphological brain anomalies in terms of white and grey matter concentration in the right inferior frontal gyrus and the right superior temporal gyrus in the amusic brain. The convergence between functional and structural brain differences strengthens the hypothesis of abnormalities in the fronto-temporal pathway of the amusic brain. Our data provide first evidence of altered functioning of the auditory cortices during pitch

  6. Regional infant brain development: an MRI-based morphometric analysis in 3 to 13 month olds.

    Science.gov (United States)

    Choe, Myong-Sun; Ortiz-Mantilla, Silvia; Makris, Nikos; Gregas, Matt; Bacic, Janine; Haehn, Daniel; Kennedy, David; Pienaar, Rudolph; Caviness, Verne S; Benasich, April A; Grant, P Ellen

    2013-09-01

    Elucidation of infant brain development is a critically important goal given the enduring impact of these early processes on various domains including later cognition and language. Although infants' whole-brain growth rates have long been available, regional growth rates have not been reported systematically. Accordingly, relatively less is known about the dynamics and organization of typically developing infant brains. Here we report global and regional volumetric growth of cerebrum, cerebellum, and brainstem with gender dimorphism, in 33 cross-sectional scans, over 3 to 13 months, using T1-weighted 3-dimensional spoiled gradient echo images and detailed semi-automated brain segmentation. Except for the midbrain and lateral ventricles, all absolute volumes of brain regions showed significant growth, with 6 different patterns of volumetric change. When normalized to the whole brain, the regional increase was characterized by 5 differential patterns. The putamen, cerebellar hemispheres, and total cerebellum were the only regions that showed positive growth in the normalized brain. Our results show region-specific patterns of volumetric change and contribute to the systematic understanding of infant brain development. This study greatly expands our knowledge of normal development and in future may provide a basis for identifying early deviation above and beyond normative variation that might signal higher risk for neurological disorders.

  7. Speech processing asymmetry revealed by dichotic listening and functional brain imaging.

    Science.gov (United States)

    Hugdahl, Kenneth; Westerhausen, René

    2016-12-01

    In this article, we review research in our laboratory from the last 25 to 30 years on the neuronal basis for the laterality of speech perception, focusing on the upper, posterior parts of the temporal lobes and their functional and structural connections to other brain regions. We review both behavioral and brain imaging data, with a focus on dichotic listening experiments, using a variety of imaging modalities. The data come for the most part from healthy individuals and from studies of the normally functioning brain, although we also review a few selected clinical examples. We first review and discuss the structural model for the explanation of the right-ear advantage (REA) and left-hemisphere asymmetry for auditory language processing. A common theme across many studies has been our interest in the interaction between bottom-up, stimulus-driven, and top-down, instruction-driven, aspects of hemispheric asymmetry, and how perceptual factors interact with cognitive factors to shape the asymmetry of auditory language information processing. In summary, our research has shown laterality for the initial processing of consonant-vowel syllables, first observed as a behavioral REA when subjects are required to report which syllable of a dichotic syllable pair they perceive. In subsequent work we have corroborated the REA with brain imaging, and have shown that the REA is modulated through both bottom-up manipulations of stimulus properties, such as sound intensity, and top-down manipulations of cognitive properties, such as attention focus. Copyright © 2015 Elsevier Ltd. All rights reserved.
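
    As a simple illustration of how a right-ear advantage is often summarized in dichotic listening, the sketch below computes a laterality index from per-ear report counts; the counts are invented for the example.

        # Sketch: laterality index; positive values indicate a right-ear advantage.
        right_ear_correct = 42
        left_ear_correct = 28

        laterality_index = 100 * (right_ear_correct - left_ear_correct) / (
            right_ear_correct + left_ear_correct)
        print(f"laterality index: {laterality_index:.1f} (positive = right-ear advantage)")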

  8. Measurement of human advanced brain function in calculation processing using functional magnetic resonance imaging (fMRI)

    International Nuclear Information System (INIS)

    Hashida, Masahiro; Yamauchi, Syuichi; Wu, Jing-Long

    2001-01-01

    Using functional magnetic resonance imaging (fMRI), we investigated the activated areas of the human brain related with calculation processing as an advanced function of the human brain. Furthermore, we investigated differences in activation between visual and auditory calculation processing. The eight subjects (all healthy men) were examined on a clinical MR unit (1.5 tesla) with a gradient echo-type EPI sequence. SPM99 software was used for data processing. Arithmetic problems were used for the visual stimulus (visual image) as well as for the auditory stimulus (audible voice). The stimuli were presented to the subjects as follows: no stimulation, presentation of random figures, and presentation of arithmetic problems. Activated areas of the human brain related with calculation processing were the inferior parietal lobule, middle frontal gyrus, and inferior frontal gyrus. Comparing the arithmetic problems with the presentation of random figures, we found that the activated areas of the human brain were not differently affected by visual and auditory systems. The areas activated in the visual and auditory experiments were observed at nearly the same place in the brain. It is possible to study advanced functions of the human brain such as calculation processing in a general clinical hospital when adequate tasks and methods of presentation are used. (author)

  9. Neuronal effects of nicotine during auditory selective attention.

    Science.gov (United States)

    Smucny, Jason; Olincy, Ann; Eichman, Lindsay S; Tregellas, Jason R

    2015-06-01

    Although the attention-enhancing effects of nicotine have been behaviorally and neurophysiologically well-documented, its localized functional effects during selective attention are poorly understood. In this study, we examined the neuronal effects of nicotine during auditory selective attention in healthy human nonsmokers. We hypothesized to observe significant effects of nicotine in attention-associated brain areas, driven by nicotine-induced increases in activity as a function of increasing task demands. A single-blind, prospective, randomized crossover design was used to examine neuronal response associated with a go/no-go task after 7 mg nicotine or placebo patch administration in 20 individuals who underwent functional magnetic resonance imaging at 3T. The task design included two levels of difficulty (ordered vs. random stimuli) and two levels of auditory distraction (silence vs. noise). Significant treatment × difficulty × distraction interaction effects on neuronal response were observed in the hippocampus, ventral parietal cortex, and anterior cingulate. In contrast to our hypothesis, U and inverted U-shaped dependencies were observed between the effects of nicotine on response and task demands, depending on the brain area. These results suggest that nicotine may differentially affect neuronal response depending on task conditions. These results have important theoretical implications for understanding how cholinergic tone may influence the neurobiology of selective attention.

  10. Delayed Mismatch Field Latencies in Autism Spectrum Disorder with Abnormal Auditory Sensitivity: A Magnetoencephalographic Study

    Directory of Open Access Journals (Sweden)

    Junko Matsuzaki

    2017-09-01

    Full Text Available Although abnormal auditory sensitivity is the most common sensory impairment associated with autism spectrum disorder (ASD, the neurophysiological mechanisms remain unknown. In previous studies, we reported that this abnormal sensitivity in patients with ASD is associated with delayed and prolonged responses in the auditory cortex. In the present study, we investigated alterations in residual M100 and MMFs in children with ASD who experience abnormal auditory sensitivity. We used magnetoencephalography (MEG to measure MMF elicited by an auditory oddball paradigm (standard tones: 300 Hz, deviant tones: 700 Hz in 20 boys with ASD (11 with abnormal auditory sensitivity: mean age, 9.62 ± 1.82 years, 9 without: mean age, 9.07 ± 1.31 years and 13 typically developing boys (mean age, 9.45 ± 1.51 years. We found that temporal and frontal residual M100/MMF latencies were significantly longer only in children with ASD who have abnormal auditory sensitivity. In addition, prolonged residual M100/MMF latencies were correlated with the severity of abnormal auditory sensitivity in temporal and frontal areas of both hemispheres. Therefore, our findings suggest that children with ASD and abnormal auditory sensitivity may have atypical neural networks in the primary auditory area, as well as in brain areas associated with attention switching and inhibitory control processing. This is the first report of an MEG study demonstrating altered MMFs to an auditory oddball paradigm in patients with ASD and abnormal auditory sensitivity. These findings contribute to knowledge of the mechanisms for abnormal auditory sensitivity in ASD, and may therefore facilitate development of novel clinical interventions.

  11. Real-time classification of auditory sentences using evoked cortical activity in humans

    Science.gov (United States)

    Moses, David A.; Leonard, Matthew K.; Chang, Edward F.

    2018-06-01

    Objective. Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real-time, which is necessary for neuroprosthetic brain-computer interfaces. Approach. Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real-time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real-time from neural activity patterns using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited vocabulary setting. Significance. Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.

  12. Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation

    Science.gov (United States)

    Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr

    2017-12-01

    Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. Particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in case of auditory potentials. However, to study each modality separately and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup appears to be of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing a repetitive stimulation in three different modalities (visual, tactile and auditory) with a precise control of stimuli parameters. Results from a pilot study with a stimulation in a particular modality and in two modalities simultaneously prove the feasibility of the device to study SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for a precise generation of repetitive stimuli. Besides sequential stimulation in a particular modality, parallel stimulation in up to three different modalities can be delivered. Stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field. On the

  13. Region-specific protein misfolding cyclic amplification reproduces brain tropism of prion strains.

    Science.gov (United States)

    Privat, Nicolas; Levavasseur, Etienne; Yildirim, Serfildan; Hannaoui, Samia; Brandel, Jean-Philippe; Laplanche, Jean-Louis; Béringue, Vincent; Seilhean, Danielle; Haïk, Stéphane

    2017-10-06

    Human prion diseases such as Creutzfeldt-Jakob disease are transmissible brain proteinopathies, characterized by the accumulation of a misfolded isoform of the host cellular prion protein (PrP) in the brain. According to the prion model, prions are defined as proteinaceous infectious particles composed solely of this abnormal isoform of PrP (PrP Sc ). Even in the absence of genetic material, various prion strains can be propagated in experimental models. They can be distinguished by the pattern of disease they produce and especially by the localization of PrP Sc deposits within the brain and the spongiform lesions they induce. The mechanisms involved in this strain-specific targeting of distinct brain regions still are a fundamental, unresolved question in prion research. To address this question, we exploited a prion conversion in vitro assay, protein misfolding cyclic amplification (PMCA), by using experimental scrapie and human prion strains as seeds and specific brain regions from mice and humans as substrates. We show here that region-specific PMCA in part reproduces the specific brain targeting observed in experimental, acquired, and sporadic Creutzfeldt-Jakob diseases. Furthermore, we provide evidence that, in addition to cellular prion protein, other region- and species-specific molecular factors influence the strain-dependent prion conversion process. This important step toward understanding prion strain propagation in the human brain may impact research on the molecular factors involved in protein misfolding and the development of ultrasensitive methods for diagnosing prion disease. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  14. Deep brain stimulation of the ventral hippocampus restores deficits in processing of auditory evoked potentials in a rodent developmental disruption model of schizophrenia.

    Science.gov (United States)

    Ewing, Samuel G; Grace, Anthony A

    2013-02-01

    Existing antipsychotic drugs are most effective at treating the positive symptoms of schizophrenia but their relative efficacy is low and they are associated with considerable side effects. In this study deep brain stimulation of the ventral hippocampus was performed in a rodent model of schizophrenia (MAM-E17) in an attempt to alleviate one set of neurophysiological alterations observed in this disorder. Bipolar stimulating electrodes were fabricated and implanted, bilaterally, into the ventral hippocampus of rats. High frequency stimulation was delivered bilaterally via a custom-made stimulation device and both spectral analysis (power and coherence) of resting state local field potentials and amplitude of auditory evoked potential components during a standard inhibitory gating paradigm were examined. MAM rats exhibited alterations in specific components of the auditory evoked potential in the infralimbic cortex, the core of the nucleus accumbens, mediodorsal thalamic nucleus, and ventral hippocampus in the left hemisphere only. DBS was effective in reversing these evoked deficits in the infralimbic cortex and the mediodorsal thalamic nucleus of MAM-treated rats to levels similar to those observed in control animals. In contrast stimulation did not alter evoked potentials in control rats. No deficits or stimulation-induced alterations were observed in the prelimbic and orbitofrontal cortices, the shell of the nucleus accumbens or ventral tegmental area. These data indicate a normalization of deficits in generating auditory evoked potentials induced by a developmental disruption by acute high frequency, electrical stimulation of the ventral hippocampus. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Regional differences in brain glucose metabolism determined by imaging mass spectrometry

    OpenAIRE

    André Kleinridders; Heather A. Ferris; Michelle L. Reyzer; Michaela Rath; Marion Soto; M. Lisa Manier; Jeffrey Spraggins; Zhihong Yang; Robert C. Stanton; Richard M. Caprioli; C. Ronald Kahn

    2018-01-01

    Objective: Glucose is the major energy substrate of the brain and crucial for normal brain function. In diabetes, the brain is subject to episodes of hypo- and hyperglycemia resulting in acute outcomes ranging from confusion to seizures, while chronic metabolic dysregulation puts patients at increased risk for depression and Alzheimer's disease. In the present study, we aimed to determine how glucose is metabolized in different regions of the brain using imaging mass spectrometry (IMS). Metho...

  16. Identification enhancement of auditory evoked potentials in EEG by epoch concatenation and temporal decorrelation.

    Science.gov (United States)

    Zavala-Fernandez, H; Orglmeister, R; Trahms, L; Sander, T H

    2012-12-01

    Event-related potentials (ERP) recorded by electroencephalography (EEG) are brain responses following an external stimulus, e.g., a sound or an image. They are used in fundamental cognitive research and neurological and psychiatric clinical research. ERPs are weaker than spontaneous brain activity and therefore it is difficult or even impossible to identify an ERP in the brain activity following an individual stimulus. For this reason, a blind source separation method relying on statistical information is proposed for the isolation of ERP after auditory stimulation. In this paper it is suggested to integrate epoch concatenation into the popular temporal decorrelation algorithm SOBI/TDSEP relying on time shifted correlations. With the proposed epoch concatenation temporal decorrelation (ecTD) algorithm a component representing the auditory evoked potential (AEP) is found in electroencephalographic data from an auditory stimulation experiment lasting 3min. The ecTD result is compared with the averaged AEP and it is superior to the result from the SOBI/TDSEP algorithm. Furthermore the ecTD processing leads to significant increases in the signal-to-noise ratio (shape SNR) of the AEP and reduces the computation time by 50% if compared to the SOBI/TDSEP calculation. It can be concluded that data concatenation in combination with temporal decorrelation is useful for isolating and improving the properties of an AEP especially in a short duration stimulation experiment. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  17. Hierarchical clustering into groups of human brain regions according to elemental composition

    International Nuclear Information System (INIS)

    Stedman, J.D.; Spyrou, N.M.

    1998-01-01

    Thirteen brain regions were dissected from both hemispheres of fifteen 'normal' ageing subjects (8 females, 7 males) of mean age 79±7 years. Elemental compositions were determined by simultaneous application of particle induced X-ray emission (PIXE) and Rutherford backscattering (RBS) analyses using a 2 MeV, 4 nA proton beam scanned over 4 mm 2 of the sample surface. Elemental concentrations were found to be dependent upon the brain region and hemisphere studied. Hierarchical cluster analysis was applied to group the brain regions according to the sample concentrations of eight elements. The resulting dendrogram is presented and its clusters related to the sample compositions of grey and white matter. (author)

  18. Functional photoacoustic imaging to observe regional brain activation induced by cocaine hydrochloride

    Science.gov (United States)

    Jo, Janggun; Yang, Xinmai

    2011-09-01

    Photoacoustic microscopy (PAM) was used to detect small animal brain activation in response to drug abuse. Cocaine hydrochloride in saline solution was injected into the blood stream of Sprague Dawley rats through tail veins. The rat brain functional change in response to the injection of drug was then monitored by the PAM technique. Images in the coronal view of the rat brain at the locations of 1.2 and 3.4 mm posterior to bregma were obtained. The resulted photoacoustic (PA) images showed the regional changes in the blood volume. Additionally, the regional changes in blood oxygenation were also presented. The results demonstrated that PA imaging is capable of monitoring regional hemodynamic changes induced by drug abuse.

  19. From sensory to long-term memory: evidence from auditory memory reactivation studies.

    Science.gov (United States)

    Winkler, István; Cowan, Nelson

    2005-01-01

    Everyday experience tells us that some types of auditory sensory information are retained for long periods of time. For example, we are able to recognize friends by their voice alone or identify the source of familiar noises even years after we last heard the sounds. It is thus somewhat surprising that the results of most studies of auditory sensory memory show that acoustic details, such as the pitch of a tone, fade from memory in ca. 10-15 s. One should, therefore, ask (1) what types of acoustic information can be retained for a longer term, (2) what circumstances allow or help the formation of durable memory records for acoustic details, and (3) how such memory records can be accessed. The present review discusses the results of experiments that used a model of auditory recognition, the auditory memory reactivation paradigm. Results obtained with this paradigm suggest that the brain stores features of individual sounds embedded within representations of acoustic regularities that have been detected for the sound patterns and sequences in which the sounds appeared. Thus, sounds closely linked with their auditory context are more likely to be remembered. The representations of acoustic regularities are automatically activated by matching sounds, enabling object recognition.

  20. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng eRong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG data while participants were performing an auditory delayed-match-to-sample (DMS task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta band (12 ~ 20 Hz DMS-specific enhanced functional interaction between the sources in left auditory cortex and those in left inferior frontal gyrus, which has been shown to involve in short-term memory processing during the delay period of DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.

  1. From ear to body: the auditory-motor loop in spatial cognition.

    Science.gov (United States)

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    SPATIAL MEMORY IS MAINLY STUDIED THROUGH THE VISUAL SENSORY MODALITY: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimitated area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of searching paths allowed observing how auditory information was coded to memorize the position of the target. They suggested that space can be efficiently coded without visual information in normal sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space.

  2. From ear to body: the auditory-motor loop in spatial cognition

    Directory of Open Access Journals (Sweden)

    Isabelle eViaud-Delmon

    2014-09-01

    Full Text Available Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers was used to send the coordinates of the subject’s head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimitated area in order to find a hidden auditory target, i.e. a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorise the localisation of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed.The configuration of searching paths allowed observing how auditory information was coded to memorise the position of the target. They suggested that space can be efficiently coded without visual information in normal sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favour of the hypothesis that the brain has access to a modality-invariant representation of external space.

  3. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre

  4. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.

  5. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N J; Schölkopf, B

    2012-01-01

    We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135

  6. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals with stuttering from 7 to 17 years old and were divided into two groups: Stuttering Group with Auditory Processing Disorders (SGAPD): 10 individuals with central auditory processing disorders, and Stuttering Group (SG): 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), assessment of the stuttering severity and central auditory processing (CAP). Phono Tools software was used to cause a delay of 100 milliseconds in the auditory feedback. The "Wilcoxon Signal Post" test was used in the intragroup analysis and "Mann-Whitney" test in the intergroup analysis. The DAF caused a statistically significant reduction in SG: in the frequency score of stuttering-like disfluencies in the analysis of the Stuttering Severity Instrument, in the amount of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on SGAPD fluency, individuals with stuttering with auditory processing disorders. The effect of delayed auditory feedback in speech fluency of individuals who stutter was different in individuals of both groups, because there was an improvement in fluency only in individuals without auditory processing disorder.

  7. Auditory opportunity and visual constraint enabled the evolution of echolocation in bats

    DEFF Research Database (Denmark)

    Thiagavel, Jeneni; Cechetto, Clément; Santana, Sharlene E

    2018-01-01

    and flight. Here we consider anatomical constraints and opportunities that led to a sonar rather than vision-based solution. We show that bats' common ancestor had eyes too small to allow for successful aerial hawking of flying insects at night, but an auditory brain design sufficient to afford echolocation...

  8. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    Directory of Open Access Journals (Sweden)

    Karin Petrini

    Full Text Available In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music, visual (musician's movements only, and auditory emotional (music only displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound than for emotionally matching music performances (combining the musician's movements with matching emotional sound as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  9. Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise

    Science.gov (United States)

    Moore, R. Channing; Lee, Tyler; Theunissen, Frédéric E.

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex. PMID:23505354

  10. Noise-invariant neurons in the avian auditory cortex: hearing the song in noise.

    Science.gov (United States)

    Moore, R Channing; Lee, Tyler; Theunissen, Frédéric E

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

  11. Noise-invariant neurons in the avian auditory cortex: hearing the song in noise.

    Directory of Open Access Journals (Sweden)

    R Channing Moore

    Full Text Available Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

  12. A probabilistic approach to delineating functional brain regions

    DEFF Research Database (Denmark)

    Kalbitzer, Jan; Svarer, Claus; Frokjaer, Vibe G

    2009-01-01

    The purpose of this study was to develop a reliable observer-independent approach to delineating volumes of interest (VOIs) for functional brain regions that are not identifiable on structural MR images. The case is made for the raphe nuclei, a collection of nuclei situated in the brain stem known...... to be densely packed with serotonin transporters (5-hydroxytryptaminic [5-HTT] system). METHODS: A template set for the raphe nuclei, based on their high content of 5-HTT as visualized in parametric (11)C-labeled 3-amino-4-(2-dimethylaminomethyl-phenylsulfanyl)-benzonitrile PET images, was created for 10...... healthy subjects. The templates were subsequently included in the region sets used in a previously published automatic MRI-based approach to create an observer- and activity-independent probabilistic VOI map. The probabilistic map approach was tested in a different group of 10 subjects and compared...

  13. Modeling the developmental patterns of auditory evoked magnetic fields in children.

    Directory of Open Access Journals (Sweden)

    Rupesh Kotecha

    Full Text Available BACKGROUND: As magnetoencephalography (MEG is of increasing utility in the assessment of deficits and development delays in brain disorders in pediatrics, it becomes imperative to fully understand the functional development of the brain in children. METHODOLOGY: The present study was designed to characterize the developmental patterns of auditory evoked magnetic responses with respect to age and gender. Sixty children and twenty adults were studied with a 275-channel MEG system. CONCLUSIONS: Three main responses were identified at approximately 46 ms (M50, 71 ms (M70 and 106 ms (M100 in latency for children. The latencies of M70 and M100 shortened with age in both hemispheres; the latency of M50 shortened with age only in the right hemisphere. Analysis of developmental lateralization patterns in children showed that the latency of the right hemispheric evoked responses shortened faster than the corresponding left hemispheric responses. The latency of M70 in the right hemisphere highly correlated to the age of the child. The amplitudes of the M70 responses increased with age and reached their peaks in children 12-14 years of age, after which they decreased with age. The source estimates for the M50 and M70 responses indicated that they were generated in different subareas in the Heschl's gyrus in children, while not localizable in adults. Furthermore, gender also affected developmental patterns. The latency of M70 in the right hemisphere was proposed to be an index of auditory development in children, the modeling equation is 85.72-1.240xAge (yrs. Our results demonstrate that there is a clear developmental pattern in the auditory cortex and underscore the importance of M50 and M70 in the developing brain.

  14. Chronic auditory hallucinations in schizophrenic patients: MR analysis of the coincidence between functional and morphologic abnormalities.

    Science.gov (United States)

    Martí-Bonmatí, Luis; Lull, Juan José; García-Martí, Gracián; Aguilar, Eduardo J; Moratal-Pérez, David; Poyatos, Cecilio; Robles, Montserrat; Sanjuán, Julio

    2007-08-01

    To prospectively evaluate if functional magnetic resonance (MR) imaging abnormalities associated with auditory emotional stimuli coexist with focal brain reductions in schizophrenic patients with chronic auditory hallucinations. Institutional review board approval was obtained and all participants gave written informed consent. Twenty-one right-handed male patients with schizophrenia and persistent hallucinations (started to hear hallucinations at a mean age of 23 years +/- 10, with 15 years +/- 8 of mean illness duration) and 10 healthy paired participants (same ethnic group [white], age, and education level [secondary school]) were studied. Functional echo-planar T2*-weighted (after both emotional and neutral auditory stimulation) and morphometric three-dimensional gradient-recalled echo T1-weighted MR images were analyzed using Statistical Parametric Mapping (SPM2) software. Brain activation images were extracted by subtracting those with emotional from nonemotional words. Anatomic differences were explored by optimized voxel-based morphometry. The functional and morphometric MR images were overlaid to depict voxels statistically reported by both techniques. A coincidence map was generated by multiplying the emotional subtracted functional MR and volume decrement morphometric maps. Statistical analysis used the general linear model, Student t tests, random effects analyses, and analysis of covariance with a correction for multiple comparisons following the false discovery rate method. Large coinciding brain clusters (P < .005) were found in the left and right middle temporal and superior temporal gyri. Smaller coinciding clusters were found in the left posterior and right anterior cingular gyri, left inferior frontal gyrus, and middle occipital gyrus. The middle and superior temporal and the cingular gyri are closely related to the abnormal neural network involved in the auditory emotional dysfunction seen in schizophrenic patients.

  15. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    Science.gov (United States)

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short listed based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomenon. Cortical oscillatory band activity may act as

  16. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Congenital Deafness Reduces, But Does Not Eliminate Auditory Responsiveness in Cat Extrastriate Visual Cortex.

    Science.gov (United States)

    Land, Rüdiger; Radecke, Jan-Ole; Kral, Andrej

    2018-04-01

    Congenital deafness not only affects the development of the auditory cortex, but also the interrelation between the visual and auditory system. For example, congenital deafness leads to visual modulation of the deaf auditory cortex in the form of cross-modal plasticity. Here we asked, whether congenital deafness additionally affects auditory modulation in the visual cortex. We demonstrate that auditory activity, which is normally present in the lateral suprasylvian visual areas in normal hearing cats, can also be elicited by electrical activation of the auditory system with cochlear implants. We then show that in adult congenitally deaf cats auditory activity in this region was reduced when tested with cochlear implant stimulation. However, the change in this area was small and auditory activity was not completely abolished despite years of congenital deafness. The results document that congenital deafness leads not only to changes in the auditory cortex but also affects auditory modulation of visual areas. However, the results further show a persistence of fundamental cortical sensory functional organization despite congenital deafness. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.

  18. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Science.gov (United States)

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

  19. Neural correlates of accelerated auditory processing in children engaged in music training.

    Science.gov (United States)

    Habibi, Assal; Cahn, B Rael; Damasio, Antonio; Damasio, Hanna

    2016-10-01

    Several studies comparing adult musicians and non-musicians have shown that music training is associated with brain differences. It is unknown, however, whether these differences result from lengthy musical training, from pre-existing biological traits, or from social factors favoring musicality. As part of an ongoing 5-year longitudinal study, we investigated the effects of a music training program on the auditory development of children, over the course of two years, beginning at age 6-7. The training was group-based and inspired by El-Sistema. We compared the children in the music group with two comparison groups of children of the same socio-economic background, one involved in sports training, another not involved in any systematic training. Prior to participating, children who began training in music did not differ from those in the comparison groups in any of the assessed measures. After two years, we now observe that children in the music group, but not in the two comparison groups, show an enhanced ability to detect changes in tonal environment and an accelerated maturity of auditory processing as measured by cortical auditory evoked potentials to musical notes. Our results suggest that music training may result in stimulus specific brain changes in school aged children. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  20. Differential susceptibility of brain regions to tributyltin chloride toxicity.

    Science.gov (United States)

    Mitra, Sumonto; Siddiqui, Waseem A; Khandelwal, Shashi

    2015-12-01

    Tributyltin (TBT), a well-known endocrine disruptor, is an omnipresent environmental pollutant and is explicitly used in many industrial applications. Previously we have shown its neurotoxic potential on cerebral cortex of male Wistar rats. As the effect of TBT on other brain regions is not known, we planned this study to evaluate its effect on four brain regions (cerebellum, hippocampus, hypothalamus, and striatum). Four-week-old male Wistar rats were gavaged with a single dose of TBT-chloride (TBTC) (10, 20, and 30 mg/kg) and sacrificed on days 3 and 7, respectively. Effect of TBTC on blood-brain barrier (BBB) permeability and tin (Sn) accumulation were measured. Oxidative stress indexes such as reactive oxygen species (ROS), reduced and oxidized glutathione (GSH/GSSG) ratio, lipid peroxidation, and protein carbonylation were analyzed as they play an imperative role in various neuropathological conditions. Since metal catalyzed reactions are a major source of oxidant generation, levels of essential metals like iron (Fe), zinc (Zn), and calcium (Ca) were estimated. We found that TBTC disrupted BBB and increased Sn accumulation, both of which appear significantly correlated. Altered metal homeostasis and ROS generation accompanied by elevated lipid peroxidation and protein carbonylation indicated oxidative damage which appeared more pronounced in the striatum than in cerebellum, hippocampus, and hypothalamus. This could be associated to the depleted GSH levels in striatum. These results suggest that striatum is more susceptible to TBTC induced oxidative damage as compared with other brain regions under study. © 2014 Wiley Periodicals, Inc.

  1. High resolution computed tomography of auditory ossicles

    International Nuclear Information System (INIS)

    Isono, M.; Murata, K.; Ohta, F.; Yoshida, A.; Ishida, O.; Kinki Univ., Osaka

    1990-01-01

    Auditory ossicular sections were scanned at section thicknesses (mm)/section interspaces (mm) of 1.5/1.5 (61 patients), 1.0/1.0 (13 patients), or 1.5/1.0 (33 patients). With every combination of section thickness and interspace, the malleal and incudal structures were observed with almost equal frequency. The region of the incudostapedial joint and each component part of the stapes were shown more frequently at a section interspace of 1.0 mm than at 1.5 mm. The frequency with which each auditory ossicular component was visualized on two or more serial sections was also investigated. At a section thickness/section interspace of 1.5/1.5, the visualization rates were low except for large components such as the head of the malleus and the body of the incus, but at a section interspace of 1.0 mm they were high for most components of the auditory ossicles. (orig.)

  2. Mutism and auditory agnosia due to bilateral insular damage--role of the insula in human communication.

    Science.gov (United States)

    Habib, M; Daquin, G; Milandre, L; Royere, M L; Rey, M; Lanteri, A; Salamon, G; Khalil, R

    1995-03-01

    We report a case of transient mutism and persistent auditory agnosia due to two successive ischemic infarcts mainly involving the insular cortex in both hemispheres. During the 'mutic' period, which lasted about 1 month, the patient did not respond to any auditory stimuli and made no effort to communicate. On follow-up examinations, language competences had reappeared almost intact, but a massive auditory agnosia for non-verbal sounds was observed. From close inspection of the lesion site, as determined with brain magnetic resonance imaging, and from a study of auditory evoked potentials, it is concluded that bilateral insular damage was crucial to both the expressive and receptive components of the syndrome. The role of the insula in verbal and non-verbal communication is discussed in the light of anatomical descriptions of the pattern of connectivity of the insular cortex.

  3. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    Science.gov (United States)

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. Here we present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward-predicting and nonreward-predicting tones were found in the auditory cortex of nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  4. Regional cerebral blood flow measurement in brain tumors

    International Nuclear Information System (INIS)

    Izunaga, Hiroshi; Hirota, Yoshihisa; Takahashi, Mutsumasa; Fuwa, Isao; Kodama, Takafumi; Matsukado, Yasuhiko

    1986-01-01

    The regional cerebral blood flow (CBF) was determined in seventeen patients with brain tumors. Ring-type single photon emission CT (SPECT) was used following intravenous injection of 133Xe. Case materials included eleven meningiomas and six malignant gliomas. Evaluation was performed with emphasis on the following points: 1. correlation of the flow data within tumors with the angiographic tumor stains; 2. influence of tumors on the cerebral blood flow of normal brain tissue; 3. correlation between the degree of peripheral edema and the flow data of the affected hemispheres. There was a significant correlation between flow data within tumors and angiographic tumor stains in meningiomas. The influence of tumors on the cerebral blood flow of normal tissue was greater in meningiomas than in gliomas. There was a negative correlation between the degree of peripheral edema and the flow data of the affected hemisphere. It is concluded that the measurement of CBF is a valuable method in the evaluation of brain tumors. (author)

  5. Regional cerebral blood flow measurement in brain tumors

    Energy Technology Data Exchange (ETDEWEB)

    Izunaga, Hiroshi; Hirota, Yoshihisa; Takahashi, Mutsumasa; Fuwa, Isao; Kodama, Takafumi; Matsukado, Yasuhiko

    1986-10-01

    The regional cerebral blood flow (CBF) was determined in seventeen patients with brain tumors. Ring-type single photon emission CT (SPECT) was used following intravenous injection of 133Xe. Case materials included eleven meningiomas and six malignant gliomas. Evaluation was performed with emphasis on the following points: 1. correlation of the flow data within tumors with the angiographic tumor stains; 2. influence of tumors on the cerebral blood flow of normal brain tissue; 3. correlation between the degree of peripheral edema and the flow data of the affected hemispheres. There was a significant correlation between flow data within tumors and angiographic tumor stains in meningiomas. The influence of tumors on the cerebral blood flow of normal tissue was greater in meningiomas than in gliomas. There was a negative correlation between the degree of peripheral edema and the flow data of the affected hemisphere. It is concluded that the measurement of CBF is a valuable method in the evaluation of brain tumors.
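
    The negative correlation reported above is the kind of relationship that can be checked directly once region-of-interest flow values and edema grades are tabulated. The short sketch below uses invented numbers purely to illustrate the computation; it is not the study's data.

```python
# Illustrative only: correlate a peritumoral edema grade with hemispheric
# CBF values using Pearson's r. All numbers are hypothetical.
from scipy import stats

edema_grade = [0, 1, 1, 2, 2, 3, 3, 3]             # hypothetical edema scores
hemisphere_cbf = [52, 50, 47, 44, 45, 40, 38, 36]   # hypothetical flow values (ml/100 g/min)

r, p = stats.pearsonr(edema_grade, hemisphere_cbf)
print(f"r = {r:.2f}, p = {p:.3f}")                  # a negative r mirrors the reported finding
```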

  6. Moral values are associated with individual differences in regional brain volume.

    Science.gov (United States)

    Lewis, Gary J; Kanai, Ryota; Bates, Timothy C; Rees, Geraint

    2012-08-01

    Moral sentiment has been hypothesized to reflect evolved adaptations to social living. If so, individual differences in moral values may relate to regional variation in brain structure. We tested this hypothesis in a sample of 70 young, healthy adults examining whether differences on two major dimensions of moral values were significantly associated with regional gray matter volume. The two clusters of moral values assessed were "individualizing" (values of harm/care and fairness) and "binding" (deference to authority, in-group loyalty, and purity/sanctity). Individualizing was positively associated with left dorsomedial pFC volume and negatively associated with bilateral precuneus volume. For binding, a significant positive association was found for bilateral subcallosal gyrus and a trend to significance for the left anterior insula volume. These findings demonstrate that variation in moral sentiment reflects individual differences in brain structure and suggest a biological basis for moral sentiment, distributed across multiple brain regions.

  7. Brain functional network connectivity based on a visual task: visual information processing-related brain regions are significantly activated in the task state

    Directory of Open Access Journals (Sweden)

    Yan-li Yang

    2015-01-01

    It is not clear whether the methods used in functional brain-network research can be applied to explore the feature-binding mechanism of visual perception. In this study, we investigated feature binding of color and shape in visual perception. Functional magnetic resonance imaging data were collected from 38 healthy volunteers at rest and while performing a visual perception task to construct brain networks active during the resting and task states. Results showed that brain regions involved in visual information processing were significantly activated during the task. Network components were partitioned using a greedy algorithm, indicating that the visual network also existed during the resting state. Z-values in the vision-related brain regions were calculated, confirming the dynamic balance of the brain network. Connectivity between brain regions was determined, and the results showed that the occipital and lingual gyri were stable regions in the visual system network, the parietal lobe played a very important role in the binding of color and shape features, and the fusiform and inferior temporal gyri were crucial for processing color and shape information. These findings indicate that understanding visual feature binding and cognitive processes will help establish computational models of vision, improve image recognition technology, and provide a new theoretical mechanism for feature binding in visual perception.
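
    As an illustration of the kind of analysis described above, the following sketch builds a toy functional brain network from region-wise time series and partitions it into modules with a greedy modularity algorithm. The region count, correlation threshold, and data are assumptions for illustration only, not the study's pipeline.

```python
# Minimal sketch: correlation-based brain network + greedy module partition.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
# Toy data: two "systems" of 10 regions each, driven by different latent signals.
latent = rng.standard_normal((2, 200))
ts = np.vstack([latent[0] + 0.5 * rng.standard_normal((10, 200)),
                latent[1] + 0.5 * rng.standard_normal((10, 200))])   # 20 regions x 200 time points

corr = np.corrcoef(ts)                       # region-by-region correlation matrix
np.fill_diagonal(corr, 0.0)
adj = (corr > 0.3).astype(int)               # threshold to a binary adjacency matrix

G = nx.from_numpy_array(adj)
modules = greedy_modularity_communities(G)   # greedy partition into network components
print([sorted(m) for m in modules])          # expected: the two 10-region systems
```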

  8. Stability of auditory discrimination and novelty processing in physiological aging.

    Science.gov (United States)

    Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele

    2013-01-01

    Complex higher-order cognitive functions and their possible changes with aging are central objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN), and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine whether N100, MMN, and P3a parameters are stable in healthy aged subjects compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments, and their ERPs were obtained with auditory stimulation at two different interstimulus intervals during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameter between the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  9. Effect of heroin-conditioned auditory stimuli on cerebral functional activity in rats

    Energy Technology Data Exchange (ETDEWEB)

    Trusk, T.C.; Stein, E.A.

    1988-08-01

    Cerebral functional activity was measured as changes in the distribution of the free fatty acid [1-14C]octanoate in autoradiograms obtained from rats during brief presentation of a tone previously paired with infusions of heroin or saline. Rats were trained in groups of three, consisting of one heroin self-administering animal and two animals receiving yoked infusions of heroin or saline. Behavioral experiments in separate groups of rats demonstrated that these training parameters impart secondary reinforcing properties to the tone for animals self-administering heroin, while the tone remains behaviorally neutral in yoked-infusion animals. The optical densities of thirty-seven brain regions were normalized to a relative index for comparisons between groups. Previous pairing of the tone with heroin infusions irrespective of behavior (yoked-heroin vs. yoked-saline groups) produced functional activity changes in fifteen brain areas. In addition, nineteen regional differences in octanoate labeling density were evident when animals previously trained to self-administer heroin were compared with those receiving yoked heroin infusions, while twelve differences were noted between the yoked-vehicle and self-administration groups. These functional activity changes are presumed to be related to the secondary reinforcing capacity of the tone acquired by association with heroin, and may identify neural substrates involved in auditory-signalled conditioning of positive reinforcement to opiates.

  10. Effect of heroin-conditioned auditory stimuli on cerebral functional activity in rats

    International Nuclear Information System (INIS)

    Trusk, T.C.; Stein, E.A.

    1988-01-01

    Cerebral functional activity was measured as changes in the distribution of the free fatty acid [1-14C]octanoate in autoradiograms obtained from rats during brief presentation of a tone previously paired with infusions of heroin or saline. Rats were trained in groups of three, consisting of one heroin self-administering animal and two animals receiving yoked infusions of heroin or saline. Behavioral experiments in separate groups of rats demonstrated that these training parameters impart secondary reinforcing properties to the tone for animals self-administering heroin, while the tone remains behaviorally neutral in yoked-infusion animals. The optical densities of thirty-seven brain regions were normalized to a relative index for comparisons between groups. Previous pairing of the tone with heroin infusions irrespective of behavior (yoked-heroin vs. yoked-saline groups) produced functional activity changes in fifteen brain areas. In addition, nineteen regional differences in octanoate labeling density were evident when animals previously trained to self-administer heroin were compared with those receiving yoked heroin infusions, while twelve differences were noted between the yoked-vehicle and self-administration groups. These functional activity changes are presumed to be related to the secondary reinforcing capacity of the tone acquired by association with heroin, and may identify neural substrates involved in auditory-signalled conditioning of positive reinforcement to opiates.

  11. On the same wavelength: predictable language enhances speaker-listener brain-to-brain synchrony in posterior superior temporal gyrus.

    Science.gov (United States)

    Dikker, Suzanne; Silbert, Lauren J; Hasson, Uri; Zevin, Jason D

    2014-04-30

    Recent research has shown that the degree to which speakers and listeners exhibit similar brain activity patterns during human linguistic interaction is correlated with communicative success. Here, we used an intersubject correlation approach in fMRI to test the hypothesis that a listener's ability to predict a speaker's utterance increases such neural coupling between speakers and listeners. Nine subjects listened to recordings of a speaker describing visual scenes that varied in the degree to which they permitted specific linguistic predictions. In line with our hypothesis, the temporal profile of listeners' brain activity was significantly more synchronous with the speaker's brain activity for highly predictive contexts in left posterior superior temporal gyrus (pSTG), an area previously associated with predictive auditory language processing. In this region, predictability differentially affected the temporal profiles of brain responses in the speaker and listeners respectively, in turn affecting correlated activity between the two: whereas pSTG activation increased with predictability in the speaker, listeners' pSTG activity instead decreased for more predictable sentences. Listeners additionally showed stronger BOLD responses for predictive images before sentence onset, suggesting that highly predictable contexts lead comprehenders to preactivate predicted words.
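
    The intersubject correlation logic described above can be illustrated with a toy computation: correlate a speaker's regional time course with each listener's time course and average across listeners, once per condition. The sketch below uses simulated signals and a hypothetical helper function; it is not the authors' analysis pipeline.

```python
# Toy sketch of speaker-listener brain-to-brain coupling (illustrative only).
import numpy as np
from scipy.stats import pearsonr

def speaker_listener_coupling(speaker_ts, listener_ts_list):
    """Mean Pearson correlation between the speaker's time course and each
    listener's time course for one region and one condition."""
    rs = [pearsonr(speaker_ts, l_ts)[0] for l_ts in listener_ts_list]
    return float(np.mean(rs))

rng = np.random.default_rng(1)
speaker = rng.standard_normal(300)                        # hypothetical pSTG time course
listeners = [0.5 * speaker + rng.standard_normal(300)     # listeners partly tracking the speaker
             for _ in range(9)]
print(round(speaker_listener_coupling(speaker, listeners), 3))
```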

  12. Attentional Performance is Correlated with the Local Regional Efficiency of Intrinsic Brain Networks

    Directory of Open Access Journals (Sweden)

    Junhai eXu

    2015-07-01

    Attention is a crucial brain function for human beings. Using neuropsychological paradigms and task-based functional brain imaging, previous studies have indicated that widely distributed brain regions are engaged in three distinct attention subsystems: alerting, orienting, and executive control (EC). Here, we explored the potential contribution of spontaneous brain activity to attention by examining whether resting-state activity could account for individual differences in attentional performance in normal individuals. Resting-state functional images and behavioral data from the attention network test (ANT) were collected in 59 healthy subjects. Graph analysis was conducted to obtain the characteristics of functional brain networks, and linear regression analyses were used to explore their relationships with behavioral performance on the three attentional components. We found no significant relationship between attentional performance and the global network measures, whereas attentional performance was associated with specific local regional efficiency. The regions related to the alerting, orienting, and EC scores largely overlapped with the regions activated in previous task-related functional imaging studies and were consistent with the intrinsic dorsal and ventral attention networks (DAN/VAN). In addition, the strong associations between attentional performance and specific regional efficiency suggest a relationship between the DAN/VAN and task performance in the ANT. We conclude that the intrinsic activity of the human brain reflects the processing efficiency of the attention system. Our findings provide robust evidence for the functional significance of the efficiently organized intrinsic brain network for effective cognition and for the hypothesized role of the DAN/VAN at rest.
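
    The following sketch illustrates, on simulated data, one common way to obtain node-wise (regional) efficiency from a binary brain network and regress a behavioral score on it across subjects. The network construction, region count, and scores are assumptions for illustration only.

```python
# Sketch of regional (local) efficiency + behavioral regression; toy data only.
import numpy as np
import networkx as nx
from scipy import stats

def nodal_local_efficiency(G, node):
    """Global efficiency of the subgraph induced by a node's neighbours,
    a common definition of local (regional) efficiency."""
    sub = G.subgraph(G.neighbors(node))
    return nx.global_efficiency(sub) if sub.number_of_nodes() > 1 else 0.0

rng = np.random.default_rng(2)
n_subjects, n_regions = 59, 30
alerting_scores = rng.normal(40, 10, n_subjects)            # hypothetical ANT alerting scores

region_eff = np.empty((n_subjects, n_regions))
for s in range(n_subjects):
    A = (rng.random((n_regions, n_regions)) > 0.7).astype(int)
    A = np.triu(A, 1)
    A = A + A.T                                              # symmetric binary network
    G = nx.from_numpy_array(A)
    region_eff[s] = [nodal_local_efficiency(G, i) for i in range(n_regions)]

# Linear regression of behaviour on one region's efficiency across subjects.
slope, intercept, r, p, se = stats.linregress(region_eff[:, 0], alerting_scores)
print(f"region 0: r = {r:.2f}, p = {p:.3f}")
```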

  13. Altered regional homogeneity of spontaneous brain activity in idiopathic trigeminal neuralgia.

    Science.gov (United States)

    Wang, Yanping; Zhang, Xiaoling; Guan, Qiaobing; Wan, Lihong; Yi, Yahui; Liu, Chun-Feng

    2015-01-01

    The pathophysiology of idiopathic trigeminal neuralgia (ITN) has conventionally been explained by neurovascular compression. Recent structural brain imaging evidence has suggested an additional central component to ITN pathophysiology. However, far less attention has been given to investigating the basis of abnormal resting-state brain activity in these patients. The objective of this study was to investigate local brain activity in patients with ITN and its correlation with clinical variables of pain. Resting-state functional magnetic resonance imaging data from 17 patients with ITN and 19 age- and sex-matched healthy controls were analyzed using regional homogeneity (ReHo) analysis, a data-driven approach used to measure the regional synchronization of spontaneous brain activity. Patients with ITN had decreased ReHo in the left amygdala, right parahippocampal gyrus, and left cerebellum and increased ReHo in the right inferior temporal gyrus, right thalamus, right inferior parietal lobule, and left postcentral gyrus (corrected). Furthermore, the increase in ReHo in the left precentral gyrus was positively correlated with visual analog scale scores (r=0.54; P=0.002). Our study found abnormal functional homogeneity of intrinsic brain activity in several regions in ITN, suggesting maladaptation to daily pain attacks and a central component in the pathophysiology of ITN.
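
    ReHo is usually computed as Kendall's coefficient of concordance (KCC) over a voxel and its nearest neighbours. The sketch below shows that calculation on toy data, under the assumption that this standard KCC definition matches the analysis used in the study.

```python
# Sketch of the ReHo idea: Kendall's W over a small voxel neighbourhood (toy data).
import numpy as np
from scipy.stats import rankdata

def kendalls_w(time_series):
    """Kendall's W for an (n_voxels, n_timepoints) block of time series."""
    k, n = time_series.shape                       # k raters (voxels), n items (time points)
    ranks = np.apply_along_axis(rankdata, 1, time_series)
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (k ** 2 * (n ** 3 - n))

rng = np.random.default_rng(3)
shared = rng.standard_normal(240)
cluster = shared + 0.5 * rng.standard_normal((27, 240))    # 27-voxel neighbourhood
print(round(kendalls_w(cluster), 3))                        # high W = locally synchronous activity
```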

  14. Evidence of key tinnitus-related brain regions documented by a unique combination of manganese-enhanced MRI and acoustic startle reflex testing.

    Directory of Open Access Journals (Sweden)

    Avril Genene Holt

    2010-12-01

    Animal models continue to improve our understanding of tinnitus pathogenesis and aid in the development of new treatments. However, there are no diagnostic biomarkers for tinnitus-related pathophysiology for use in awake, freely moving animals. To address this disparity, two complementary methods were combined to examine reliable tinnitus models (rats repeatedly administered salicylate or exposed to a single noise event): inhibition of acoustic startle and manganese-enhanced MRI. Salicylate-induced tinnitus resulted in widespread supernormal manganese uptake compared to noise-induced tinnitus. Neither model demonstrated significant differences in the auditory cortex. Only in the dorsal cortex of the inferior colliculus (DCIC) did both models exhibit supernormal uptake. Therefore, abnormal membrane depolarization in the DCIC appears to be important in tinnitus-mediated activity. Our results provide the foundation for future studies correlating the severity and longevity of tinnitus with hearing loss and neuronal activity in specific brain regions, and tools for evaluating treatment efficacy across paradigms.

  15. Auditory Attraction: Activation of Visual Cortex by Music and Sound in Williams Syndrome

    Science.gov (United States)

    Thornton-Wells, Tricia A.; Cannistraci, Christopher J.; Anderson, Adam W.; Kim, Chai-Youn; Eapen, Mariam; Gore, John C.; Blake, Randolph; Dykens, Elisabeth M.

    2010-01-01

    Williams syndrome is a genetic neurodevelopmental disorder with a distinctive phenotype, including cognitive-linguistic features, nonsocial anxiety, and a strong attraction to music. We performed functional MRI studies examining brain responses to musical and other types of auditory stimuli in young adults with Williams syndrome and typically…

  16. Resting-state functional connectivity in medication-naïve schizophrenia patients with and without auditory verbal hallucinations : A preliminary report

    NARCIS (Netherlands)

    Chang, Xiao; Collin, Guusje; Xi, Yibin; Cui, Longbiao; Scholtens, Lianne H.; Sommer, Iris E.; Wang, Huaning; Yin, Hong; Kahn, René S.; van den Heuvel, Martijn P.

    2017-01-01

    Auditory verbal hallucinations (AVH) are a cardinal feature of schizophrenia that has been associated with activation in language processing areas, in concert with higher-order cognitive brain networks. It remains to be determined whether, and if so how, the functional dynamics between these brain

  17. Decoding the auditory brain with canonical component analysis.

    Science.gov (United States)

    de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M; Hjortkjær, Jens; Slaney, Malcolm; Lalor, Edmund

    2018-05-15

    The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
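
    A minimal sketch of the CCA idea on simulated data follows: find linear transforms of a stimulus representation and a multichannel brain response that maximize their mutual correlation, then inspect the canonical correlations. It is illustrative only and does not reproduce the authors' implementation.

```python
# Minimal CCA sketch: shared latent signal recovered from stimulus and response.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
n_samples = 2000
latent = rng.standard_normal((n_samples, 2))                # shared stimulus-driven signal

stimulus = latent @ rng.standard_normal((2, 10)) + 0.5 * rng.standard_normal((n_samples, 10))
response = latent @ rng.standard_normal((2, 64)) + 2.0 * rng.standard_normal((n_samples, 64))

cca = CCA(n_components=2)
stim_c, resp_c = cca.fit_transform(stimulus, response)      # canonical components of each side
for i in range(2):
    r = np.corrcoef(stim_c[:, i], resp_c[:, i])[0, 1]
    print(f"canonical component {i}: r = {r:.2f}")
```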

  18. Regional homogeneity of resting-state brain abnormalities in bipolar and unipolar depression.

    Science.gov (United States)

    Liu, Chun-Hong; Ma, Xin; Wu, Xia; Zhang, Yu; Zhou, Fu-Chun; Li, Feng; Tie, Chang-Le; Dong, Jie; Wang, Yong-Jun; Yang, Zhi; Wang, Chuan-Yue

    2013-03-05

    Bipolar disorder patients experiencing a depressive episode (BD-dep) without an observed history of mania are often misdiagnosed and are consequently treated as having unipolar depression (UD), leading to inadequate treatment and poor outcomes. An essential solution to this problem is to identify objective biological markers that distinguish BD-dep and UD patients at an early stage. However, studies directly comparing the brain dysfunctions associated with BD-dep and UD are rare. More importantly, the specificity of the differences in brain activity between these mental disorders has not been examined. With whole-brain regional homogeneity analysis and region-of-interest (ROI) based receiver operating characteristic (ROC) analysis, we aimed to compare the resting-state brain activity of BD-dep and UD patients. Furthermore, we examined the specific differences and whether these differences were attributable to the brain abnormality caused by BD-dep, UD, or both. Twenty-one bipolar and 21 unipolar depressed patients, as well as 26 healthy subjects matched for gender, age, and educational level, participated in the study. We compared the differences in the regional homogeneity (ReHo) of the BD-dep and UD groups and further identified their pathophysiological abnormality. In the brain regions showing a difference between the BD-dep and UD groups, we further conducted receiver operating characteristic (ROC) analyses to confirm the effectiveness of the identified differences in classifying the patients. We observed ReHo differences between the BD-dep and UD groups in the right ventrolateral middle frontal gyrus, right dorsal anterior insular, right ventral anterior insular, right cerebellum posterior gyrus, right posterior cingulate cortex, right parahippocampal gyrus, and left cerebellum anterior gyrus. Further ROI comparisons and ROC analysis on these ROIs showed that the right parahippocampal gyrus reflected abnormality specific to the BD-dep group, while the right
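
    The ROI-based ROC step can be sketched as follows: treat an ROI's ReHo value as a classification score and compute the area under the ROC curve for separating the two patient groups. The numbers below are simulated, not the study's data.

```python
# Illustrative ROC analysis on simulated ROI ReHo values.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(5)
reho_group_a = rng.normal(1.10, 0.15, 21)      # e.g. bipolar depressed (hypothetical values)
reho_group_b = rng.normal(0.95, 0.15, 21)      # e.g. unipolar depressed (hypothetical values)

scores = np.concatenate([reho_group_a, reho_group_b])
labels = np.concatenate([np.ones(21), np.zeros(21)])

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc:.2f}")                      # an AUC near 1 would indicate a useful marker
```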

  19. Biomimetic Sonar for Electrical Activation of the Auditory Pathway

    Directory of Open Access Journals (Sweden)

    D. Menniti

    2017-01-01

    Relying on the mechanism of the bat's echolocation system, a bioinspired electronic device has been developed to investigate the cortical activity of mammals in response to auditory sensory stimuli. By means of implanted electrodes, acoustic information about the external environment, generated by a biomimetic system and converted into electrical signals, was delivered to anatomically selected structures of the auditory pathway. Electrocorticographic recordings showed that the cerebral activity response is highly dependent on the information carried by the ultrasounds and is frequency-locked with the signal repetition rate. Frequency analysis reveals that delta and beta rhythm content increases, suggesting that sensory information is successfully transferred and integrated. In addition, principal component analysis highlights how all the stimuli generate patterns of neural activity that can be clearly classified. The results show that the brain response is modulated by echo signal features, suggesting that spatial information sent by the biomimetic sonar is efficiently interpreted and encoded by the auditory system. Consequently, these results offer a new perspective on artificial environmental perception, which could be used to develop new techniques for treating pathological conditions or influencing our perception of the surroundings.
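
    The classification analysis mentioned above can be illustrated with a toy principal component analysis of trial-wise band-power features: if different stimuli drive different spectral patterns, their trials separate in the reduced space. The feature values below are invented for illustration and do not come from the study.

```python
# Toy PCA of trial-wise band-power features for three simulated stimulus types.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
# 3 stimulus types x 40 trials, each described by delta/theta/alpha/beta/gamma power.
means = np.array([[3.0, 2.0, 1.0, 1.0, 0.5],
                  [1.0, 1.0, 1.0, 3.0, 1.0],
                  [2.0, 1.0, 2.0, 2.0, 2.0]])
features, labels = [], []
for stim, mu in enumerate(means):
    features.append(mu + 0.3 * rng.standard_normal((40, 5)))
    labels += [stim] * 40
X = np.vstack(features)
labels = np.array(labels)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)
for stim in range(3):
    centroid = scores[labels == stim].mean(axis=0)
    print(f"stimulus {stim}: PC centroid = {np.round(centroid, 2)}")   # distinct centroids = separable patterns
```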

  20. Neuroscience illuminating the influence of auditory or phonological intervention on language-related deficits

    Directory of Open Access Journals (Sweden)

    Sari eYlinen

    2015-02-01

    Remediation programs for language-related learning deficits are urgently needed to enable equal opportunities in education. To meet this need, different training and intervention programs have been developed. Here we review, from an educational perspective, studies that have explored the neural basis of behavioral changes induced by auditory or phonological training in dyslexia, specific language impairment (SLI), and language-learning impairment (LLI). Training has been shown to induce plastic changes in deficient neural networks. In dyslexia, these include, most consistently, increased or normalized activation of previously hypoactive inferior frontal and occipito-temporal areas. In SLI and LLI, studies have shown the strengthening of previously weak auditory brain responses as a result of training. The combination of behavioral and brain measures of remedial gains has the potential to increase understanding of the causes of language-related deficits, which may help to target remedial interventions more accurately to the core problem.