WorldWideScience

Sample records for human auditory cortical

  1. An anatomical and functional topography of human auditory cortical areas

    Directory of Open Access Journals (Sweden)

    Michelle Moerel

    2014-07-01

    Full Text Available While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that - whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis - the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e. myelination) as well as of functional properties (e.g. broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  2. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results on twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
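
    The SNR-based comparison described above can be illustrated with a generic sketch: estimate the power of the response at the amplitude-modulation frequency and divide it by the power of neighbouring frequency bins. This is an illustrative reconstruction under assumed parameters, not the estimator used in the cited study.

```python
import numpy as np

def am_response_snr(lfp, fs, mod_freq, n_neighbors=10):
    """Crude SNR at the AM frequency: power in the bin at mod_freq divided by
    the mean power of surrounding (noise) bins. Illustrative only."""
    spectrum = np.abs(np.fft.rfft(lfp * np.hanning(len(lfp)))) ** 2
    freqs = np.fft.rfftfreq(len(lfp), d=1.0 / fs)
    target = np.argmin(np.abs(freqs - mod_freq))
    neighbors = np.r_[target - n_neighbors:target - 1, target + 2:target + n_neighbors + 1]
    neighbors = neighbors[(neighbors >= 0) & (neighbors < len(spectrum))]
    return spectrum[target] / spectrum[neighbors].mean()

# toy response to a 4 Hz AM stimulus, sampled at 1 kHz for 4 s
fs = 1000.0
t = np.arange(0, 4, 1 / fs)
lfp = np.sin(2 * np.pi * 4 * t) + 0.5 * np.random.randn(t.size)  # toy signal
print(am_response_snr(lfp, fs, mod_freq=4.0))
```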

  3. Extensive cochleotopic mapping of human auditory cortical fields obtained with phase-encoding FMRI.

    Directory of Open Access Journals (Sweden)

    Ella Striem-Amit

    Full Text Available The primary sensory cortices are characterized by a topographical mapping of basic sensory features which is considered to deteriorate in higher-order areas in favor of complex sensory features. Recently, however, retinotopic maps were also discovered in the higher-order visual, parietal and prefrontal cortices. The discovery of these maps enabled the distinction between visual regions, clarified their function and hierarchical processing. Could such extension of topographical mapping to high-order processing regions apply to the auditory modality as well? This question has been studied previously in animal models but only sporadically in humans, whose anatomical and functional organization may differ from that of animals (e.g. unique verbal functions and Heschl's gyrus curvature). Here we applied fMRI spectral analysis to investigate the cochleotopic organization of the human cerebral cortex. We found multiple mirror-symmetric novel cochleotopic maps covering most of the core and high-order human auditory cortex, including regions considered non-cochleotopic, stretching all the way to the superior temporal sulcus. These maps suggest that topographical mapping persists well beyond the auditory core and belt, and that the mirror-symmetry of topographical preferences may be a fundamental principle across sensory modalities.
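
    The phase-encoding ("traveling wave") logic behind such cochleotopic mapping can be sketched generically: for each voxel, the Fourier component at the sweep-cycle frequency gives a response strength and a phase, and the phase is mapped back onto the swept frequency range. The function below is a minimal sketch with assumed names and parameters, not the analysis pipeline of the cited study.

```python
import numpy as np

def phase_encoded_preference(ts, n_cycles, sweep=(250.0, 8000.0)):
    """Estimate a voxel's preferred frequency from one phase-encoded run.

    ts       : 1-D BOLD time series (one voxel, one sweep direction)
    n_cycles : number of stimulus sweeps in the run
    sweep    : (low, high) range of the logarithmic frequency sweep, Hz
    Returns (coherence-like amplitude, preferred frequency in Hz)."""
    ts = ts - ts.mean()
    spec = np.fft.rfft(ts)
    amp = np.abs(spec[n_cycles]) / (np.abs(spec[1:]).sum() + 1e-12)
    phase = np.angle(spec[n_cycles])               # position within the sweep cycle
    frac = (-phase % (2 * np.pi)) / (2 * np.pi)
    lo, hi = np.log(sweep[0]), np.log(sweep[1])
    return amp, float(np.exp(lo + frac * (hi - lo)))

# toy voxel: responds ~40% of the way through each of 8 sweeps
n_vols, n_cycles = 256, 8
t = np.arange(n_vols)
toy = np.cos(2 * np.pi * n_cycles * t / n_vols - 2 * np.pi * 0.4) + 0.3 * np.random.randn(n_vols)
print(phase_encoded_preference(toy, n_cycles))
```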

  4. Manipulation of Auditory Inputs as Rehabilitation Therapy for Maladaptive Auditory Cortical Reorganization

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2018-01-01

    Full Text Available Neurophysiological and neuroimaging data suggest that the brains of not only children but also adults are reorganized based on sensory inputs and behaviors. Plastic changes in the brain are generally beneficial; however, maladaptive cortical reorganization in the auditory cortex may lead to hearing disorders such as tinnitus and hyperacusis. Recent studies attempted to noninvasively visualize pathological neural activity in the living human brain and reverse maladaptive cortical reorganization by the suitable manipulation of auditory inputs in order to alleviate detrimental auditory symptoms. The effects of the manipulation of auditory inputs on the maladaptively reorganized brain were reviewed herein. The findings obtained indicate that rehabilitation therapy based on the manipulation of auditory inputs is an effective and safe approach for hearing disorders. The appropriate manipulation of sensory inputs guided by the visualization of pathological brain activities using recent neuroimaging techniques may contribute to the establishment of new clinical applications for affected individuals.

  5. Visual-induced expectations modulate auditory cortical responses

    Directory of Open Access Journals (Sweden)

    Virginie van Wassenhove

    2015-02-01

    Full Text Available Active sensing has important consequences on multisensory processing (Schroeder et al. 2010). Here, we asked whether in the absence of saccades, the position of the eyes and the timing of transient colour changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the where and the when of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or centre of the screen. Participants counted colour changes of the fixation cross while neglecting sounds which could be presented to the left, right or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants’ attention directed to visual inputs. Second, colour changes elicited robust modulations of auditory cortex responses (when prediction) seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses in the contralateral side of sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of when a sound will occur by changing the excitability of auditory cortices irrespective of the attended modality, eye position or spatial congruency of auditory and visual events. To the contrary, auditory cortical responses were not significantly affected by eye position suggesting that where predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds.

  6. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates...

  7. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical

  8. Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing.

    Science.gov (United States)

    Guéguin, Marie; Le Bouquin-Jeannès, Régine; Faucon, Gérard; Chauvel, Patrick; Liégeois-Chauvel, Catherine

    2007-02-01

    The human auditory cortex includes several interconnected areas. A better understanding of the mechanisms involved in auditory cortical functions requires a detailed knowledge of neuronal connectivity between functional cortical regions. In human, it is difficult to track in vivo neuronal connectivity. We investigated the interarea connection in vivo in the auditory cortex using a method of directed coherence (DCOH) applied to depth auditory evoked potentials (AEPs). This paper presents simultaneous AEPs recordings from insular gyrus (IG), primary and secondary cortices (Heschl's gyrus and planum temporale), and associative areas (Brodmann area [BA] 22) with multilead intracerebral electrodes in response to sinusoidal modulated white noises in 4 epileptic patients who underwent invasive monitoring with depth electrodes for epilepsy surgery. DCOH allowed estimation of the causality between 2 signals recorded from different cortical sites. The results showed 1) a predominant auditory stream within the primary auditory cortex from the most medial region to the most lateral one whatever the modulation frequency, 2) unidirectional functional connection from the primary to secondary auditory cortex, 3) a major auditory propagation from the posterior areas to the anterior ones, particularly at 8, 16, and 32 Hz, and 4) a particular role of Heschl's sulcus dispatching information to the different auditory areas. These findings suggest that cortical processing of auditory information is performed in serial and parallel streams. Our data showed that the auditory propagation could not be associated to a unidirectional traveling wave but to a constant interaction between these areas that could reflect the large adaptive and plastic capacities of auditory cortex. The role of the IG is discussed.
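
    Directed coherence of this kind is typically derived from a multivariate autoregressive (MVAR) model of the recorded signals. The sketch below fits a VAR by least squares and applies one common normalisation of the spectral transfer matrix; it is a generic illustration under assumed parameters, not the exact estimator used in the cited paper.

```python
import numpy as np

def fit_var(X, p):
    """Least-squares VAR(p) fit. X: (n_channels, n_samples). Returns A (p, n, n), noise cov."""
    n, T = X.shape
    Y = X[:, p:]                                                  # targets
    Z = np.vstack([X[:, p - k:T - k] for k in range(1, p + 1)])   # stacked lags, (n*p, T-p)
    B = Y @ np.linalg.pinv(Z)                                     # (n, n*p)
    A = B.reshape(n, p, n).transpose(1, 0, 2)                     # A[k]: lag-(k+1) coefficients
    E = Y - B @ Z
    return A, np.cov(E)

def directed_coherence(A, sigma, freqs, fs):
    """DCOH[f, i, j]: influence of channel j on channel i (one common formulation)."""
    p, n, _ = A.shape
    out = np.zeros((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        Af = np.eye(n, dtype=complex)
        for k in range(p):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        H = np.linalg.inv(Af)                                     # spectral transfer matrix
        num = np.abs(H) * np.sqrt(np.diag(sigma))[None, :]
        den = np.sqrt((np.abs(H) ** 2 * np.diag(sigma)[None, :]).sum(axis=1, keepdims=True))
        out[fi] = num / den
    return out

# toy example: channel 1 is driven by a delayed copy of channel 0
T, fs = 5000, 1000.0
x0 = np.random.randn(T)
x1 = np.zeros(T)
x1[2:] = 0.8 * x0[:-2] + 0.2 * np.random.randn(T - 2)
A, sigma = fit_var(np.vstack([x0, x1]), p=5)
dc = directed_coherence(A, sigma, freqs=np.linspace(1, 100, 50), fs=fs)
print(dc.mean(axis=0))   # expect a larger (1,0) entry than (0,1)
```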

  9. Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.

    Science.gov (United States)

    Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S

    2013-02-01

    Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Subsequently, we investigated this response in patients with mild cognitive impairment (MCI) and probable Alzheimer's disease (pAD). Using functional magnetic resonance imaging, 31 healthy participants were studied. The response in 18 MCI and 18 pAD patients was then determined, and compared to 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. The healthy subjects demonstrated that suppression of auditory cortical responses was related to greater success in encoding heard sentences; and that this was also associated with greater activity in the semantic system. In contrast, there was reduced auditory cortical suppression in patients with MCI, and absence of suppression in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing. Copyright © 2012 American Neurological Association.

  10. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory
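
    Stimulus-reconstruction analyses of this kind generally take the form of a linear "backward" model: a regularised regression from time-lagged neural channels back to the speech envelope, whose reconstruction accuracy is then compared across attended and ignored streams. The following is a hedged, generic sketch with assumed lags and regularisation, not the authors' implementation.

```python
import numpy as np

def lagged_design(meg, max_lag):
    """Stack time-lagged copies of each channel. meg: (n_samples, n_channels)."""
    cols = [np.roll(meg, lag, axis=0) for lag in range(max_lag + 1)]
    X = np.concatenate(cols, axis=1)
    X[:max_lag] = 0.0                       # zero out wrapped samples
    return X

def fit_envelope_decoder(meg, envelope, max_lag=50, lam=1e2):
    """Ridge regression from lagged neural channels to the speech envelope."""
    X = lagged_design(meg, max_lag)
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def reconstruct(meg, w, max_lag=50):
    return lagged_design(meg, max_lag) @ w

# toy data: 60 s at 100 Hz, 20 channels, envelope leaks into the channels
fs, T, C = 100, 6000, 20
env = np.abs(np.random.randn(T)).cumsum() % 1.0          # arbitrary toy envelope
meg = 0.1 * np.random.randn(T, C) + np.outer(env, np.random.randn(C))
w = fit_envelope_decoder(meg, env)
print(np.corrcoef(reconstruct(meg, w), env)[0, 1])       # reconstruction accuracy
```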

  11. Humans mimicking animals: A cortical hierarchy for human vocal communication sounds

    Science.gov (United States)

    Talkington, William J.; Rapuano, Kristina M.; Hitt, Laura; Frum, Chris A.; Lewis, James W.

    2012-01-01

    Numerous species possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). In humans, the superior temporal sulci (STS) putatively represent homologous voice-sensitive areas of cortex. However, STS regions have recently been reported to represent auditory experience or “expertise” in general rather than showing exclusive sensitivity to human vocalizations per se. Using functional magnetic resonance imaging and a unique non-stereotypical category of complex human non-verbal vocalizations – human-mimicked versions of animal vocalizations – we found a cortical hierarchy in humans optimized for processing meaningful conspecific utterances. This left-lateralized hierarchy originated near primary auditory cortices and progressed into traditional speech-sensitive areas. These results suggest that the cortical regions supporting vocalization perception are initially organized by sensitivity to the human vocal tract in stages prior to the STS. Additionally, these findings have implications for the developmental time course of conspecific vocalization processing in humans as well as its evolutionary origins. PMID:22674283

  12. Acquisition, Analyses and Interpretation of fMRI Data: A Study on the Effective Connectivity in Human Primary Auditory Cortices

    International Nuclear Information System (INIS)

    Ahmad Nazlim Yusoff; Mazlyfarina Mohamad; Khairiah Abdul Hamid

    2011-01-01

    A study on the effective connectivity characteristics in auditory cortices was conducted on five healthy Malay male subjects aged 20 to 40 years using functional magnetic resonance imaging (fMRI), statistical parametric mapping (SPM5) and dynamic causal modelling (DCM). A silent imaging paradigm was used to reduce scanner-sound artefacts on the functional images. The subjects were instructed to pay attention to a white noise stimulus presented binaurally at an intensity level 70 dB above normal hearing level. Functional specialisation was studied using Matlab-based SPM5 software by means of fixed effects (FFX), random effects (RFX) and conjunction analyses. Individual analyses on all subjects indicated asymmetrical bilateral activation between the left and right auditory cortices in Brodmann areas (BA) 22, 41 and 42, involving the primary and secondary auditory cortices. The three auditory areas in the right and left auditory cortices were selected for determining effective connectivity by constructing 9 network models. Effective connectivity was determined for four of the five subjects; the remaining subject was excluded because the BA22 coordinates were located too far from the BA22 coordinates obtained from the group analysis. DCM results showed the existence of effective connectivity between the three selected auditory areas in both auditory cortices. In the right auditory cortex, BA42 was identified as the input centre, with unidirectional parallel effective connectivities BA42→BA41 and BA42→BA22. For the left auditory cortex, however, the input was BA41, with unidirectional parallel effective connectivities BA41→BA42 and BA41→BA22. The connectivity between the activated auditory areas suggests the existence of a signal pathway in the auditory cortices even when the subject is listening to noise. (author)

  13. Auditory cortical and hippocampal-system mismatch responses to duration deviants in urethane-anesthetized rats.

    Directory of Open Access Journals (Sweden)

    Timo Ruusuvirta

    Full Text Available Any change in the invariant aspects of the auditory environment is of potential importance. The human brain preattentively or automatically detects such changes. The mismatch negativity (MMN) of event-related potentials (ERPs) reflects this initial stage of auditory change detection. The origin of MMN is held to be cortical. The hippocampus is associated with a later generated P3a of ERPs reflecting involuntary attention switches towards auditory changes that are high in magnitude. The evidence for this cortico-hippocampal dichotomy is scarce, however. To shed further light on this issue, auditory cortical and hippocampal-system (CA1, dentate gyrus, subiculum) local-field potentials were recorded in urethane-anesthetized rats. A rare tone in duration (deviant) was interspersed with a repeated tone (standard). Two standard-to-standard (SSI) and standard-to-deviant (SDI) intervals (200 ms vs. 500 ms) were applied in different combinations to vary the observability of responses resembling MMN (mismatch responses). Mismatch responses were observed at 51.5-89 ms with the 500-ms SSI coupled with the 200-ms SDI but not with the three remaining combinations. Most importantly, the responses appeared in both the auditory-cortical and hippocampal locations. The findings suggest that the hippocampus may play a role in (cortical) manifestation of MMN.

  14. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS).

    Science.gov (United States)

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.
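
    fNIRS estimates of oxy-hemoglobin change rest on the modified Beer-Lambert law: optical-density changes at two wavelengths are inverted through the extinction coefficients and an effective path length. The sketch below shows that 2x2 inversion with illustrative placeholder coefficients (assumptions for illustration, not values from the cited study).

```python
import numpy as np

def mbll(delta_od, ext, distance_cm, dpf):
    """Modified Beer-Lambert law: solve for [dHbO, dHbR] from optical-density
    changes at two wavelengths.

    delta_od    : (2,) optical-density change at the two wavelengths
    ext         : (2, 2) extinction coefficients, rows = wavelengths,
                  columns = (HbO, HbR); units set the units of the output
    distance_cm : source-detector separation
    dpf         : (2,) differential path-length factor per wavelength
    """
    L = ext * (distance_cm * np.asarray(dpf))[:, None]   # effective path matrix
    return np.linalg.solve(L, np.asarray(delta_od))

# Placeholder numbers only, not calibrated extinction values:
ext = np.array([[0.69, 1.55],     # ~690 nm: (HbO, HbR)
                [1.10, 0.78]])    # ~830 nm: (HbO, HbR)
print(mbll(delta_od=[0.012, 0.018], ext=ext, distance_cm=3.0, dpf=[6.0, 6.0]))
```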

  15. Spectrotemporal dynamics of auditory cortical synaptic receptive field plasticity.

    Science.gov (United States)

    Froemke, Robert C; Martins, Ana Raquel O

    2011-09-01

    The nervous system must dynamically represent sensory information in order for animals to perceive and operate within a complex, changing environment. Receptive field plasticity in the auditory cortex allows cortical networks to organize around salient features of the sensory environment during postnatal development, and then subsequently refine these representations depending on behavioral context later in life. Here we review the major features of auditory cortical receptive field plasticity in young and adult animals, focusing on modifications to frequency tuning of synaptic inputs. Alteration in the patterns of acoustic input, including sensory deprivation and tonal exposure, leads to rapid adjustments of excitatory and inhibitory strengths that collectively determine the suprathreshold tuning curves of cortical neurons. Long-term cortical plasticity also requires co-activation of subcortical neuromodulatory control nuclei such as the cholinergic nucleus basalis, particularly in adults. Regardless of developmental stage, regulation of inhibition seems to be a general mechanism by which changes in sensory experience and neuromodulatory state can remodel cortical receptive fields. We discuss recent findings suggesting that the microdynamics of synaptic receptive field plasticity unfold as a multi-phase set of distinct phenomena, initiated by disrupting the balance between excitation and inhibition, and eventually leading to wide-scale changes to many synapses throughout the cortex. These changes are coordinated to enhance the representations of newly-significant stimuli, possibly for improved signal processing and language learning in humans. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS)

    Directory of Open Access Journals (Sweden)

    Mohamad Issa

    2016-01-01

    Full Text Available Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  17. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  18. Functional sex differences in human primary auditory cortex

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W. J.; Willemsen, Antoon T. M.

    2007-01-01

    Background We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a

  19. Differential Recruitment of Auditory Cortices in the Consolidation of Recent Auditory Fearful Memories.

    Science.gov (United States)

    Cambiaghi, Marco; Grosso, Anna; Renna, Annamaria; Sacchetti, Benedetto

    2016-08-17

    Memories of frightening events require a protracted consolidation process. Sensory cortex, such as the auditory cortex, is involved in the formation of fearful memories with a more complex sensory stimulus pattern. It remains controversial, however, whether the auditory cortex is also required for fearful memories related to simple sensory stimuli. In the present study, we found that, 1 d after training, the temporary inactivation of either the most anterior region of the auditory cortex, including the primary (Te1) cortex, or the most posterior region, which included the secondary (Te2) component, did not affect the retention of recent memories, which is consistent with the current literature. However, at this time point, the inactivation of the entire auditory cortices completely prevented the formation of new memories. Amnesia was site specific and was not due to auditory stimuli perception or processing and strictly related to the interference with memory consolidation processes. Strikingly, at a late time interval 4 d after training, blocking the posterior part (encompassing the Te2) alone impaired memory retention, whereas the inactivation of the anterior part (encompassing the Te1) left memory unaffected. Together, these data show that the auditory cortex is necessary for the consolidation of auditory fearful memories related to simple tones in rats. Moreover, these results suggest that, at early time intervals, memory information is processed in a distributed network composed of both the anterior and the posterior auditory cortical regions, whereas, at late time intervals, memory processing is concentrated in the most posterior part containing the Te2 region. Memories of threatening experiences undergo a prolonged process of "consolidation" to be maintained for a long time. The dynamic of fearful memory consolidation is poorly understood. Here, we show that 1 d after learning, memory is processed in a distributed network composed of both primary Te1 and

  20. Membrane potential dynamics of populations of cortical neurons during auditory streaming

    Science.gov (United States)

    Farley, Brandon J.

    2015-01-01

    How a mixture of acoustic sources is perceptually organized into discrete auditory objects remains unclear. One current hypothesis postulates that perceptual segregation of different sources is related to the spatiotemporal separation of cortical responses induced by each acoustic source or stream. In the present study, the dynamics of subthreshold membrane potential activity were measured across the entire tonotopic axis of the rodent primary auditory cortex during the auditory streaming paradigm using voltage-sensitive dye imaging. Consistent with the proposed hypothesis, we observed enhanced spatiotemporal segregation of cortical responses to alternating tone sequences as their frequency separation or presentation rate was increased, both manipulations known to promote stream segregation. However, across most streaming paradigm conditions tested, a substantial cortical region maintaining a response to both tones coexisted with more peripheral cortical regions responding more selectively to one of them. We propose that these coexisting subthreshold representation types could provide neural substrates to support the flexible switching between the integrated and segregated streaming percepts. PMID:26269558

  1. Populations of auditory cortical neurons can accurately encode acoustic space across stimulus intensity.

    Science.gov (United States)

    Miller, Lee M; Recanzone, Gregg H

    2009-04-07

    The auditory cortex is critical for perceiving a sound's location. However, there is no topographic representation of acoustic space, and individual auditory cortical neurons are often broadly tuned to stimulus location. It thus remains unclear how acoustic space is represented in the mammalian cerebral cortex and how it could contribute to sound localization. This report tests whether the firing rates of populations of neurons in different auditory cortical fields in the macaque monkey carry sufficient information to account for horizontal sound localization ability. We applied an optimal neural decoding technique, based on maximum likelihood estimation, to populations of neurons from 6 different cortical fields encompassing core and belt areas. We found that the firing rates of neurons in the caudolateral area contain enough information to account for sound localization ability, but neurons in other tested core and belt cortical areas do not. These results provide a detailed and plausible population model of how acoustic space could be represented in the primate cerebral cortex and support a dual stream processing model of auditory cortical processing.
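
    A maximum-likelihood population decoder of the general kind referred to here can be sketched as follows: given each neuron's expected spike count at every candidate azimuth, choose the azimuth that maximises the Poisson log-likelihood of the observed counts. The tuning curves and parameters below are toy assumptions, not the study's fitted values.

```python
import numpy as np

def ml_decode_azimuth(counts, tuning, azimuths):
    """counts : (n_neurons,) observed spike counts on one trial
    tuning : (n_neurons, n_azimuths) expected counts per neuron and location
    Returns the azimuth with the highest Poisson log-likelihood."""
    rates = np.clip(tuning, 1e-9, None)                   # avoid log(0)
    loglik = counts @ np.log(rates) - rates.sum(axis=0)   # Poisson LL up to a constant
    return azimuths[np.argmax(loglik)]

# toy population: broadly tuned neurons with random preferred azimuths
rng = np.random.default_rng(0)
azimuths = np.arange(-90, 91, 15)                         # degrees
prefs = rng.uniform(-90, 90, size=64)
tuning = 5 + 20 * np.exp(-(azimuths[None, :] - prefs[:, None]) ** 2 / (2 * 40.0 ** 2))
true_az = 30
counts = rng.poisson(tuning[:, np.argmin(np.abs(azimuths - true_az))])
print(ml_decode_azimuth(counts, tuning, azimuths))        # should be near 30
```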

  2. Task-specific reorganization of the auditory cortex in deaf humans.

    Science.gov (United States)

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  3. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
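
    The discrimination index d' reported here is standard signal-detection arithmetic: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch, using one common correction for extreme rates (an assumption, not necessarily the authors' convention):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate), with a simple
    log-linear correction to keep rates away from 0 and 1."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hr) - norm.ppf(far)

# e.g. old/new recognition with 40 repeated and 40 new sounds
print(d_prime(hits=33, misses=7, false_alarms=9, correct_rejections=31))
```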

  4. Auditory cortical volumes and musical ability in Williams syndrome.

    Science.gov (United States)

    Martens, Marilee A; Reutens, David C; Wilson, Sarah J

    2010-07-01

    Individuals with Williams syndrome (WS) have been shown to have atypical morphology in the auditory cortex, an area associated with aspects of musicality. Some individuals with WS have demonstrated specific musical abilities, despite intellectual delays. Primary auditory cortex and planum temporale volumes were manually segmented in 25 individuals with WS and 25 control participants, and the participants also underwent testing of musical abilities. Left and right planum temporale volumes were significantly larger in the participants with WS than in controls, with no significant difference noted between groups in planum temporale asymmetry or primary auditory cortical volumes. Left planum temporale volume was significantly increased in a subgroup of the participants with WS who demonstrated specific musical strengths, as compared to the remaining WS participants, and was highly correlated with scores on a musical task. These findings suggest that differences in musical ability within WS may be in part associated with variability in the left auditory cortical region, providing further evidence of cognitive and neuroanatomical heterogeneity within this syndrome. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  5. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    Science.gov (United States)

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.

  6. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom

  7. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Full Text Available Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  8. Intracerebral evidence of rhythm transform in the human auditory cortex.

    Science.gov (United States)

    Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis

    2017-07-01

    Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
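
    Frequency tagging reduces to reading amplitudes off the response spectrum at the frequencies contained in the rhythm envelope and asking whether the beat frequency is selectively enhanced. A toy sketch under assumed frequencies and sampling parameters, not the study's actual analysis:

```python
import numpy as np

def frequency_tagged_amplitudes(signal, fs, target_freqs):
    """Amplitude spectrum of a steady-state response at a set of tagged frequencies.
    Returns {frequency: amplitude}. Illustrative; noise correction omitted."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return {f: spec[np.argmin(np.abs(freqs - f))] for f in target_freqs}

# toy local field potential: envelope frequencies at 1.25, 2.5 and 5 Hz,
# with a selective gain at the putative beat frequency (2.5 Hz)
fs, dur = 500.0, 32.0
t = np.arange(0, dur, 1 / fs)
lfp = (0.5 * np.sin(2 * np.pi * 1.25 * t)
       + 1.5 * np.sin(2 * np.pi * 2.5 * t)     # beat frequency, amplified
       + 0.5 * np.sin(2 * np.pi * 5.0 * t)
       + np.random.randn(t.size))
amps = frequency_tagged_amplitudes(lfp, fs, target_freqs=[1.25, 2.5, 5.0])
print(amps, amps[2.5] / np.mean([amps[1.25], amps[5.0]]))   # relative beat gain
```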

  9. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    Directory of Open Access Journals (Sweden)

    Christo Pantev

    2012-06-01

    Full Text Available Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for three hours inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus - tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies > 8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive, and low-cost treatment approach for tonal tinnitus into routine clinical practice.
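
    The core signal-processing step of TMNMT is removing energy from the music in a band centred on the individual tinnitus frequency. A hedged sketch of such a notch (band-stop) filter with SciPy, assuming a one-octave notch width; the actual protocol's notch width and filtering details may differ.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def notch_music(audio, fs, tinnitus_hz, octaves=1.0, order=4):
    """Remove a band of width `octaves` centred (logarithmically) on the
    tinnitus frequency. Width and filter order are illustrative assumptions."""
    low = tinnitus_hz / 2 ** (octaves / 2)
    high = tinnitus_hz * 2 ** (octaves / 2)
    sos = butter(order, [low, high], btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, audio)

# toy example: white noise standing in for music, 6 kHz tinnitus frequency
fs = 44100
audio = np.random.randn(fs * 5)          # 5 s of noise
notched = notch_music(audio, fs, tinnitus_hz=6000.0)
```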

  10. Cortical pitch regions in humans respond primarily to resolved harmonics and are located in specific tonotopic regions of anterior auditory cortex.

    Science.gov (United States)

    Norman-Haignere, Sam; Kanwisher, Nancy; McDermott, Josh H

    2013-12-11

    Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.

  11. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment.

    Science.gov (United States)

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus-tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy

  12. Effects of parietal TMS on visual and auditory processing at the primary cortical level -- a concurrent TMS-fMRI study

    DEFF Research Database (Denmark)

    Leitão, Joana; Thielscher, Axel; Werner, Sebastian

    2013-01-01

    Accumulating evidence suggests that multisensory interactions emerge already at the primary cortical level. Specifically, auditory inputs were shown to suppress activations in visual cortices when presented alone but amplify the blood oxygen level-dependent (BOLD) responses to concurrent visual … cortices under 3 sensory contexts: visual, auditory, and no stimulation. IPS-TMS increased activations in auditory cortices irrespective of sensory context as a result of direct and nonspecific auditory TMS side effects. In contrast, IPS-TMS modulated activations in the visual cortex in a state… deactivations induced by auditory activity to TMS sounds. TMS to IPS may increase the responses in visual (or auditory) cortices to visual (or auditory) stimulation via a gain control mechanism or crossmodal interactions. Collectively, our results demonstrate that understanding TMS effects on (uni…

  13. Real-time classification of auditory sentences using evoked cortical activity in humans

    Science.gov (United States)

    Moses, David A.; Leonard, Matthew K.; Chang, Edward F.

    2018-06-01

    Objective. Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real-time, which is necessary for neuroprosthetic brain-computer interfaces. Approach. Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real-time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real-time from neural activity patterns using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited vocabulary setting. Significance. Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.
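
    The direct sentence-level classification scheme can be approximated, in spirit, by a regularised linear classifier over flattened electrode-by-time high-gamma features. The sketch below uses simulated data and scikit-learn; it is an assumed stand-in for illustration, not the rtNSR package or its actual classifiers.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Simulated stand-in for high-gamma ECoG features:
# 10 sentences x 8 repetitions, 64 electrodes, 50 time bins per trial.
rng = np.random.default_rng(1)
n_sent, n_rep, n_elec, n_bins = 10, 8, 64, 50
templates = rng.normal(size=(n_sent, n_elec, n_bins))            # per-sentence pattern
X = np.concatenate([templates[s] + 0.8 * rng.normal(size=(n_rep, n_elec, n_bins))
                    for s in range(n_sent)])
y = np.repeat(np.arange(n_sent), n_rep)
X = X.reshape(len(y), -1)                                        # flatten electrodes x time

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, y, cv=4).mean())                   # single-trial accuracy
```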

  14. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    OpenAIRE

    Matusz, P.J.; Thelen, A.; Amrein, S.; Geiser, E.; Anken, J.; Murray, M.M.

    2015-01-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a ...

  15. Binaural beats increase interhemispheric alpha-band coherence between auditory cortices.

    Science.gov (United States)

    Solcà, Marco; Mottaz, Anaïs; Guggisberg, Adrian G

    2016-02-01

    Binaural beats (BBs) are an auditory illusion occurring when two tones of slightly different frequency are presented separately to each ear. BBs have been suggested to alter physiological and cognitive processes through synchronization of the brain hemispheres. To test this, we recorded electroencephalograms (EEG) at rest and while participants listened to BBs or a monaural control condition during which both tones were presented to both ears. We calculated for each condition the interhemispheric coherence, which expressed the synchrony between neural oscillations of both hemispheres. Compared to monaural beats and resting state, BBs enhanced interhemispheric coherence between the auditory cortices. Beat frequencies in the alpha (10 Hz) and theta (4 Hz) frequency range both increased interhemispheric coherence selectively at alpha frequencies. In a second experiment, we evaluated whether this coherence increase has a behavioral aftereffect on binaural listening. No effects were observed in a dichotic digit task performed immediately after BBs presentation. Our results suggest that BBs enhance alpha-band oscillation synchrony between the auditory cortices during auditory stimulation. This effect seems to reflect binaural integration rather than entrainment. Copyright © 2015 Elsevier B.V. All rights reserved.
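
    Interhemispheric coherence of the kind reported here can be computed with standard spectral tools. The sketch below is a toy example (assumed electrode pair, sampling rate, and synthetic data; not the study's pipeline) that reads out magnitude-squared coherence in the alpha band:

        import numpy as np
        from scipy.signal import coherence

        rng = np.random.default_rng(0)
        fs = 250.0                                   # assumed EEG sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)
        shared = np.sin(2 * np.pi * 10 * t)          # shared 10 Hz (alpha) component
        left = shared + 0.5 * rng.standard_normal(t.size)    # stand-in for a left temporal channel
        right = shared + 0.5 * rng.standard_normal(t.size)   # stand-in for a right temporal channel

        f, cxy = coherence(left, right, fs=fs, nperseg=int(2 * fs))
        alpha = (f >= 8) & (f <= 12)
        print(f"mean alpha-band coherence: {cxy[alpha].mean():.2f}")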

  16. Hearing with Two Ears: Evidence for Cortical Binaural Interaction during Auditory Processing.

    Science.gov (United States)

    Henkin, Yael; Yaar-Soffer, Yifat; Givon, Lihi; Hildesheimer, Minka

    2015-04-01

    Integration of information presented to the two ears has been shown to manifest in binaural interaction components (BICs) that occur along the ascending auditory pathways. In humans, BICs have been studied predominantly at the brainstem and thalamocortical levels; however, understanding of higher cortically driven mechanisms of binaural hearing is limited. The aim was to explore whether BICs are evident in auditory event-related potentials (AERPs) during the advanced perceptual and postperceptual stages of cortical processing. The AERPs N1, P3, and a late negative component (LNC) were recorded from multiple site electrodes while participants performed an oddball discrimination task that consisted of natural speech syllables (/ka/ vs. /ta/) that differed by place-of-articulation. Participants were instructed to respond to the target stimulus (/ta/) while performing the task in three listening conditions: monaural right, monaural left, and binaural. Participants were fifteen young adults (21-32 yr; 6 females) with normal hearing sensitivity. By subtracting the response to target stimuli elicited in the binaural condition from the sum of responses elicited in the monaural right and left conditions, the BIC waveform was derived and the latencies and amplitudes of the components were measured. The maximal interaction was calculated by dividing BIC amplitude by the summed right and left response amplitudes. In addition, the latencies and amplitudes of the AERPs to target stimuli elicited in the monaural right, monaural left, and binaural listening conditions were measured and subjected to analysis of variance with repeated measures testing the effect of listening condition and laterality. Three consecutive BICs were identified at a mean latency of 129, 406, and 554 msec, and were labeled N1-BIC, P3-BIC, and LNC-BIC, respectively. Maximal interaction increased significantly with progression of auditory processing from perceptual to postperceptual stages and amounted to 51%, 55%, and 75% of the sum of the monaural responses.
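
    The derivation described above reduces to simple waveform arithmetic, sketched below with illustrative numbers rather than the study's data:

        def bic_waveform(binaural, mono_left, mono_right):
            """BIC = (left + right monaural responses) - binaural response."""
            return (mono_left + mono_right) - binaural

        def maximal_interaction(bic_amp, left_amp, right_amp):
            """Express the BIC amplitude as a fraction of the summed monaural amplitudes."""
            return bic_amp / (left_amp + right_amp)

        # Illustrative only: a 3 µV BIC against 2.9 µV + 3.1 µV monaural responses.
        print(f"{maximal_interaction(3.0, 2.9, 3.1):.0%}")   # -> 50%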

  17. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals, and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  18. Syllabic discrimination in premature human infants prior to complete formation of cortical layers

    OpenAIRE

    Mahmoudzadeh, Mahdi; Dehaene-Lambertz, Ghislaine; Fournier, Marc; Kongolo, Guy; Goudjil, Sabrina; Dubois, Jessica; Grebe, Reinhard; Wallois, Fabrice

    2013-01-01

    The ontogeny of linguistic functions in the human brain remains elusive. Although some auditory capacities are described before term, whether and how such immature cortical circuits might process speech are unknown. Here we used functional optical imaging to evaluate the cerebral responses to syllables at the earliest age at which cortical responses to external stimuli can be recorded in humans (28- to 32-wk gestational age). At this age, the cortical organization in layers is not completed. ...

  19. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    Science.gov (United States)

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background: Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on the ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods: The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results: For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion: Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  20. Assessment of auditory cortical function in cochlear implant patients using 15O PET

    International Nuclear Information System (INIS)

    Young, J.P.; O'Sullivan, B.T.; Gibson, W.P.; Sefton, A.E.; Mitchell, T.E.; Sanli, H.; Cervantes, R.; Withall, A.; Royal Prince Alfred Hospital, Sydney,

    1998-01-01

    Full text: Cochlear implantation has been an extraordinarily successful method of restoring hearing and the potential for full language development in pre-lingually and post-lingually deaf individuals (Gibson 1996). Post-lingually deaf patients, who develop their hearing loss later in life, respond best to cochlear implantation within the first few years of their deafness, but are less responsive to implantation after several years of deafness (Gibson 1996). In pre-lingually deaf children, cochlear implantation is most effective in allowing the full development of language skills when performed within a critical period, in the first 8 years of life. These clinical observations suggest considerable neural plasticity of the human auditory cortex in acquiring and retaining language skills (Gibson 1996, Buchwald 1990). Currently, electrocochleography is used to determine the integrity of the auditory pathways to the auditory cortex. However, the functional integrity of the auditory cortex cannot be determined by this method. We have defined the extent of activation of the auditory cortex and auditory association cortex in 6 normal controls and 6 cochlear implant patients using 15O PET functional brain imaging methods. Preliminary results have indicated the potential clinical utility of 15O PET cortical mapping in the pre-surgical assessment and post-surgical follow up of cochlear implant patients. Copyright (1998) Australian Neuroscience Society

  1. Prepulse Inhibition of Auditory Cortical Responses in the Caudolateral Superior Temporal Gyrus in Macaca mulatta.

    Science.gov (United States)

    Chen, Zuyue; Parkkonen, Lauri; Wei, Jingkuan; Dong, Jin-Run; Ma, Yuanye; Carlson, Synnöve

    2018-04-01

    Prepulse inhibition (PPI) refers to a decreased response to a startling stimulus when another weaker stimulus precedes it. Most PPI studies have focused on the physiological startle reflex and fewer have reported the PPI of cortical responses. We recorded local field potentials (LFPs) in four monkeys and investigated whether the PPI of auditory cortical responses (alpha, beta, and gamma oscillations and evoked potentials) can be demonstrated in the caudolateral belt of the superior temporal gyrus (STGcb). We also investigated whether the presence of a conspecific, which draws attention away from the auditory stimuli, affects the PPI of auditory cortical responses. The PPI paradigm consisted of Pulse-only and Prepulse + Pulse trials that were presented randomly while the monkey was alone (ALONE) and while another monkey was present in the same room (ACCOMP). The LFPs to the Pulse were significantly suppressed by the Prepulse, thus demonstrating PPI of cortical responses in the STGcb. The PPI-related inhibition of the N1 amplitude of the evoked responses and cortical oscillations to the Pulse were not affected by the presence of a conspecific. In contrast, gamma oscillations and the amplitude of the N1 response to Pulse-only were suppressed in the ACCOMP condition compared to the ALONE condition. These findings demonstrate PPI in the monkey STGcb and suggest that the PPI of auditory cortical responses in the monkey STGcb is a pre-attentive inhibitory process that is independent of attentional modulation.
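
    A common convention (not spelled out in this abstract) is to express PPI of any response measure as a percent reduction relative to the pulse-alone response; the sketch below uses arbitrary illustrative values, not the LFP data from this study:

        def ppi_percent(pulse_only, prepulse_pulse):
            """Percent reduction of the response when a prepulse precedes the pulse."""
            return 100.0 * (pulse_only - prepulse_pulse) / pulse_only

        # e.g., an N1-like amplitude dropping from 12 to 7.5 arbitrary units.
        print(f"{ppi_percent(12.0, 7.5):.1f}% inhibition")   # -> 37.5% inhibition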

  2. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey.

    Science.gov (United States)

    Scott, Brian H; Leccese, Paul A; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Mullarkey, Matthew P; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C

    2017-01-01

    In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  3. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng Rong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a suppression effect that was not task-specific was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta band (12-20 Hz) DMS-specific enhanced functional interaction between the sources in the left auditory cortex and those in the left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.

  4. Towards an optimal paradigm for simultaneously recording cortical and brainstem auditory evoked potentials.

    Science.gov (United States)

    Bidelman, Gavin M

    2015-02-15

    Simultaneous recording of brainstem and cortical event-related brain potentials (ERPs) may offer a valuable tool for understanding the early neural transcription of behaviorally relevant sounds and the hierarchy of signal processing operating at multiple levels of the auditory system. To date, dual recordings have been challenged by technological and physiological limitations including different optimal parameters necessary to elicit each class of ERP (e.g., differential adaptation/habituation effects and number of trials to obtain adequate response signal-to-noise ratio). We investigated a new stimulus paradigm for concurrent recording of the auditory brainstem frequency-following response (FFR) and cortical ERPs. The paradigm is "optimal" in that it uses a clustered stimulus presentation and variable interstimulus interval (ISI) to (i) achieve the most ideal acquisition parameters for eliciting subcortical and cortical responses, (ii) obtain an adequate number of trials to detect each class of response, and (iii) minimize neural adaptation/habituation effects. Comparison between clustered and traditional (fixed, slow ISI) stimulus paradigms revealed minimal change in amplitude or latencies of either the brainstem FFR or cortical ERP. The clustered paradigm offered over a 3× increase in recording efficiency compared to conventional (fixed ISI presentation) and thus a more rapid protocol for obtaining dual brainstem-cortical recordings in individual listeners. We infer that faster recording of subcortical and cortical potentials might allow more complete and sensitive testing of neurophysiological function and aid in the differential assessment of auditory function. Copyright © 2014 Elsevier B.V. All rights reserved.
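
    The clustering idea can be sketched as a stimulus-onset schedule in which rapid within-cluster repeats serve the FFR while a longer, jittered gap before each cluster protects the cortical ERP from adaptation. All timing values below are assumptions for illustration, not the published parameters:

        import numpy as np

        rng = np.random.default_rng(1)
        token_dur = 0.10          # stimulus token duration (s), assumed
        intra_isi = 0.01          # short gap between tokens within a cluster (s), assumed
        tokens_per_cluster = 4    # assumed cluster size
        n_clusters = 5

        onsets, t = [], 0.0
        for _ in range(n_clusters):
            t += rng.uniform(0.8, 1.2)        # jittered inter-cluster interval (s), assumed
            for _ in range(tokens_per_cluster):
                onsets.append(round(t, 3))
                t += token_dur + intra_isi
        print(onsets)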

  5. Magnetoencephalographic Imaging of Auditory and Somatosensory Cortical Responses in Children with Autism and Sensory Processing Dysfunction

    Directory of Open Access Journals (Sweden)

    Carly Demopoulos

    2017-05-01

    Full Text Available This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain.

  6. Behavioral lifetime of human auditory sensory memory predicted by physiological measures.

    Science.gov (United States)

    Lu, Z L; Williamson, S J; Kaufman, L

    1992-12-04

    Noninvasive magnetoencephalography makes it possible to identify the cortical area in the human brain whose activity reflects the decay of passive sensory storage of information about auditory stimuli (echoic memory). The lifetime for decay of the neuronal activation trace in primary auditory cortex was found to predict the psychophysically determined duration of memory for the loudness of a tone. Although memory for the loudness of a specific tone is lost, the remembered loudness decays toward the global mean of all of the loudnesses to which a subject is exposed in a series of trials.

  7. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  8. Noninvasive scalp recording of cortical auditory evoked potentials in the alert macaque monkey.

    Science.gov (United States)

    Itoh, Kosuke; Nejime, Masafumi; Konoike, Naho; Nakada, Tsutomu; Nakamura, Katsuki

    2015-09-01

    Scalp-recorded evoked potentials (EP) provide researchers and clinicians with irreplaceable means for recording stimulus-related neural activities in the human brain, due to their high temporal resolution, handiness, and, perhaps more importantly, non-invasiveness. This work recorded the scalp cortical auditory EP (CAEP) in unanesthetized monkeys by using methods that are essentially identical to those applied to humans. Young adult rhesus monkeys (Macaca mulatta, 5-7 years old) were seated in a monkey chair, and their head movements were partially restricted by polystyrene blocks and tension poles placed around their head. Individual electrodes were affixed to their scalp using collodion according to the 10-20 system. Pure tone stimuli were presented while electroencephalograms were recorded from up to nineteen channels, including an electrooculogram channel. In all monkeys (n = 3), the recorded CAEP comprised a series of positive and negative deflections, labeled here as macaque P1 (mP1), macaque N1 (mN1), macaque P2 (mP2), and macaque N2 (mN2), and these transient responses to sound onset were followed by a sustained potential that continued for the duration of the sound, labeled the macaque sustained potential (mSP). mP1, mN2 and mSP were the prominent responses, and they had maximal amplitudes over frontal/central midline electrode sites, consistent with generators in auditory cortices. The study represents the first noninvasive scalp recording of CAEP in alert rhesus monkeys, to our knowledge. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Switching auditory attention using spatial and non-spatial features recruits different cortical networks.

    Science.gov (United States)

    Larson, Eric; Lee, Adrian K C

    2014-01-01

    Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies. © 2013 Elsevier Inc. All rights reserved.

  10. Visual Processing Recruits the Auditory Cortices in Prelingually Deaf Children and Influences Cochlear Implant Outcomes.

    Science.gov (United States)

    Liang, Maojin; Chen, Yuebo; Zhao, Fei; Zhang, Junpeng; Liu, Jiahao; Zhang, Xueyuan; Cai, Yuexin; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing

    2017-09-01

    Although visual processing recruitment of the auditory cortices has been reported previously in prelingually deaf children who have a rapidly developing brain and no auditory processing, the visual processing recruitment of auditory cortices might be different in processing different visual stimuli and may affect cochlear implant (CI) outcomes. Ten prelingually deaf children, 4 to 6 years old, were recruited for the study. Twenty prelingually deaf subjects, 4 to 6 years old with CIs for 1 year, were also recruited; 10 with well-performing CIs, 10 with poorly performing CIs. Ten age and sex-matched normal-hearing children were recruited as controls. Visual ("sound" photo [photograph with imaginative sound] and "nonsound" photo [photograph without imaginative sound]) evoked potentials were measured in all subjects. P1 at Oz and N1 at the bilateral temporal-frontal areas (FC3 and FC4) were compared. N1 amplitudes were strongest in the deaf children, followed by those with poorly performing CIs, controls and those with well-performing CIs. There was no significant difference between controls and those with well-performing CIs. "Sound" photo stimuli evoked a stronger N1 than "nonsound" photo stimuli. Further analysis showed that only at FC4 in deaf subjects and those with poorly performing CIs were the N1 responses to "sound" photo stimuli stronger than those to "nonsound" photo stimuli. No significant difference was found for the FC3 and FC4 areas. No significant difference was found in N1 latencies and P1 amplitudes or latencies. The results indicate enhanced visual recruitment of the auditory cortices in prelingually deaf children. Additionally, the decrement in visual recruitment of auditory cortices was related to good CI outcomes.

  11. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  12. Functional sex differences in human primary auditory cortex

    International Nuclear Information System (INIS)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W.J.; Willemsen, Antoon T.M.

    2007-01-01

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  13. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  14. Neurophysiological evidence for context-dependent encoding of sensory input in human auditory cortex.

    Science.gov (United States)

    Sussman, Elyse; Steinschneider, Mitchell

    2006-02-23

    Attention biases the way in which sound information is stored in auditory memory. Little is known, however, about the contribution of stimulus-driven processes in forming and storing coherent sound events. An electrophysiological index of cortical auditory change detection (mismatch negativity [MMN]) was used to assess whether sensory memory representations could be biased toward one organization over another (one or two auditory streams) without attentional control. Results revealed that sound representations held in sensory memory biased the organization of subsequent auditory input. The results demonstrate that context-dependent sound representations modulate stimulus-dependent neural encoding at early stages of auditory cortical processing.
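
    The MMN index referred to above is conventionally computed as a difference wave: the averaged response to deviant sounds minus the averaged response to standard sounds. A toy sketch with synthetic averages (not the study's data):

        import numpy as np

        fs = 500.0
        t = np.arange(-0.1, 0.4, 1 / fs)                               # peristimulus time (s)
        standard = -2.0 * np.exp(-((t - 0.10) ** 2) / 0.001)           # illustrative N1-like average
        deviant = standard - 1.5 * np.exp(-((t - 0.15) ** 2) / 0.002)  # added negativity near 150 ms
        mmn = deviant - standard                                       # the MMN difference wave
        print(f"MMN peak: {mmn.min():.2f} µV at {t[mmn.argmin()] * 1000:.0f} ms")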

  15. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    Science.gov (United States)

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  16. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2014-01-01

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  17. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Directory of Open Access Journals (Sweden)

    Georg Berding

    Full Text Available Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  18. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Science.gov (United States)

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  19. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
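
    One standard way to quantify the phase tracking described above is inter-trial phase coherence (ITPC): band-pass the single-trial responses, extract the analytic phase, and average the unit phase vectors across trials. The sketch below uses synthetic theta-locked trials and assumed band edges, not the MEG data from this study:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def itpc(trials, fs, band):
            """trials: (n_trials, n_times). Returns ITPC per time point (1 = perfectly consistent phase)."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
            phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
            return np.abs(np.exp(1j * phases).mean(axis=0))

        rng = np.random.default_rng(3)
        fs = 500.0
        t = np.arange(0, 1, 1 / fs)
        trials = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal((50, t.size))  # theta-locked toy trials
        print(f"mean theta-band ITPC: {itpc(trials, fs, (4, 8)).mean():.2f}")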

  20. Knowledge about Sounds – Context-Specific Meaning Differently Activates Cortical Hemispheres, Auditory Cortical Fields and Layers in House Mice

    Directory of Open Access Journals (Sweden)

    Diana B. Geissler

    2016-03-01

    Full Text Available Activation of the auditory cortex (AC) by a given sound pattern is plastic, depending, in largely unknown ways, on the physiological state and the behavioral context of the receiving animal and on the receiver's experience with the sounds. Such plasticity can be inferred when house mouse mothers respond maternally to pup ultrasounds right after parturition and naïve females have to learn to respond. Here we use c-FOS immunocytochemistry to quantify highly activated neurons in the AC fields and layers of seven groups of mothers and naïve females who have different knowledge about and are differently motivated to respond to acoustic models of pup ultrasounds of different behavioral significance. Profiles of FOS-positive cells in the AC primary fields (AI, AAF), the ultrasonic field (UF), the secondary field (AII), and the dorsoposterior field (DP) suggest that activation reflects in AI, AAF, and UF the integration of sound properties with animal state-dependent factors, in the higher-order field AII the news value of a given sound in the behavioral context, and in the higher-order field DP the level of maternal motivation and, by left-hemisphere activation advantage, the recognition of the meaning of sounds in the given context. Anesthesia reduced activation in all fields, especially in cortical layers 2/3. Thus, plasticity in the AC is field-specific preparing different output of AC fields in the process of perception, recognition and responding to communication sounds. Further, the activation profiles of the auditory cortical fields suggest the differentiation between brains hormonally primed to know (mothers) and brains which acquired knowledge via implicit learning (naïve females). In this way, auditory cortical activation discriminates between instinctive (mothers) and learned (naïve females) cognition.

  1. Assessment of hearing threshold in adults with hearing loss using an automated system of cortical auditory evoked potential detection

    Directory of Open Access Journals (Sweden)

    Alessandra Spada Durante

    Full Text Available Introduction: The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum, autism, and intellectual deficits, and in adults and the elderly with dementia. These populations (or individuals) are unable to undergo a behavioral assessment, and generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. Objective: To determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. Methods: The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group) and 31 adults with normal hearing (control group). An automated system of detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. Results: The cortical electrophysiological threshold was, on average, 7.8 dB higher than the behavioral threshold.

  2. Assessment of hearing threshold in adults with hearing loss using an automated system of cortical auditory evoked potential detection.

    Science.gov (United States)

    Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia

    The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum, autism, and intellectual deficits, and in adults and the elderly with dementia. These populations (or individuals) are unable to undergo a behavioral assessment, and generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. To determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group); and 31 adults with normal hearing (control group). An automated system of detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. The cortical electrophysiological threshold was, on average, 7.8 dB higher than the behavioral threshold.
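
    The correlation step is an ordinary linear regression of cortical electrophysiological thresholds on behavioral thresholds, sketched below with synthetic numbers (not the study's data):

        import numpy as np
        from scipy.stats import linregress

        rng = np.random.default_rng(4)
        behavioral = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])          # dB HL, illustrative
        cortical = behavioral + 7.8 + rng.normal(0.0, 3.0, behavioral.size)  # ~7.8 dB offset, as reported

        fit = linregress(behavioral, cortical)
        print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.1f} dB, r = {fit.rvalue:.2f}")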

  3. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Directory of Open Access Journals (Sweden)

    James Bigelow

    Full Text Available Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  4. Achilles’ Ear? Inferior Human Short-Term and Recognition Memory in the Auditory Modality

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2014-01-01

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects’ retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1–4 s). However, at longer retention intervals (8–32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices. PMID:24587119

  5. Cortical potentials in an auditory oddball task reflect individual differences in working memory capacity.

    Science.gov (United States)

    Yurgil, Kate A; Golob, Edward J

    2013-12-01

    This study determined whether auditory cortical responses associated with mechanisms of attention vary with individual differences in working memory capacity (WMC) and perceptual load. The operation span test defined subjects with low versus high WMC, who then discriminated target/nontarget tones while EEG was recorded. Infrequent white noise distracters were presented at midline or ±90° locations, and perceptual load was manipulated by varying nontarget frequency. Amplitude of the N100 to distracters was negatively correlated with WMC. Relative to targets, only high WMC subjects showed attenuated N100 amplitudes to nontargets. In the higher WMC group, increased perceptual load was associated with decreased P3a amplitudes to distracters and longer-lasting negative slow wave to nontargets. Results show that auditory cortical processing is associated with multiple facets of attention related to WMC and possibly higher-level cognition. Copyright © 2013 Society for Psychophysiological Research.

  6. Crossmodal plasticity in auditory, visual and multisensory cortical areas following noise-induced hearing loss in adulthood.

    Science.gov (United States)

    Schormans, Ashley L; Typlt, Marei; Allman, Brian L

    2017-01-01

    Complete or partial hearing loss results in an increased responsiveness of neurons in the core auditory cortex of numerous species to visual and/or tactile stimuli (i.e., crossmodal plasticity). At present, however, it remains uncertain how adult-onset partial hearing loss affects higher-order cortical areas that normally integrate audiovisual information. To that end, extracellular electrophysiological recordings were performed under anesthesia in noise-exposed rats two weeks post-exposure (0.8-20 kHz at 120 dB SPL for 2 h) and age-matched controls to characterize the nature and extent of crossmodal plasticity in the dorsal auditory cortex (AuD), an area outside of the auditory core, as well as in the neighboring lateral extrastriate visual cortex (V2L), an area known to contribute to audiovisual processing. Computer-generated auditory (noise burst), visual (light flash) and combined audiovisual stimuli were delivered, and the associated spiking activity was used to determine the response profile of each neuron sampled (i.e., unisensory, subthreshold multisensory or bimodal). In both the AuD cortex and the multisensory zone of the V2L cortex, the maximum firing rates were unchanged following noise exposure, and there was a relative increase in the proportion of neurons responsive to visual stimuli, with a concomitant decrease in the number of neurons that were solely responsive to auditory stimuli despite adjusting the sound intensity to account for each rat's hearing threshold. These neighboring cortical areas differed, however, in how noise-induced hearing loss affected audiovisual processing; the total proportion of multisensory neurons significantly decreased in the V2L cortex (control 38.8 ± 3.3% vs. noise-exposed 27.1 ± 3.4%), and dramatically increased in the AuD cortex (control 23.9 ± 3.3% vs. noise-exposed 49.8 ± 6.1%). Thus, following noise exposure, the cortical area showing the greatest relative degree of multisensory convergence

  7. Plasticity in the Primary Auditory Cortex, Not What You Think it is: Implications for Basic and Clinical Auditory Neuroscience

    Science.gov (United States)

    Weinberger, Norman M.

    2013-01-01

    Standard beliefs that the function of the primary auditory cortex (A1) is the analysis of sound have proven to be incorrect. Its involvement in learning, memory and other complex processes in both animals and humans is now well-established, although often not appreciated. Auditory coding is strongly modified by associative learning, evident as associative representational plasticity (ARP) in which the representation of an acoustic dimension, like frequency, is re-organized to emphasize a sound that has become behaviorally important. For example, the frequency tuning of a cortical neuron can be shifted to match that of a significant sound and the representational area of sounds that acquire behavioral importance can be increased. ARP depends on the learning strategy used to solve an auditory problem and the increased cortical area confers greater strength of auditory memory. Thus, primary auditory cortex is involved in cognitive processes, transcending its assumed function of auditory stimulus analysis. The implications for basic neuroscience and clinical auditory neuroscience are presented and suggestions for remediation of auditory processing disorders are introduced. PMID:25356375

  8. c-Fos and Arc/Arg3.1 expression in auditory and visual cortices after hearing loss: Evidence of sensory crossmodal reorganization in adult rats.

    Science.gov (United States)

    Pernia, M; Estevez, S; Poveda, C; Plaza, I; Carro, J; Juiz, J M; Merchan, M A

    2017-08-15

    Cross-modal reorganization in the auditory and visual cortices has been reported after hearing and visual deficits mostly during the developmental period, possibly underlying sensory compensation mechanisms. However, there are very few data on the existence or nature and timeline of such reorganization events during sensory deficits in adulthood. In this study, we assessed long-term changes in activity-dependent immediate early genes c-Fos and Arc/Arg3.1 in auditory and neighboring visual cortical areas after bilateral deafness in young adult rats. Specifically, we analyzed qualitatively and quantitatively c-Fos and Arc/Arg3.1 immunoreactivity at 15 and 90 days after cochlea removal. We report extensive, global loss of c-Fos and Arc/Arg3.1 immunoreactive neurons in the auditory cortex 15 days after permanent auditory deprivation in adult rats, which is partly reversed 90 days after deafness. Simultaneously, the number and labeling intensity of c-Fos- and Arc/Arg3.1-immunoreactive neurons progressively increase in neighboring visual cortical areas from 2 weeks after deafness and these changes stabilize three months after inducing the cochlear lesion. These findings support plastic, compensatory, long-term changes in activity in the auditory and visual cortices after auditory deprivation in the adult rats. Further studies may clarify whether those changes result in perceptual potentiation of visual drives on auditory regions of the adult cortex. © 2017 The Authors The Journal of Comparative Neurology Published by Wiley Periodicals, Inc.

  9. Cortical Auditory Disorders: A Case of Non-Verbal Disturbances Assessed with Event-Related Brain Potentials

    Directory of Open Access Journals (Sweden)

    Sönke Johannes

    1998-01-01

    Full Text Available In the auditory modality, there has been a considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits whereas pure deficits are rare. Absolute pitch and musicians’ musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19–30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings that contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing are discussed and related to the literature concerning cortical auditory disorders.

  10. Cortical auditory disorders: a case of non-verbal disturbances assessed with event-related brain potentials.

    Science.gov (United States)

    Johannes, Sönke; Jöbges, Michael E.; Dengler, Reinhard; Münte, Thomas F.

    1998-01-01

    In the auditory modality, there has been a considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings that contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing are discussed and related to the literature concerning cortical auditory disorders.

  11. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  12. Comparing Intrinsic Connectivity Models for the Primary Auditory Cortices

    Science.gov (United States)

    Hamid, Khairiah Abdul; Yusoff, Ahmad Nazlim; Mohamad, Mazlyfarina; Hamid, Aini Ismafairus Abd; Manan, Hanani Abd

    2010-07-01

    This fMRI study models the intrinsic connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in the human primary auditory cortices. Ten healthy male subjects participated and were required to listen to a white noise stimulus during the fMRI scans. Two intrinsic connectivity models comprising bilateral HG and STG were constructed using statistical parametric mapping (SPM) and dynamic causal modeling (DCM). The group Bayes factor (GBF), positive evidence ratio (PER) and Bayesian model selection (BMS) for group studies were used in model comparison. Group results indicated significant bilateral asymmetrical activation (uncorrected p < 0.001) in HG and STG. Comparison results showed strong evidence for Model 2 (STG as the input center) as the preferred model, with a GBF value of 5.77 × 10^73. The model was preferred by 6 out of 10 subjects, and this was supported by the BMS results for group studies. A one-sample t-test on connection values obtained from Model 2 indicated unidirectional parallel connections from STG to bilateral HG (p < 0.05). Model 2 was determined to be the most probable intrinsic connectivity model between bilateral HG and STG when listening to white noise.
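    The model comparison above rests on Bayes factors aggregated across subjects. Below is a minimal sketch of how a group Bayes factor and a positive evidence ratio can be derived from per-subject log model evidences, following common DCM conventions; the log-evidence values and the evidence threshold are illustrative assumptions, not numbers from the study.

```python
import numpy as np

# Hypothetical per-subject log model evidences (10 subjects x 2 models)
log_ev = np.array([
    [-410.2, -392.5], [-388.0, -371.3], [-402.7, -401.9], [-415.4, -390.1],
    [-397.8, -396.9], [-405.0, -384.6], [-399.1, -401.0], [-410.9, -388.8],
    [-393.3, -394.1], [-407.5, -381.0],
])

log_bf = log_ev[:, 1] - log_ev[:, 0]          # per-subject log Bayes factor, Model 2 vs. Model 1
group_bayes_factor = np.exp(log_bf.sum())     # GBF = product of subject-level Bayes factors

# Positive evidence ratio: subjects with positive evidence for Model 2 (BF > 3)
# relative to subjects with positive evidence for Model 1 (BF < 1/3)
per = np.sum(log_bf > np.log(3)) / max(np.sum(log_bf < -np.log(3)), 1)

print(f"GBF (Model 2 vs. Model 1): {group_bayes_factor:.3g}")
print(f"Positive evidence ratio: {per:.1f}")
```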

  13. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the fact that there were more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A (p < 0.05). Thus, the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than in the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Binaural fusion and the representation of virtual pitch in the human auditory cortex.

    Science.gov (United States)

    Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E

    1996-10-01

    The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.

  15. Prepulse inhibition of auditory change-related cortical responses

    Directory of Open Access Journals (Sweden)

    Inui Koji

    2012-10-01

    Full Text Available Abstract Background Prepulse inhibition (PPI) of the startle response is an important tool to investigate the biology of schizophrenia. PPI is usually observed by use of a startle reflex such as blinking following an intense sound. A similar phenomenon has not been reported for cortical responses. Results In 12 healthy subjects, change-related cortical activity in response to an abrupt increase of sound pressure by 5 dB above the background of 65 dB SPL (test stimulus) was measured using magnetoencephalography. The test stimulus evoked a clear cortical response peaking at around 130 ms (Change-N1m). In Experiment 1, effects of the intensity of a prepulse (0.5 ~ 5 dB) on the test response were examined using a paired stimulation paradigm. In Experiment 2, effects of the interval between the prepulse and test stimulus were examined using interstimulus intervals (ISIs) of 50 ~ 350 ms. When the test stimulus was preceded by the prepulse, the Change-N1m was more strongly inhibited by a stronger prepulse (Experiment 1) and a shorter ISI prepulse (Experiment 2). In addition, the amplitude of the test Change-N1m correlated positively with both the amplitude of the prepulse-evoked response and the degree of inhibition, suggesting that subjects who are more sensitive to the auditory change are more strongly inhibited by the prepulse. Conclusions Since Change-N1m is easy to measure and control, it would be a valuable tool to investigate mechanisms of sensory gating or the biology of certain mental diseases such as schizophrenia.
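    Prepulse inhibition of the Change-N1m is typically summarized as a percent reduction of the test-alone response. Below is a minimal sketch of that calculation under hypothetical amplitude values; the prepulse intensities and ISIs echo the ranges mentioned in the abstract, but the amplitudes are invented for illustration.

```python
def percent_inhibition(test_alone, test_with_prepulse):
    """PPI expressed as percent reduction of the test-alone response amplitude."""
    return 100.0 * (1.0 - test_with_prepulse / test_alone)

test_alone = 30.0                                            # hypothetical Change-N1m amplitude
by_prepulse_db = {0.5: 27.0, 1: 25.5, 2: 22.0, 5: 15.0}      # Experiment 1: prepulse intensity
by_isi_ms = {50: 14.0, 150: 18.5, 250: 23.0, 350: 26.0}      # Experiment 2: prepulse-test ISI

for db, amp in by_prepulse_db.items():
    print(f"prepulse {db:>4} dB : PPI = {percent_inhibition(test_alone, amp):5.1f} %")
for isi, amp in by_isi_ms.items():
    print(f"ISI {isi:>4} ms     : PPI = {percent_inhibition(test_alone, amp):5.1f} %")
```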

  16. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors 0270-6474/14/3415135-04$15.00/0.

  17. Auditory cortical activation and plasticity after cochlear implantation measured by PET using fluorodeoxyglucose.

    Science.gov (United States)

    Łukaszewicz-Moszyńska, Zuzanna; Lachowska, Magdalena; Niemczyk, Kazimierz

    2014-01-01

    The purpose of this study was to evaluate possible relationships between duration of cochlear implant use and results of positron emission tomography (PET) measurements in the temporal lobes performed while subjects listened to speech stimuli. Other aspects investigated were whether implantation side impacts significantly on cortical representations of functions related to understanding speech (ipsi- or contralateral to the implanted side) and whether any correlation exists between cortical activation and speech therapy results. Objective cortical responses to acoustic stimulation were measured, using PET, in nine cochlear implant patients (age range: 15 to 50 years). All the patients suffered from bilateral deafness, were right-handed, and had no additional neurological deficits. They underwent PET imaging three times: immediately after the first fitting of the speech processor (activation of the cochlear implant), and one and two years later. A tendency towards increasing levels of activation in areas of the primary and secondary auditory cortex on the left side of the brain was observed. There was no clear effect of the side of implantation (left or right) on the degree of cortical activation in the temporal lobe. However, the PET results showed a correlation between degree of cortical activation and speech therapy results.

  18. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    Science.gov (United States)

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. Results of new modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes well-predict a high-voice superiority across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human

  19. State-dependent changes in auditory sensory gating in different cortical areas in rats.

    Directory of Open Access Journals (Sweden)

    Renli Qi

    Full Text Available Sensory gating is a process in which the brain's response to a repetitive stimulus is attenuated; it is thought to contribute to information processing by enabling organisms to filter extraneous sensory inputs from the environment. To date, sensory gating has typically been used to determine whether brain function is impaired, such as in individuals with schizophrenia or addiction. In healthy subjects, sensory gating is sensitive to a subject's behavioral state, such as acute stress and attention. The cortical response to sensory stimulation significantly decreases during sleep; however, information processing continues throughout sleep, and an auditory evoked potential (AEP) can be elicited by sound. It is not known whether sensory gating changes during sleep. Sleep is a non-uniform process in the whole brain with regional differences in neural activities. Thus, another question arises concerning whether sensory gating changes are uniform in different brain areas from waking to sleep. To address these questions, we used the sound stimuli of a conditioning-testing paradigm to examine sensory gating during waking, rapid eye movement (REM) sleep and non-REM (NREM) sleep in different cortical areas in rats. We demonstrated the following: 1. Auditory sensory gating was affected by vigilant states in the frontal and parietal areas but not in the occipital areas. 2. Auditory sensory gating decreased in NREM sleep but not REM sleep from waking in the frontal and parietal areas. 3. The decreased sensory gating in the frontal and parietal areas during NREM sleep was the result of a significant increase in the test sound amplitude.
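    In a conditioning-testing (paired-stimulus) paradigm, sensory gating is conventionally expressed as the test/conditioning (T/C) amplitude ratio, with smaller ratios indicating stronger gating. A small sketch of that computation under assumed amplitudes; the areas, states, and numbers below are illustrative, not the study's data.

```python
def tc_ratio(conditioning_amp, test_amp):
    """Sensory gating as the test/conditioning amplitude ratio (smaller = stronger gating)."""
    return test_amp / conditioning_amp

# Hypothetical AEP amplitudes (conditioning, test) per area and vigilance state
amplitudes = {
    ("frontal", "wake"): (12.0, 4.8),
    ("frontal", "NREM"): (14.0, 9.8),
    ("frontal", "REM"): (11.5, 4.9),
    ("occipital", "wake"): (8.0, 4.0),
    ("occipital", "NREM"): (9.0, 4.6),
}
for (area, state), (c, t) in amplitudes.items():
    print(f"{area:9s} {state:4s}  T/C = {tc_ratio(c, t):.2f}")
```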

  20. The influence of speech stimuli contrast in cortical auditory evoked potentials

    Directory of Open Access Journals (Sweden)

    Kátia de Freitas Alvarenga

    2013-06-01

    Full Text Available Studies of cortical auditory evoked potentials using speech stimuli in normal-hearing individuals are important for understanding how stimulus complexity influences the characteristics of the auditory cognitive potential generated. OBJECTIVE: To characterize the cortical auditory evoked potential and the P3 auditory cognitive potential elicited with vowel and consonant contrast stimuli in normally hearing individuals. METHOD: 31 individuals with no risk for hearing, neurologic and language alterations, in the age range between 7 and 30 years, participated in this study. The cortical auditory evoked potentials and the P3 auditory cognitive potential were recorded at the Fz and Cz active channels using the consonant (/ba/-/da/) and vowel (/i/-/a/) speech contrasts. DESIGN: Prospective cross-sectional cohort study. RESULTS: There was a difference between the speech contrast used and the latencies of the N2 (p = 0.00) and P3 (p = 0.00) components, as well as between the active channel considered (Fz/Cz) and the P3 latency and amplitude values. These differences did not occur for the exogenous components N1 and P2. CONCLUSION: The contrast of the speech stimulus, vowel or consonant, should be considered in the analysis of the cortical auditory evoked potential (N2 component) and of the P3 auditory cognitive potential.

  1. Echoic Memory: Investigation of Its Temporal Resolution by Auditory Offset Cortical Responses

    OpenAIRE

    Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke

    2014-01-01

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temp...

  2. Temporal Sequence of Visuo-Auditory Interaction in Multiple Areas of the Guinea Pig Visual Cortex

    Science.gov (United States)

    Nishimura, Masataka; Song, Wen-Jie

    2012-01-01

    Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activities were recorded using optical imaging at a high spatiotemporal resolution from multiple areas of the guinea pig visual cortex in response to visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated as the difference between responses evoked by combined auditory and visual stimulation and the sum of responses evoked by separate visual and auditory stimulation. Simultaneous presentation of visual and acoustic stimulations resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction. PMID:23029483

  3. Musical Expectations Enhance Auditory Cortical Processing in Musicians: A Magnetoencephalography Study.

    Science.gov (United States)

    Park, Jeong Mi; Chung, Chun Kee; Kim, June Sic; Lee, Kyung Myun; Seol, Jaeho; Yi, Suk Won

    2018-01-15

    The present study investigated the influence of musical expectations on auditory representations in musicians and non-musicians using magnetoencephalography (MEG). Neuroscientific studies have demonstrated that musical syntax is processed in the inferior frontal gyri, eliciting an early right anterior negativity (ERAN), and anatomical evidence has shown that interconnections occur between the frontal cortex and the belt and parabelt regions in the auditory cortex (AC). Therefore, we anticipated that musical expectations would mediate neural activities in the AC via an efferent pathway. To test this hypothesis, we measured the auditory-evoked fields (AEFs) of seven musicians and seven non-musicians while they were listening to a five-chord progression in which the expectancy of the third chord was manipulated (highly expected, less expected, and unexpected). The results revealed that highly expected chords elicited shorter N1m (negative AEF at approximately 100 ms) and P2m (positive AEF at approximately 200 ms) latencies and larger P2m amplitudes in the AC than less-expected and unexpected chords. The relations between P2m amplitudes/latencies and harmonic expectations were similar between the groups; however, musicians' results were more remarkable than those of non-musicians. These findings suggest that auditory cortical processing is enhanced by musical knowledge and long-term training in a top-down manner, which is reflected in shortened N1m and P2m latencies and enhanced P2m amplitudes in the AC. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, F.; Kinnaird, C.; Wood, S.; Bloomberg, J.; Mulavara, A.; Seidler, R.

    2016-01-01

    The current study characterizes brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit either the vestibulo-spinal reflex (saccular-mediated cervical Vestibular Evoked Myogenic Potentials (cVEMP)), or the ocular muscle response (utricle-mediated ocular VEMP (oVEMP)). Some researchers have reported that air-conducted skull tap elicits both saccular and utricle-mediated VEMPs, while being faster and less irritating for the subjects. However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent from semicircular canals. This is of high importance for studying otolith-specific deficits, including gait and balance problems that astronauts experience upon returning to earth. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation. Here we hypothesized that skull taps elicit similar patterns of cortical activity as the auditory tone bursts, and previous vestibular imaging studies. Subjects wore bilateral MR compatible skull tappers and headphones inside the 3T GE scanner, while lying in the supine position, with eyes closed. Subjects received both forms of the stimulation in a counterbalanced fashion. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular system, resulting in the vestibular cortical response. Auditory tone bursts were also delivered for comparison. To validate our stimulation method, we measured the ocular VEMP outside of the scanner. This measurement showed that both skull tap and auditory

  5. Improvement of auditory hallucinations and reduction of primary auditory area's activation following TMS

    International Nuclear Information System (INIS)

    Giesel, Frederik L.; Mehndiratta, Amit; Hempel, Albrecht; Hempel, Eckhard; Kress, Kai R.; Essig, Marco; Schröder, Johannes

    2012-01-01

    Background: In the present case study, improvement of auditory hallucinations following transcranial magnetic stimulation (TMS) therapy was investigated with respect to activation changes of the auditory cortices. Methods: Using functional magnetic resonance imaging (fMRI), activation of the auditory cortices was assessed prior to and after a 4-week TMS series of the left superior temporal gyrus in a schizophrenic patient with medication-resistant auditory hallucinations. Results: Hallucinations decreased slightly after the third and profoundly after the fourth week of TMS. Activation in the primary auditory area decreased, whereas activation in the operculum and insula remained stable. Conclusions: Combination of TMS and repetitive fMRI is promising to elucidate the physiological changes induced by TMS.

  6. Auditory interfaces: The human perceiver

    Science.gov (United States)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  7. Direct recordings from the auditory cortex in a cochlear implant user.

    Science.gov (United States)

    Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A

    2013-06-01

    Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings.

  8. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  9. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences

    Directory of Open Access Journals (Sweden)

    Stanislava Knyazeva

    2018-01-01

    Full Text Available This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman’s population separation model of auditory streaming.

  10. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences.

    Science.gov (United States)

    Knyazeva, Stanislava; Selezneva, Elena; Gorkin, Alexander; Aggelopoulos, Nikolaos C; Brosch, Michael

    2018-01-01

    This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman's population separation model of auditory streaming.

  11. Multivariate sensitivity to voice during auditory categorization.

    Science.gov (United States)

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.
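    The record above describes multivariate pattern classification of voice versus nonvoice categories. Below is a generic cross-validated decoding sketch in that spirit, using synthetic voxel patterns; the ROI size, trial counts, classifier, and injected "voice" signal are assumptions for illustration and do not reproduce the authors' signal-detection framework.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic single-trial voxel patterns: 120 trials x 200 voxels,
# label 1 = human voice, label 0 = nonvoice category
n_trials, n_voxels = 120, 200
labels = rng.integers(0, 2, n_trials)
patterns = rng.normal(0, 1, (n_trials, n_voxels))
patterns[labels == 1, :20] += 0.8            # weak "voice" signal in a subset of voxels

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
accuracy = cross_val_score(clf, patterns, labels, cv=5)   # 5-fold cross-validation
print(f"voice vs. nonvoice decoding accuracy: {accuracy.mean():.2f} +/- {accuracy.std():.2f}")
```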

  12. Using fMRI to Detect Activation of the Cortical and Subcortical Auditory Centers: Development of a Standard Protocol for a Conventional 1.5-T MRI Scanner

    International Nuclear Information System (INIS)

    Tae, Woo Suk; Kim, Sam Soo; Lee, Kang Uk; Lee, Seung Hwan; Nam, Eui Cheol; Choi, Hyun Kyung

    2009-01-01

    We wanted to develop a standard protocol for auditory functional magnetic resonance imaging (fMRI) for detecting blood oxygenation level-dependent (BOLD) responses at the cortical and subcortical auditory centers using a 1.5-T MRI scanner. Fourteen normal volunteers were enrolled in the study. The subjects were stimulated by four repetitions of 32 sec each with broadband white noise and silent period blocks as a run (34 echo planar images [EPIs]). Multiple regression analysis for the individual analysis and one-sample t-tests for the group analysis were applied (FDR, p <0.05). The auditory cortex was activated in most of the volunteers (left 100% and right 92.9% at an uncorrected p value <0.05, and left 92.9% and right 92.9% at an uncorrected p value <0.01). The cochlear nuclei (100%, 85.7%), inferior colliculi (71.4%, 64.3%), medial geniculate bodies (64.3%, 35.7%) and superior olivary complexes (35.7%, 35.7%) showed significant BOLD responses at uncorrected p values of <0.05 and p <0.01, respectively. On the group analysis, the cortical and subcortical auditory centers showed significant BOLD responses (FDR, p <0.05), except for the superior olivary complex. The signal intensity time courses of the auditory centers showed biphasic wave forms. We successfully visualized BOLD responses at the cortical and subcortical auditory centers using appropriate sound stimuli and an image acquisition method with a 1.5-T MRI scanner.
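    Block-design analyses of this kind regress each voxel's time course on a task regressor built by convolving the block boxcar with a haemodynamic response function. Below is a minimal single-voxel sketch under assumed timing (TR, block order) and a simplified double-gamma HRF; none of the parameters are taken from the protocol above.

```python
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 34                          # 34 EPI volumes per run; TR assumed to be 2 s
frame_times = np.arange(n_scans) * TR
boxcar = (((frame_times // 32) % 2) == 0).astype(float)   # alternating 32-s noise/silence blocks

def hrf(t, peak=6.0, under=16.0, ratio=6.0):
    """Simplified canonical double-gamma haemodynamic response function."""
    return gamma.pdf(t, peak) - gamma.pdf(t, under) / ratio

regressor = np.convolve(boxcar, hrf(np.arange(0, 32, TR)))[:n_scans]

# GLM for one voxel: y = X @ beta + error, solved by ordinary least squares
X = np.column_stack([regressor, np.ones(n_scans)])         # task regressor + intercept
rng = np.random.default_rng(2)
y = 1.5 * regressor + 100 + rng.normal(0, 0.5, n_scans)    # synthetic voxel time course
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated task beta: {beta[0]:.2f}")
```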

  13. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    Science.gov (United States)

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the
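    Voxelwise elevation tuning curves of the kind described above can be summarized by fitting a tuning function to each voxel's responses across tested elevations. A minimal sketch with a single synthetic voxel and an assumed Gaussian tuning model; the elevation grid, noise level, and fitted parameters are illustrative, not the study's stimuli or results.

```python
import numpy as np
from scipy.optimize import curve_fit

elevations = np.array([-45, -30, -15, 0, 15, 30, 45], dtype=float)   # assumed test elevations (deg)

def gaussian_tuning(elev, amp, pref, width, baseline):
    """Gaussian elevation tuning: response peaks at the preferred elevation."""
    return baseline + amp * np.exp(-0.5 * ((elev - pref) / width) ** 2)

# Synthetic voxel broadly tuned toward lower elevations, plus measurement noise
rng = np.random.default_rng(3)
responses = gaussian_tuning(elevations, amp=1.0, pref=-30.0, width=40.0, baseline=0.2)
responses = responses + rng.normal(0, 0.05, elevations.size)

p0 = [responses.max() - responses.min(), elevations[np.argmax(responses)], 30.0, responses.min()]
(amp, pref, width, base), _ = curve_fit(gaussian_tuning, elevations, responses, p0=p0)
print(f"preferred elevation: {pref:.1f} deg, tuning width (sigma): {abs(width):.1f} deg")
```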

  14. Primate Auditory Recognition Memory Performance Varies With Sound Type

    OpenAIRE

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-01-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g. social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition, and/or memory. The present study employs a de...

  15. Auditory and visual connectivity gradients in frontoparietal cortex.

    Science.gov (United States)

    Braga, Rodrigo M; Hellyer, Peter J; Wise, Richard J S; Leech, Robert

    2017-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long-range connections to sensory cortices. Hum Brain Mapp 38:255-270, 2017. © 2016 Wiley Periodicals, Inc. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  16. Cortical evoked potentials to an auditory illusion: binaural beats.

    Science.gov (United States)

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi

    2009-08-01

    To define brain activity corresponding to an auditory illusion of 3 and 6Hz binaural beats in 250Hz or 1000Hz base frequencies, and compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000Hz to one ear and 3 or 6Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3Hz and 6Hz, in base frequencies of 250Hz and 1000Hz. Tones were 2000ms in duration and presented with approximately 1s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P(50), N(100) and P(200) components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and to the low beat frequency. Sources of the beats-evoked oscillations across all stimulus conditions located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P(50) had significantly different sources than the beats-evoked oscillations; and N(100) and P(200) sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from left and right ear converges and interacts in the central auditory brainstem pathways to generate beats of neural activity to modulate activities in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low frequency beats can be recorded from the scalp.
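    The binaural-beat stimuli used in studies like this one are simply two pure tones, one per ear, whose frequencies differ by the beat rate; the beat exists only in the central interaction of the two monaural signals. A minimal generation sketch; the two-second duration and the base/beat frequencies follow the abstract, while the sampling rate and everything else are assumptions.

```python
import numpy as np

def binaural_beat(base_hz=250.0, beat_hz=3.0, dur_s=2.0, fs=44100):
    """Stereo signal: left ear gets base_hz, right ear gets base_hz + beat_hz."""
    t = np.arange(int(dur_s * fs)) / fs
    left = np.sin(2 * np.pi * base_hz * t)
    right = np.sin(2 * np.pi * (base_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)       # shape (n_samples, 2) for stereo playback

# The four conditions described in the abstract: 3 and 6 Hz beats on 250 and 1000 Hz bases
stimuli = {(base, beat): binaural_beat(base, beat)
           for base in (250.0, 1000.0) for beat in (3.0, 6.0)}
print({cond: sig.shape for cond, sig in stimuli.items()})
```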

  17. Inter-trial coherence as a marker of cortical phase synchrony in children with sensorineural hearing loss and auditory neuropathy spectrum disorder fitted with hearing aids and cochlear implants

    Science.gov (United States)

    Nash-Kille, Amy; Sharma, Anu

    2014-01-01

    Objective Although brainstem dys-synchrony is a hallmark of children with auditory neuropathy spectrum disorder (ANSD), little is known about how the lack of neural synchrony manifests at more central levels. We used time-frequency single-trial EEG analyses (i.e., inter-trial coherence; ITC), to examine cortical phase synchrony in children with normal hearing (NH), sensorineural hearing loss (SNHL) and ANSD. Methods Single trial time-frequency analyses were performed on cortical auditory evoked responses from 41 NH children, 91 children with ANSD and 50 children with SNHL. The latter two groups included children who received intervention via hearing aids and cochlear implants. ITC measures were compared between groups as a function of hearing loss, intervention type, and cortical maturational status. Results In children with SNHL, ITC decreased as severity of hearing loss increased. Children with ANSD revealed lower levels of ITC relative to children with NH or SNHL, regardless of intervention. Children with ANSD who received cochlear implants showed significant improvements in ITC with increasing experience with their implants. Conclusions Cortical phase coherence is significantly reduced as a result of both severe-to-profound SNHL and ANSD. Significance ITC provides a window into the brain oscillations underlying the averaged cortical auditory evoked response. Our results provide a first description of deficits in cortical phase synchrony in children with SNHL and ANSD. PMID:24360131
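    Inter-trial coherence is the length of the mean phase vector across single trials: 0 for random phase, 1 for perfect phase locking. Below is a minimal sketch computing ITC for one band-passed channel via the Hilbert transform, on synthetic epochs; the band limits, sampling rate, and simulated 6 Hz burst are assumptions, and the study's actual analysis used single-trial time-frequency decomposition of cortical auditory evoked responses.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def inter_trial_coherence(epochs, fs, band=(4.0, 8.0)):
    """ITC of band-limited single-trial phase at each time point (0 = random, 1 = locked)."""
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    filtered = filtfilt(b, a, epochs, axis=1)           # epochs: (n_trials, n_samples)
    phase = np.angle(hilbert(filtered, axis=1))
    return np.abs(np.exp(1j * phase).mean(axis=0))      # mean phase vector length over trials

# Synthetic epochs: a phase-locked 6 Hz burst from 50-450 ms buried in noise
fs, n_trials = 500, 80
t = np.arange(-0.1, 0.6, 1 / fs)
rng = np.random.default_rng(4)
burst = np.sin(2 * np.pi * 6 * t) * ((t > 0.05) & (t < 0.45))
epochs = 0.5 * burst + rng.normal(0, 1.0, (n_trials, t.size))

itc = inter_trial_coherence(epochs, fs, band=(4, 8))
print(f"peak ITC: {itc.max():.2f} at t = {t[itc.argmax()] * 1000:.0f} ms")
```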

  18. The role of primary auditory and visual cortices in temporal processing: A tDCS approach.

    Science.gov (United States)

    Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F

    2016-10-15

    Many studies showed that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study is to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight University students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300ms (short) and 900ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration but we observed higher temporal variability under anodic stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability was observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality independent role of A1 in temporal processing and a modality specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
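    In a time bisection task, performance is usually summarized by fitting a psychometric function to the proportion of "long" responses across probe durations; the bisection point is the duration judged "long" half the time, and variability can be expressed as a difference limen or Weber ratio. A minimal sketch with a logistic fit; the probe durations span the 300-900 ms standards from the abstract, but the response proportions are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(d, bp, slope):
    """Probability of a 'long' response as a function of probe duration d (ms)."""
    return 1.0 / (1.0 + np.exp(-(d - bp) / slope))

durations = np.array([300, 400, 500, 600, 700, 800, 900], dtype=float)   # probe durations (ms)
p_long = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.98])            # hypothetical data

(bp, slope), _ = curve_fit(logistic, durations, p_long, p0=[600.0, 80.0])
dl = slope * np.log(3)                     # half the 25%-75% range of the fitted logistic
print(f"bisection point: {bp:.0f} ms, difference limen: {dl:.0f} ms, Weber ratio: {dl / bp:.2f}")
```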

  19. Visual, cortical somatosensory and brainstem auditory evoked potentials following incidental irradiation of the rhombencephalon

    Energy Technology Data Exchange (ETDEWEB)

    Nightingale, S. (Royal Victoria Infirmary, Newcastle upon Tyne (UK)); Schofield, I.S.; Dawes, P.J.D.K. (Newcastle upon Tyne Univ. (UK). Newcastle General Hospital)

    1984-01-01

    Visual, cortical somatosensory and brainstem auditory evoked potentials were recorded before incidental irradiation of the rhombencephalon during radiotherapy in and around the middle ear, and at 11 weeks and eight months after completion of treatment. No patient experienced neurological symptoms during this period. No consistent changes in evoked potentials were found. The failure to demonstrate subclinical radiation-induced demyelination suggests either that the syndrome of early-delayed radiation rhombencephalopathy occurs in an idiosyncratic manner, or that any subclinical lesions are not detectable by serial evoked potential recordings.

  20. Human cortical responses to slow and fast binaural beats reveal multiple mechanisms of binaural hearing.

    Science.gov (United States)

    Ross, Bernhard; Miyazaki, Takahiro; Thompson, Jessica; Jamali, Shahab; Fujioka, Takako

    2014-10-15

    When two tones with slightly different frequencies are presented to both ears, they interact in the central auditory system and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound, which is moving across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at a continuously varying rate between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate. Responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. We suggest that this difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations. However, the responses were largest at 40-Hz stimulation. We propose that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. Systematic phase differences between bilateral responses suggest that separate sound representations of a sound object exist in the left and right auditory cortices. We conclude that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing whereas the identification of sound location requires further interpretation and is limited by the rate of object representations. Copyright © 2014 the American Physiological Society.

  1. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    Science.gov (United States)

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues-particularly interaural time and level differences (ITD and ILD)-that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and-critically-for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.

  2. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  3. Visual, cortical somatosensory and brainstem auditory evoked potentials following incidental irradiation of the rhombencephalon

    International Nuclear Information System (INIS)

    Nightingale, S.; Schofield, I.S.; Dawes, P.J.D.K.

    1984-01-01

    Visual, cortical somatosensory and brainstem auditory evoked potentials were recorded before incidental irradiation of the rhombencephalon during radiotherapy in and around the middle ear, and at 11 weeks and eight months after completion of treatment. No patient experienced neurological symptoms during this period. No consistent changes in evoked potentials were found. The failure to demonstrate subclinical radiation-induced demyelination suggests either that the syndrome of early-delayed radiation rhombencephalopathy occurs in an idiosyncratic manner, or that any subclinical lesions are not detectable by serial evoked potential recordings. (author)

  4. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena that can be directly caused by acute stroke, mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  5. Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Koji Inui

    Full Text Available Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflected inhibition via interneurons using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak (5 dB increase for 1 ms) prepulse. The time course of the inhibition evaluated by prepulses presented at 10-800 ms before the test stimulus showed at least two temporally distinct inhibitions peaking at approximately 20-60 and 600 ms that presumably reflected IPSPs by fast spiking, parvalbumin-positive cells and somatostatin-positive, Martinotti cells, respectively. In another experiment, we confirmed that the degree of the inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and prepulse-evoked inhibition reflected activation in two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.
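
    The record does not give the exact formula used to express the inhibition, but prepulse inhibition is conventionally quantified as the percentage reduction of the test response caused by the prepulse. A minimal sketch of that convention, with hypothetical amplitudes, follows.

      import numpy as np

      def prepulse_inhibition(test_amp_alone, test_amp_with_prepulse):
          """Percent reduction of the test response caused by the prepulse."""
          return 100.0 * (1.0 - test_amp_with_prepulse / test_amp_alone)

      # Hypothetical peak amplitudes (e.g., in fT) without and with a prepulse presented
      # ~40 ms before the test stimulus, near the early inhibition peak described above.
      print(prepulse_inhibition(test_amp_alone=120.0, test_amp_with_prepulse=80.0))  # about 33.3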

  6. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany ePlakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  7. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  8. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  9. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, Fatemeh; Kinnaird, Catherine; Wood, Scott; Bloomberg, Jacob; Mulavara, Ajitkumar; Seidler, Rachael

    2014-01-01

    The aim of the current study was to characterize the brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit saccular Vestibular Evoked Myogenic Potentials (VEMP) (Colebatch & Halmagyi 1992; Colebatch et al. 1994). Some researchers have reported that air-conducted skull tap elicits both saccular and utricular VEMPs, while being faster and less irritating for the subjects (Curthoys et al. 2009, Wackym et al., 2012). However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent from semicircular canals. This is of high importance for studying the vestibular disorders related to otolith deficits. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, pre- and postcentral gyri, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation (Bottini et al., 1994; Dieterich et al., 2003; Emri et al., 2003; Schlindwein et al., 2008; Janzen et al., 2008). Here we hypothesized that the skull tap elicits a similar pattern of cortical activity to the auditory tone burst. Subjects put on a set of MR-compatible skull tappers and headphones inside the 3T GE scanner, while lying in the supine position, with eyes closed. All subjects received both forms of stimulation; however, the order of stimulation with auditory tone burst and air-conducted skull tap was counterbalanced across subjects. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular cortex, resulting in vestibular response (Halmagyi et al., 1995). Auditory tone bursts were also delivered for comparison. To validate

  10. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    Science.gov (United States)

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
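
    As a rough illustration of the spectral-amplitude measure mentioned above (not the authors' actual analysis pipeline), the sketch below estimates the amplitude of a brainstem frequency-following response at its fundamental and harmonics from an FFT. The 100-Hz fundamental, sampling rate, and synthetic waveform are assumed for the example.

      import numpy as np

      def harmonic_amplitudes(ffr, fs, f0, n_harmonics=6):
          """Spectral amplitude of the brainstem response at f0 and its harmonics."""
          spectrum = np.abs(np.fft.rfft(ffr)) / len(ffr)
          freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
          return [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(1, n_harmonics + 1)]

      # Hypothetical example: a 100-Hz fundamental with two decaying harmonics, sampled at 20 kHz.
      fs, f0 = 20000, 100.0
      t = np.arange(0, 0.2, 1.0 / fs)
      ffr = sum(a * np.sin(2 * np.pi * k * f0 * t) for k, a in enumerate([1.0, 0.5, 0.25], start=1))
      print(harmonic_amplitudes(ffr, fs, f0, n_harmonics=3))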

  11. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    Science.gov (United States)

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461

  12. Cortical thickness development of human primary visual cortex related to the age of blindness onset.

    Science.gov (United States)

    Li, Qiaojun; Song, Ming; Xu, Jiayuan; Qin, Wen; Yu, Chunshui; Jiang, Tianzi

    2017-08-01

    Blindness primarily induces structural alteration in the primary visual cortex (V1). Some studies have found that early blind subjects had a thicker V1 than sighted controls, whereas late blind subjects showed no significant differences in the V1. This implies that the age of blindness onset may exert significant effects on the development of cortical thickness of the V1. However, no previous research has used a trajectory of changes related to the age of blindness onset to investigate these effects. Here we explored this issue by mapping the cortical thickness trajectory of the V1 against the age of blindness onset using data from 99 blind individuals whose age of blindness onset ranged from birth to 34 years. We found that the cortical thickness of the V1 could be fitted well with a quadratic curve in both the left (F = 11.59, P = 3 × 10⁻⁵) and right hemispheres (F = 6.54, P = 2 × 10⁻³). Specifically, the cortical thickness of the V1 thinned rapidly during childhood and adolescence and did not change significantly thereafter. This trend was not observed in the primary auditory cortex (A1), primary motor cortex (M1), or primary somatosensory cortex (S1). These results provide evidence that an onset of blindness before adulthood significantly affects the cortical thickness of the V1 and suggest a critical period for cortical development of the human V1.
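
    The quadratic-trajectory analysis can be mimicked with a simple polynomial fit plus an F test of the quadratic model against an intercept-only model. The onset ages and thickness values below are invented for illustration; the authors' actual statistical model is not reproduced here.

      import numpy as np

      # Hypothetical data: V1 thickness (mm) against age of blindness onset (years).
      onset_age = np.array([0, 2, 5, 8, 12, 16, 20, 25, 30, 34], dtype=float)
      thickness = np.array([2.4, 2.35, 2.2, 2.1, 2.0, 1.95, 1.93, 1.92, 1.93, 1.92])

      b2, b1, b0 = np.polyfit(onset_age, thickness, deg=2)    # quadratic model
      fitted = np.polyval([b2, b1, b0], onset_age)

      # F test of the quadratic model against an intercept-only model, analogous in
      # spirit to the F statistics reported in the record.
      rss_full = np.sum((thickness - fitted) ** 2)
      rss_null = np.sum((thickness - thickness.mean()) ** 2)
      df1, df2 = 2, len(thickness) - 3
      F = ((rss_null - rss_full) / df1) / (rss_full / df2)
      print(round(F, 2))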

  13. Subcortical pathways: Towards a better understanding of auditory disorders.

    Science.gov (United States)

    Felix, Richard A; Gourévitch, Boris; Portfors, Christine V

    2018-05-01

    Hearing loss is a significant problem that affects at least 15% of the population. This percentage, however, is likely significantly higher because of a variety of auditory disorders that are not identifiable through traditional tests of peripheral hearing ability. In these disorders, individuals have difficulty understanding speech, particularly in noisy environments, even though the sounds are loud enough to hear. The underlying mechanisms leading to such deficits are not well understood. To enable the development of suitable treatments to alleviate or prevent such disorders, the affected processing pathways must be identified. Historically, mechanisms underlying speech processing have been thought to be a property of the auditory cortex and thus the study of auditory disorders has largely focused on cortical impairments and/or cognitive processes. As we review here, however, there is strong evidence to suggest that, in fact, deficits in subcortical pathways play a significant role in auditory disorders. In this review, we highlight the role of the auditory brainstem and midbrain in processing complex sounds and discuss how deficits in these regions may contribute to auditory dysfunction. We discuss current research with animal models of human hearing and then consider human studies that implicate impairments in subcortical processing that may contribute to auditory disorders. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Cortical plasticity as a mechanism for storing Bayesian priors in sensory perception.

    Science.gov (United States)

    Köver, Hania; Bao, Shaowen

    2010-05-05

    Human perception of ambiguous sensory signals is biased by prior experiences. It is not known how such prior information is encoded, retrieved and combined with sensory information by neurons. Previous authors have suggested dynamic encoding mechanisms for prior information, whereby top-down modulation of firing patterns on a trial-by-trial basis creates short-term representations of priors. Although such a mechanism may well account for perceptual bias arising in the short-term, it does not account for the often irreversible and robust changes in perception that result from long-term, developmental experience. Based on the finding that more frequently experienced stimuli gain greater representations in sensory cortices during development, we reasoned that prior information could be stored in the size of cortical sensory representations. For the case of auditory perception, we use a computational model to show that prior information about sound frequency distributions may be stored in the size of primary auditory cortex frequency representations, read-out by elevated baseline activity in all neurons and combined with sensory-evoked activity to generate a perception that conforms to Bayesian integration theory. Our results suggest an alternative neural mechanism for experience-induced long-term perceptual bias in the context of auditory perception. They make the testable prediction that the extent of such perceptual prior bias is modulated by both the degree of cortical reorganization and the magnitude of spontaneous activity in primary auditory cortex. Given that cortical over-representation of frequently experienced stimuli, as well as perceptual bias towards such stimuli is a common phenomenon across sensory modalities, our model may generalize to sensory perception, rather than being specific to auditory perception.
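
    A minimal sketch of the proposed read-out, under the simplifying assumption that the prior over sound frequency is proportional to the number of neurons tuned to each frequency and that it multiplies a Gaussian sensory likelihood: the posterior-mean estimate is pulled toward over-represented frequencies, as Bayesian integration predicts. All numbers below are illustrative, not parameters from the model in the record.

      import numpy as np

      freqs = np.linspace(1.0, 32.0, 256)                 # kHz, hypothetical tonotopic axis

      # Prior stored in map size: more neurons tuned near frequently experienced frequencies.
      neurons_per_freq = np.exp(-0.5 * ((freqs - 8.0) / 4.0) ** 2) + 0.05
      prior = neurons_per_freq / neurons_per_freq.sum()

      def perceived_frequency(observed_khz, sensory_sigma=2.0):
          """Posterior-mean estimate combining the stored prior with a noisy observation."""
          likelihood = np.exp(-0.5 * ((freqs - observed_khz) / sensory_sigma) ** 2)
          posterior = prior * likelihood
          posterior /= posterior.sum()
          return float(np.sum(freqs * posterior))

      # An ambiguous 12-kHz tone is pulled toward the over-represented 8-kHz region.
      print(perceived_frequency(12.0))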

  15. Acute auditory agnosia as the presenting hearing disorder in MELAS.

    Science.gov (United States)

    Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella

    2008-12-01

    MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.

  16. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans.

    Science.gov (United States)

    Cui, Zhuang; Wang, Qian; Gao, Yayue; Wang, Jing; Wang, Mengyang; Teng, Pengfei; Guan, Yuguang; Zhou, Jian; Li, Tianfu; Luan, Guoming; Li, Liang

    2017-01-01

    The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamic of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. While in later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, integrations between sensory, perceptional, attentional, motor, emotional, and executive processes.
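
    For readers unfamiliar with the directed-connectivity measure named here, the sketch below runs a generic time-domain Granger causality test on two simulated signals using statsmodels. It is only a schematic stand-in for the study's intracranial analysis; the signal names, lag order, and simulated coupling are assumptions.

      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(1)
      n = 500
      ac = rng.standard_normal(n)            # stand-in for auditory cortex (AC) activity
      itg = np.zeros(n)                      # stand-in for inferior temporal gyrus activity
      for t in range(2, n):
          # ITG depends on its own past and on AC two samples earlier (simulated coupling).
          itg[t] = 0.3 * itg[t - 1] + 0.6 * ac[t - 2] + rng.standard_normal()

      # Columns are (effect, putative cause): this asks whether AC helps predict ITG
      # beyond ITG's own past, i.e., AC -> ITG in the Granger sense.
      res = grangercausalitytests(np.column_stack([itg, ac]), maxlag=3)
      print(res[2][0]["ssr_ftest"])          # (F, p, ...) for the lag-2 model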

  17. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    Science.gov (United States)

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors 0270-6474/16/361620-11$15.00/0.

  18. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    Science.gov (United States)

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

    The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information is integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.

  19. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    Science.gov (United States)

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.

  1. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria eMedalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  2. Human Factors Military Lexicon: Auditory Displays

    National Research Council Canada - National Science Library

    Letowski, Tomasz

    2001-01-01

    .... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...

  3. Hearing an Illusory Vowel in Noise : Suppression of Auditory Cortical Activity

    NARCIS (Netherlands)

    Riecke, Lars; Vanbussel, Mieke; Hausfeld, Lars; Baskent, Deniz; Formisano, Elia; Esposito, Fabrizio

    2012-01-01

    Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review,

  4. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Science.gov (United States)

    Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke

    2014-01-01

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r2 for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.
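
    The latency/ISI relation and its r2 can be reproduced in miniature with a least-squares line. The Off-P50m latencies below are hypothetical values chosen only to mirror the monotonic lengthening of latency with ISI described in the record.

      import numpy as np

      # Hypothetical Off-P50m peak latencies (ms) for click-train ISIs of 2.5-40 ms.
      isi_ms = np.array([2.5, 5.0, 10.0, 20.0, 40.0])
      latency_ms = np.array([48.0, 50.5, 55.0, 64.5, 85.0])

      slope, intercept = np.polyfit(isi_ms, latency_ms, deg=1)
      predicted = slope * isi_ms + intercept
      r2 = 1.0 - np.sum((latency_ms - predicted) ** 2) / np.sum((latency_ms - latency_ms.mean()) ** 2)
      print(slope, intercept, round(r2, 3))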

  5. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Directory of Open Access Journals (Sweden)

    Makoto Nishihara

    Full Text Available Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r2 for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.

  6. Left auditory cortex gamma synchronization and auditory hallucination symptoms in schizophrenia

    Directory of Open Access Journals (Sweden)

    Shenton Martha E

    2009-07-01

    Full Text Available Abstract Background Oscillatory electroencephalogram (EEG) abnormalities may reflect neural circuit dysfunction in neuropsychiatric disorders. Previously we have found positive correlations between the phase synchronization of beta and gamma oscillations and hallucination symptoms in schizophrenia patients. These findings suggest that the propensity for hallucinations is associated with an increased tendency for neural circuits in sensory cortex to enter states of oscillatory synchrony. Here we tested this hypothesis by examining whether the 40 Hz auditory steady-state response (ASSR) generated in the left primary auditory cortex is positively correlated with auditory hallucination symptoms in schizophrenia. We also examined whether the 40 Hz ASSR deficit in schizophrenia was associated with cross-frequency interactions. Sixteen healthy control subjects (HC) and 18 chronic schizophrenia patients (SZ) listened to 40 Hz binaural click trains. The EEG was recorded from 60 electrodes and average-referenced offline. A 5-dipole model was fit from the HC grand average ASSR, with 2 pairs of superior temporal dipoles and a deep midline dipole. Time-frequency decomposition was performed on the scalp EEG and source data. Results Phase locking factor (PLF) and evoked power were reduced in SZ at fronto-central electrodes, replicating prior findings. PLF was reduced in SZ for non-homologous right and left hemisphere sources. Left hemisphere source PLF in SZ was positively correlated with auditory hallucination symptoms, and was modulated by delta phase. Furthermore, the correlations between source evoked power and PLF found in HC were reduced in SZ for the LH sources. Conclusion These findings suggest that differential neural circuit abnormalities may be present in the left and right auditory cortices in schizophrenia. In addition, they provide further support for the hypothesis that hallucinations are related to cortical hyperexcitability, which is manifested by
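
    Phase locking factor (also called inter-trial phase coherence) is the length of the mean unit phase vector across trials at a given frequency. The sketch below computes it at 40 Hz from single FFT bins of simulated epochs; the study itself used a full time-frequency decomposition, so this is a simplified stand-in with assumed sampling parameters.

      import numpy as np

      def phase_locking_factor(trials, fs, freq):
          """Inter-trial phase coherence at one frequency.

          trials : array (n_trials, n_samples) of single-trial EEG/MEG epochs.
          Returns a value between 0 (random phase) and 1 (perfect phase locking).
          """
          n = trials.shape[1]
          bin_idx = int(round(freq * n / fs))
          spectra = np.fft.rfft(trials, axis=1)[:, bin_idx]   # complex value at `freq` per trial
          unit_phasors = spectra / np.abs(spectra)
          return float(np.abs(unit_phasors.mean()))

      # Hypothetical epochs: a weak 40-Hz response with fixed phase plus noise.
      fs, n_trials, n_samp = 500, 100, 500
      t = np.arange(n_samp) / fs
      rng = np.random.default_rng(2)
      epochs = 0.2 * np.sin(2 * np.pi * 40 * t) + rng.standard_normal((n_trials, n_samp))
      print(phase_locking_factor(epochs, fs, 40.0))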

  7. Audiovisual functional magnetic resonance imaging adaptation reveals multisensory integration effects in object-related sensory cortices.

    Science.gov (United States)

    Doehrmann, Oliver; Weigelt, Sarah; Altmann, Christian F; Kaiser, Jochen; Naumer, Marcus J

    2010-03-03

    Information integration across different sensory modalities contributes to object recognition, the generation of associations and long-term memory representations. Here, we used functional magnetic resonance imaging adaptation to investigate the presence of sensory integrative effects at cortical levels as early as nonprimary auditory and extrastriate visual cortices, which are implicated in intermediate stages of object processing. Stimulation consisted of an adapting audiovisual stimulus S(1) and a subsequent stimulus S(2) from the same basic-level category (e.g., cat). The stimuli were carefully balanced with respect to stimulus complexity and semantic congruency and presented in four experimental conditions: (1) the same image and vocalization for S(1) and S(2), (2) the same image and a different vocalization, (3) different images and the same vocalization, or (4) different images and vocalizations. This two-by-two factorial design allowed us to assess the contributions of auditory and visual stimulus repetitions and changes in a statistically orthogonal manner. Responses in visual regions of right fusiform gyrus and right lateral occipital cortex were reduced for repeated visual stimuli (repetition suppression). Surprisingly, left lateral occipital cortex showed stronger responses to repeated auditory stimuli (repetition enhancement). Similarly, auditory regions of interest of the right middle superior temporal gyrus and sulcus exhibited repetition suppression to auditory repetitions and repetition enhancement to visual repetitions. Our findings of crossmodal repetition-related effects in cortices of the respective other sensory modality add to the emerging view that in human subjects sensory integrative mechanisms operate on earlier cortical processing levels than previously assumed.

  8. How do auditory cortex neurons represent communication sounds?

    Science.gov (United States)

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans

    Directory of Open Access Journals (Sweden)

    Zhuang Cui

    2017-08-01

    Full Text Available The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamic of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. While in later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, integrations between sensory, perceptional, attentional, motor, emotional, and executive processes.

  10. Mapping the after-effects of theta burst stimulation on the human auditory cortex with functional imaging.

    Science.gov (United States)

    Andoh, Jamila; Zatorre, Robert J

    2012-09-12

    Auditory cortex pertains to the processing of sound, which is at the basis of speech or music-related processing. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial Magnetic Stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions. One solution to this problem is to combine TMS with functional Magnetic resonance imaging (fMRI). The idea here is that fMRI will provide an index of changes in brain activity associated with TMS. Thus, fMRI would give an independent means of assessing which areas are affected by TMS and how they are modulated. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which the network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS. However, this

  11. Transitional Probabilities Are Prioritized over Stimulus/Pattern Probabilities in Auditory Deviance Detection: Memory Basis for Predictive Sound Processing.

    Science.gov (United States)

    Mittag, Maria; Takegata, Rika; Winkler, István

    2016-09-14

    Representations encoding the probabilities of auditory events do not directly support predictive processing. In contrast, information about the probability with which a given sound follows another (transitional probability) allows predictions of upcoming sounds. We tested whether behavioral and cortical auditory deviance detection (the latter indexed by the mismatch negativity event-related potential) relies on probabilities of sound patterns or on transitional probabilities. We presented healthy adult volunteers with three types of rare tone-triplets among frequent standard triplets of high-low-high (H-L-H) or L-H-L pitch structure: proximity deviant (H-H-H/L-L-L), reversal deviant (L-H-L/H-L-H), and first-tone deviant (L-L-H/H-H-L). If deviance detection was based on pattern probability, reversal and first-tone deviants should be detected with similar latency because both differ from the standard at the first pattern position. If deviance detection was based on transitional probabilities, then reversal deviants should be the most difficult to detect because, unlike the other two deviants, they contain no low-probability pitch transitions. The data clearly showed that both behavioral and cortical auditory deviance detection uses transitional probabilities. Thus, the memory traces underlying cortical deviance detection may provide a link between stimulus probability-based change/novelty detectors operating at lower levels of the auditory system and higher auditory cognitive functions that involve predictive processing. Our research presents the first definite evidence for the auditory system prioritizing transitional probabilities over probabilities of individual sensory events. Forming representations for transitional probabilities paves the way for predictions of upcoming sounds. Several recent theories suggest that predictive processing provides the general basis of human perception, including important auditory functions, such as auditory scene analysis. Our
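
    The distinction between pattern probability and transitional probability can be made concrete with a toy calculation: estimate P(next tone | current tone) from the standard triplets (counting transitions within triplets only, under the assumption that triplets are separated by gaps) and then score each deviant by the product of its internal transition probabilities. With only standards counted, reversal deviants contain only high-probability transitions, whereas proximity and first-tone deviants do not. The code and counts are illustrative, not the study's analysis.

      import numpy as np

      def within_triplet_transition_probs(triplets):
          """P(next tone | current tone), counted within triplets only (pairs 1-2 and 2-3)."""
          states = {"H": 0, "L": 1}
          counts = np.zeros((2, 2))
          for a, b, c in triplets:
              counts[states[a], states[b]] += 1
              counts[states[b], states[c]] += 1
          return counts / counts.sum(axis=1, keepdims=True)

      def triplet_transition_score(triplet, probs):
          """Product of the transitional probabilities inside one triplet."""
          states = {"H": 0, "L": 1}
          a, b, c = triplet
          return probs[states[a], states[b]] * probs[states[b], states[c]]

      standards = ["HLH"] * 100                     # frequent standard triplets
      probs = within_triplet_transition_probs(standards)
      for deviant in ("LHL", "HHH", "LLH"):         # reversal, proximity, first-tone deviants
          print(deviant, triplet_transition_score(deviant, probs))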

  12. Parvalbumin immunoreactivity in the auditory cortex of a mouse model of presbycusis.

    Science.gov (United States)

    Martin del Campo, H N; Measor, K R; Razak, K A

    2012-12-01

    Age-related hearing loss (presbycusis) affects ∼35% of humans older than sixty-five years. Symptoms of presbycusis include impaired discrimination of sounds with fast temporal features, such as those present in speech. Such symptoms likely arise because of central auditory system plasticity, but the underlying components are incompletely characterized. The rapid spiking inhibitory interneurons that co-express the calcium binding protein Parvalbumin (PV) are involved in shaping neural responses to fast spectrotemporal modulations. Here, we examined cortical PV expression in the C57BL/6 (C57) mouse, a strain commonly studied as a presbycusis model. We examined whether PV expression showed auditory cortical field- and layer-specific susceptibilities with age. The percentage of PV-expressing cells relative to Nissl-stained cells was counted in the anterior auditory field (AAF) and primary auditory cortex (A1) in three age groups: young (1-2 months), middle-aged (6-8 months) and old (14-20 months). There were significant declines in the percentage of cells expressing PV at a detectable level in layers I-IV of both A1 and AAF in the old mice compared to young mice. In layers V-VI, there was an increase in the percentage of PV-expressing cells in the AAF of the old group. There were no changes in the percentage of PV-expressing cells in layers V-VI of A1. These data suggest cortical layer(s)- and field-specific susceptibility of PV+ cells with presbycusis. The results are consistent with the hypothesis that a decline in inhibitory neurotransmission, particularly in the superficial cortical layers, occurs with presbycusis. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  14. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  16. Auditory verbal hallucinations are related to cortical thinning in the left middle temporal gyrus of patients with schizophrenia.

    Science.gov (United States)

    Cui, Y; Liu, B; Song, M; Lipnicki, D M; Li, J; Xie, S; Chen, Y; Li, P; Lu, L; Lv, L; Wang, H; Yan, H; Yan, J; Zhang, H; Zhang, D; Jiang, T

    2018-01-01

    Auditory verbal hallucinations (AVHs) are one of the most common and severe symptoms of schizophrenia, but the neuroanatomical abnormalities underlying AVHs are not well understood. The present study aims to investigate whether AVHs are associated with cortical thinning. Participants were schizophrenia patients from four centers across China, 115 with AVHs and 93 without AVHs, as well as 261 healthy controls. All received 3 T T1-weighted brain scans, and whole brain vertex-wise cortical thickness was compared across groups. Correlations between AVH severity and cortical thickness were also determined. The left middle part of the middle temporal gyrus (MTG) was significantly thinner in schizophrenia patients with AVHs than in patients without AVHs and healthy controls. Inferences were made using a false discovery rate approach with a threshold at p < 0.05. Left MTG thickness did not differ between patients without AVHs and controls. These results were replicated by a meta-analysis showing them to be consistent across the four centers. Cortical thickness of the left MTG was also found to be inversely correlated with hallucination severity across all schizophrenia patients. The results of this multi-center study suggest that an abnormally thin left MTG could be involved in the pathogenesis of AVHs in schizophrenia.
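
    The vertex-wise group comparison above relies on false-discovery-rate control. As a minimal illustration of the generic Benjamini-Hochberg procedure (not the authors' actual analysis code; the p-values below are hypothetical), a sketch in Python:

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Boolean mask of p-values declared significant at FDR level q
    using the Benjamini-Hochberg step-up procedure (illustrative sketch)."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                        # ascending p-values
    ranked = p[order]
    thresholds = (np.arange(1, m + 1) / m) * q   # k/m * q for rank k
    below = ranked <= thresholds
    significant = np.zeros(m, dtype=bool)
    if below.any():
        k_max = np.max(np.nonzero(below)[0])     # largest passing rank
        significant[order[:k_max + 1]] = True
    return significant

# Hypothetical p-values for a handful of cortical vertices
print(benjamini_hochberg([0.0004, 0.012, 0.03, 0.041, 0.20, 0.55], q=0.05))
```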

  17. Spiking in auditory cortex following thalamic stimulation is dominated by cortical network activity

    Science.gov (United States)

    Krause, Bryan M.; Raz, Aeyal; Uhlrich, Daniel J.; Smith, Philip H.; Banks, Matthew I.

    2014-01-01

    The state of the sensory cortical network can have a profound impact on neural responses and perception. In rodent auditory cortex, sensory responses are reported to occur in the context of network events, similar to brief UP states, that produce “packets” of spikes and are associated with synchronized synaptic input (Bathellier et al., 2012; Hromadka et al., 2013; Luczak et al., 2013). However, traditional models based on data from visual and somatosensory cortex predict that ascending sensory thalamocortical (TC) pathways sequentially activate cells in layers 4 (L4), L2/3, and L5. The relationship between these two spatio-temporal activity patterns is unclear. Here, we used calcium imaging and electrophysiological recordings in murine auditory TC brain slices to investigate the laminar response pattern to stimulation of TC afferents. We show that although monosynaptically driven spiking in response to TC afferents occurs, the vast majority of spikes fired following TC stimulation occurs during brief UP states and outside the context of the L4>L2/3>L5 activation sequence. Specifically, monosynaptic subthreshold TC responses with similar latencies were observed throughout layers 2–6, presumably via synapses onto dendritic processes located in L3 and L4. However, monosynaptic spiking was rare, and occurred primarily in L4 and L5 non-pyramidal cells. By contrast, during brief, TC-induced UP states, spiking was dense and occurred primarily in pyramidal cells. These network events always involved infragranular layers, whereas involvement of supragranular layers was variable. During UP states, spike latencies were comparable between infragranular and supragranular cells. These data are consistent with a model in which activation of auditory cortex, especially supragranular layers, depends on internally generated network events that represent a non-linear amplification process, are initiated by infragranular cells and tightly regulated by feed-forward inhibitory

  18. Effects of aging and sensory loss on glial cells in mouse visual and auditory cortices

    Science.gov (United States)

    Tremblay, Marie-Ève; Zettel, Martha L.; Ison, James R.; Allen, Paul D.; Majewska, Ania K.

    2011-01-01

    Normal aging is often accompanied by a progressive loss of receptor sensitivity in hearing and vision, whose consequences on cellular function in cortical sensory areas have remained largely unknown. By examining the primary auditory (A1) and visual (V1) cortices in two inbred strains of mice undergoing either age-related loss of audition (C57BL/6J) or vision (CBA/CaJ), we were able to describe cellular and subcellular changes that were associated with normal aging (occurring in A1 and V1 of both strains) or specifically with age-related sensory loss (only in A1 of C57BL/6J or V1 of CBA/CaJ), using immunocytochemical electron microscopy and light microscopy. While the changes were subtle in neurons, glial cells and especially microglia were transformed in aged animals. Microglia became more numerous and irregularly distributed, displayed more variable cell body and process morphologies, occupied smaller territories, and accumulated phagocytic inclusions that often displayed ultrastructural features of synaptic elements. Additionally, evidence of myelination defects was observed, and aged oligodendrocytes became more numerous and were more often encountered in contiguous pairs. Most of these effects were profoundly exacerbated by age-related sensory loss. Together, our results suggest that the age-related alteration of glial cells in sensory cortical areas can be accelerated by activity-driven central mechanisms that result from an age-related loss of peripheral sensitivity. In light of our observations, these age-related changes in sensory function should be considered when investigating cellular, cortical and behavioral functions throughout the lifespan in these commonly used C57BL/6J and CBA/CaJ mouse models. PMID:22223464

  19. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  20. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Directory of Open Access Journals (Sweden)

    Yuko Hattori

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  1. Auditory and Visual Electrophysiology of Deaf Children with Cochlear Implants: Implications for Cross-modal Plasticity.

    Science.gov (United States)

    Corina, David P; Blau, Shane; LaMarr, Todd; Lawyer, Laurel A; Coffey-Corina, Sharon

    2017-01-01

    Deaf children who receive a cochlear implant early in life and engage in intensive oral/aural therapy often make great strides in spoken language acquisition. However, despite clinicians' best efforts, there is a great deal of variability in language outcomes. One concern is that cortical regions which normally support auditory processing may become reorganized for visual function, leaving fewer available resources for auditory language acquisition. The conditions under which these changes occur are not well understood, but we may begin investigating this phenomenon by looking for interactions between auditory and visual evoked cortical potentials in deaf children. If children with abnormal auditory responses show increased sensitivity to visual stimuli, this may indicate the presence of maladaptive cortical plasticity. We recorded evoked potentials, using both auditory and visual paradigms, from 25 typically hearing children and 26 deaf children (ages 2-8 years) with cochlear implants. An auditory oddball paradigm was used (85% /ba/ syllables vs. 15% frequency modulated tone sweeps) to elicit an auditory P1 component. Visual evoked potentials (VEPs) were recorded during presentation of an intermittent peripheral radial checkerboard while children watched a silent cartoon, eliciting a P1-N1 response. We observed reduced auditory P1 amplitudes and a lack of latency shift associated with normative aging in our deaf sample. We also observed shorter latencies in N1 VEPs to visual stimulus offset in deaf participants. While these data demonstrate cortical changes associated with auditory deprivation, we did not find evidence for a relationship between cortical auditory evoked potentials and the VEPs. This is consistent with descriptions of intra-modal plasticity within visual systems of deaf children, but does not provide evidence for cross-modal plasticity. In addition, we note that sign language experience had no effect on deaf children's early auditory and visual ERP

  2. Monkey's short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices.

    Science.gov (United States)

    Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C

    2016-06-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  3. Cortical inactivation by cooling in small animals

    Directory of Open Access Journals (Sweden)

    Ben eCoomber

    2011-06-01

    Reversible inactivation of the cortex by surface cooling is a powerful method for studying the function of a particular area. Implanted cooling cryoloops have been used to study the role of individual cortical areas in auditory processing of awake-behaving cats. Cryoloops have also been used in rodents for reversible inactivation of the cortex, but recently there has been a concern that the cryoloop may also cool non-cortical structures either directly or via the perfusion of blood, cooled as it passed close to the cooling loop. In this study we have confirmed that the loop can inactivate most of the auditory cortex without causing a significant reduction in temperature of the auditory thalamus or other sub-cortical structures. We placed a cryoloop on the surface of the guinea pig cortex, cooled it to 2°C and measured thermal gradients across the neocortical surface. We found that the temperature dropped to 20-24°C among cells within a radius of about 2.5 mm away from the loop. This temperature drop was sufficient to reduce activity of most cortical cells and led to the inactivation of almost the entire auditory region. When the temperature of thalamus, midbrain, and middle ear were measured directly during cortical cooling, there was a small drop in temperature (about 4°C) but this was not sufficient to directly reduce neural activity. In an effort to visualise the extent of neural inactivation we measured the uptake of thallium ions following an intravenous injection. This confirmed that there was a large reduction of activity across much of the ipsilateral cortex and only a small reduction in subcortical structures.

  4. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps.

    Science.gov (United States)

    Sood, Mariam R; Sereno, Martin I

    2016-08-01

    Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  5. Large-scale network dynamics of beta-band oscillations underlie auditory perceptual decision-making

    Directory of Open Access Journals (Sweden)

    Mohsen Alavash

    2017-06-01

    Perceptual decisions vary in the speed at which we make them. Evidence suggests that translating sensory information into perceptual decisions relies on distributed interacting neural populations, with decision speed hinging on power modulations of the neural oscillations. Yet the dependence of perceptual decisions on the large-scale network organization of coupled neural oscillations has remained elusive. We measured magnetoencephalographic signals in human listeners who judged acoustic stimuli composed of carefully titrated clouds of tone sweeps. These stimuli were used in two task contexts, in which the participants judged the overall pitch or direction of the tone sweeps. We traced the large-scale network dynamics of the source-projected neural oscillations on a trial-by-trial basis using power-envelope correlations and graph-theoretical network discovery. In both tasks, faster decisions were predicted by higher segregation and lower integration of coupled beta-band (∼16–28 Hz) oscillations. We also uncovered the brain network states that promoted faster decisions in either lower-order auditory or higher-order control brain areas. Specifically, decision speed in judging the tone sweep direction critically relied on the nodal network configurations of anterior temporal, cingulate, and middle frontal cortices. Our findings suggest that global network communication during perceptual decision-making is implemented in the human brain by large-scale couplings between beta-band neural oscillations. The speed at which we make perceptual decisions varies. This translation of sensory information into perceptual decisions hinges on dynamic changes in neural oscillatory activity. However, the large-scale neural-network embodiment supporting perceptual decision-making is unclear. We addressed this question by examining two auditory perceptual decision-making tasks. Using graph-theoretical network discovery, we traced the large-scale network
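
    As a rough illustration of the kind of pipeline this abstract describes (band-limited power envelopes, their pairwise correlation, and graph-theoretical summaries of segregation and integration), the following Python sketch uses toy data. Every signal, parameter, and threshold is a hypothetical placeholder, and modularity and global efficiency stand in as generic segregation/integration metrics rather than the study's exact measures:

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community
from scipy.signal import butter, filtfilt, hilbert

def beta_power_envelopes(signals, fs, band=(16.0, 28.0)):
    """Band-pass each source signal in the beta range and return its
    Hilbert power envelope (rows = sources, columns = samples)."""
    b, a = butter(4, np.array(band) / (fs / 2.0), btype="bandpass")
    filtered = filtfilt(b, a, signals, axis=1)
    return np.abs(hilbert(filtered, axis=1)) ** 2

def envelope_graph(signals, fs, density=0.2):
    """Correlate log power envelopes across sources and keep the strongest
    connections to form an undirected graph."""
    env = np.log(beta_power_envelopes(signals, fs) + 1e-12)
    corr = np.corrcoef(env)
    np.fill_diagonal(corr, 0.0)
    cutoff = np.quantile(np.abs(corr), 1.0 - density)
    return nx.from_numpy_array((np.abs(corr) >= cutoff).astype(int))

def segregation_and_integration(graph):
    """Crude graph summaries: modularity (segregation) and
    global efficiency (integration)."""
    parts = community.greedy_modularity_communities(graph)
    return community.modularity(graph, parts), nx.global_efficiency(graph)

# Hypothetical demo with random "source" time series sampled at 250 Hz
rng = np.random.default_rng(0)
toy_sources = rng.standard_normal((20, 5000))
print(segregation_and_integration(envelope_graph(toy_sources, fs=250.0)))
```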

  6. Contralateral white noise attenuates 40-Hz auditory steady-state fields but not N100m in auditory evoked fields.

    Science.gov (United States)

    Kawase, Tetsuaki; Maki, Atsuko; Kanno, Akitake; Nakasato, Nobukazu; Sato, Mika; Kobayashi, Toshimitsu

    2012-01-16

    The response characteristics of different auditory cortical components under conventional central masking conditions were examined by comparing the effects of contralateral white noise on the cortical component of 40-Hz auditory steady-state fields (ASSFs) and the N100m component in auditory evoked fields (AEFs) for tone bursts, using a helmet-shaped magnetoencephalography system in 8 healthy volunteers (7 males, mean age 32.6 years). The ASSFs were elicited by monaural 1000 Hz amplitude-modulated tones at 80 dB SPL, with the amplitude modulated at 39 Hz. The AEFs were elicited by monaural 1000 Hz tone bursts of 60 ms duration (rise and fall times of 10 ms, plateau time of 40 ms) at 80 dB SPL. The results indicated that continuous white noise at 70 dB SPL presented to the contralateral ear did not suppress the N100m response in either hemisphere, but significantly reduced the amplitude of the 40-Hz ASSF in both hemispheres, with asymmetry in that suppression of the 40-Hz ASSF was greater in the right hemisphere. Different effects of contralateral white noise on these two responses may reflect different functional auditory processes in the cortices. Copyright © 2011 Elsevier Inc. All rights reserved.
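
    The 40-Hz ASSF stimulus described here, a 1000 Hz carrier sinusoidally amplitude-modulated at 39 Hz, is straightforward to synthesize digitally. The sketch below is illustrative only: the sampling rate and 100% modulation depth are assumptions, and calibration to 80 dB SPL depends on the playback hardware and is omitted:

```python
import numpy as np

def am_tone(carrier_hz=1000.0, mod_hz=39.0, duration_s=1.0,
            fs=44100, mod_depth=1.0):
    """Sinusoidally amplitude-modulated tone: carrier at carrier_hz,
    envelope at mod_hz, modulation depth between 0 and 1."""
    t = np.arange(int(duration_s * fs)) / fs
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_hz * t)
    signal = envelope * np.sin(2 * np.pi * carrier_hz * t)
    return signal / np.max(np.abs(signal))   # normalize to +/- 1

tone = am_tone()   # 1 s, 1 kHz carrier, 39 Hz modulation
```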

  7. Monkey’s short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices

    Science.gov (United States)

    Fritz, Jonathan B.; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C.

    2016-01-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30–40 seconds to a duration of ~1–2 seconds, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. PMID:26707975

  8. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  9. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Gregory D. Scott

    2014-03-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl’s gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral versus perifoveal visual stimulation (11-15° vs. 2°-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl’s gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl’s gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral versus perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory and multisensory and/or supramodal regions, such as posterior parietal cortex, frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal and multisensory regions, to altered visual processing in

  10. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  11. Transient and sustained cortical activity elicited by connected speech of varying intelligibility

    Directory of Open Access Journals (Sweden)

    Tiitinen Hannu

    2012-12-01

    Background: The robustness of speech perception in the face of acoustic variation is founded on the ability of the auditory system to integrate the acoustic features of speech and to segregate them from background noise. This auditory scene analysis process is facilitated by top-down mechanisms, such as recognition memory for speech content. However, the cortical processes underlying these facilitatory mechanisms remain unclear. The present magnetoencephalography (MEG) study examined how the activity of auditory cortical areas is modulated by acoustic degradation and intelligibility of connected speech. The experimental design allowed for the comparison of cortical activity patterns elicited by acoustically identical stimuli which were perceived as either intelligible or unintelligible. Results: In the experiment, a set of sentences was presented to the subject in distorted, undistorted, and again in distorted form. The intervening exposure to undistorted versions of sentences rendered the initially unintelligible, distorted sentences intelligible, as evidenced by an increase from 30% to 80% in the proportion of sentences reported as intelligible. These perceptual changes were reflected in the activity of the auditory cortex, with the auditory N1m response (~100 ms) being more prominent for the distorted stimuli than for the intact ones. In the time range of the auditory P2m response (>200 ms), auditory cortex as well as regions anterior and posterior to this area generated a stronger response to sentences which were intelligible than unintelligible. During the sustained field (>300 ms), stronger activity was elicited by degraded stimuli in auditory cortex and by intelligible sentences in areas posterior to auditory cortex. Conclusions: The current findings suggest that the auditory system comprises bottom-up and top-down processes which are reflected in transient and sustained brain activity. It appears that analysis of acoustic features occurs

  12. Translating long-term potentiation from animals to humans: a novel method for noninvasive assessment of cortical plasticity.

    Science.gov (United States)

    Clapp, Wesley C; Hamm, Jeff P; Kirk, Ian J; Teyler, Timothy J

    2012-03-15

    Long-term potentiation (LTP) is a synaptic mechanism underlying learning and memory that has been studied extensively in laboratory animals. The study of LTP recently has been extended into humans with repetitive sensory stimulation to induce cortical LTP. In this review article, we will discuss past results from our group demonstrating that repetitive sensory stimulation (visual or auditory) induces LTP within the sensory cortex (visual/auditory, respectively) and can be measured noninvasively with electroencephalography or functional magnetic resonance imaging. We will discuss a number of studies that indicate that this form of LTP shares several characteristics with the synaptic LTP described in animals: it is frequency dependent, long-lasting (> 1 hour), input-specific, depotentiates with low-frequency stimulation, and is blocked by N-methyl-D-aspartate receptor blockers in rats. In this review, we also present new data with regard to the behavioral significance of human sensory LTP. These advances will permit enquiry into the functional significance of LTP that has been hindered by the absence of a human model. The ability to elicit LTP with a natural sensory stimulus noninvasively will provide a model system allowing the detailed examination of synaptic plasticity in normal subjects and might have future clinical applications in the diagnosis and assessment of neuropsychiatric and neurocognitive disorders. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  13. Gender-specific effects of prenatal and adolescent exposure to tobacco smoke on auditory and visual attention.

    Science.gov (United States)

    Jacobsen, Leslie K; Slotkin, Theodore A; Mencl, W Einar; Frost, Stephen J; Pugh, Kenneth R

    2007-12-01

    Prenatal exposure to active maternal tobacco smoking elevates risk of cognitive and auditory processing deficits, and of smoking in offspring. Recent preclinical work has demonstrated a sex-specific pattern of reduction in cortical cholinergic markers following prenatal, adolescent, or combined prenatal and adolescent exposure to nicotine, the primary psychoactive component of tobacco smoke. Given the importance of cortical cholinergic neurotransmission to attentional function, we examined auditory and visual selective and divided attention in 181 male and female adolescent smokers and nonsmokers with and without prenatal exposure to maternal smoking. Groups did not differ in age, educational attainment, symptoms of inattention, or years of parent education. A subset of 63 subjects also underwent functional magnetic resonance imaging while performing an auditory and visual selective and divided attention task. Among females, exposure to tobacco smoke during prenatal or adolescent development was associated with reductions in auditory and visual attention performance accuracy that were greatest in female smokers with prenatal exposure (combined exposure). Among males, combined exposure was associated with marked deficits in auditory attention, suggesting greater vulnerability of neurocircuitry supporting auditory attention to insult stemming from developmental exposure to tobacco smoke in males. Activation of brain regions that support auditory attention was greater in adolescents with prenatal or adolescent exposure to tobacco smoke relative to adolescents with neither prenatal nor adolescent exposure to tobacco smoke. These findings extend earlier preclinical work and suggest that, in humans, prenatal and adolescent exposure to nicotine exerts gender-specific deleterious effects on auditory and visual attention, with concomitant alterations in the efficiency of neurocircuitry supporting auditory attention.

  14. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Formisano, E.; Pepino, A.; Bracale, M.; Di Salle, F.; Lanfermann, H.; Zanella, F.E.

    1998-01-01

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors)

  15. Thresholding of auditory cortical representation by background noise

    Science.gov (United States)

    Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
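
    The level-dependent threshold shift described above can be summarized by a simple piecewise-linear relation. This is an illustrative formalization rather than an equation from the paper; the quiet threshold, the slope k, and the critical noise level L_c are free parameters:

```latex
% Illustrative piecewise-linear summary of the noise-dependent threshold shift;
% theta_quiet, k and L_c are free parameters, not values from the study.
\theta(L_{\mathrm{noise}}) =
  \begin{cases}
    \theta_{\mathrm{quiet}}, & L_{\mathrm{noise}} \le L_{c},\\
    \theta_{\mathrm{quiet}} + k\,(L_{\mathrm{noise}} - L_{c}), & L_{\mathrm{noise}} > L_{c}.
  \end{cases}
```

    Under this description, adding noise above the critical level acts like attenuating the tone by k(L_noise - L_c), which matches the abstract's interpretation of an upward translation of the tonal receptive field along the intensity axis.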

  16. Thresholding of auditory cortical representation by background noise.

    Science.gov (United States)

    Liang, Feixue; Bai, Lin; Tao, Huizhong W; Zhang, Li I; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity.

  17. Hearing after congenital deafness: central auditory plasticity and sensory deprivation.

    Science.gov (United States)

    Kral, A; Hartmann, R; Tillein, J; Heid, S; Klinke, R

    2002-08-01

    The congenitally deaf cat suffers from a degeneration of the inner ear. The organ of Corti bears no hair cells, yet the auditory afferents are preserved. Since these animals have no auditory experience, they were used as a model for congenital deafness. Kittens were equipped with a cochlear implant at different ages and electro-stimulated over a period of 2.0-5.5 months using a monopolar single-channel compressed analogue stimulation strategy (VIENNA-type signal processor). Following a period of auditory experience, we investigated cortical field potentials in response to electrical biphasic pulses applied by means of the cochlear implant. In comparison to naive unstimulated deaf cats and normal hearing cats, the chronically stimulated animals showed larger cortical regions producing middle-latency responses at or above 300 microV amplitude at the contralateral as well as the ipsilateral auditory cortex. The cortex ipsilateral to the chronically stimulated ear did not show any signs of reduced responsiveness when stimulating the 'untrained' ear through a second cochlear implant inserted in the final experiment. With comparable duration of auditory training, the activated cortical area was substantially smaller if implantation had been performed at an older age of 5-6 months. The data emphasize that young sensory systems in cats have a higher capacity for plasticity than older ones and that there is a sensitive period for the cat's auditory system.

  18. Logarithmic laws of echoic memory and auditory change detection in humans

    OpenAIRE

    Koji Inui; Tomokazu Urakawa; Koya Yamashiro; Naofumi Otsuru; Yasuyuki Takeshima; Ryusuke Kakigi

    2009-01-01

    The cortical mechanisms underlying echoic memory and change detection were investigated using an auditory change-related component (N100c) of event-related brain potentials. N100c was elicited by paired sound stimuli, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of N100c elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1 ~ 1000 ms), ...
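
    The reported negative correlation between N100c amplitude and the logarithm of the standard-to-deviant interval can be written as a simple decay law. This is an illustrative form only; the constants a and b are free parameters, not values estimated in the study:

```latex
% Illustrative logarithmic decay of the change-related response with the
% standard-to-deviant interval; a and b (> 0) are free parameters.
A_{\mathrm{N100c}}(\Delta t) \approx a - b \,\log(\Delta t),
\qquad 1~\mathrm{ms} \lesssim \Delta t \lesssim 1000~\mathrm{ms}.
```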

  19. Non-linear laws of echoic memory and auditory change detection in humans

    OpenAIRE

    Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke

    2010-01-01

    Background: The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results: Change-N1 was elicited by a simple paradigm with two to...

  20. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    Science.gov (United States)

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant
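
    A voxel-wise encoding analysis of the general kind sketched in this abstract can be illustrated in a few lines of Python: stimulus features (here, hypothetical pitch height and salience values) predict each voxel's response through a regularized linear model, scored by cross-validated correlation. This is a generic sketch, not the authors' pipeline, and all data below are simulated:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def encoding_accuracy(features, voxel_responses, alpha=1.0, n_splits=5):
    """Cross-validated voxel-wise encoding: fit ridge regression from
    stimulus features to responses and score each voxel by the correlation
    between predicted and measured responses."""
    n_sounds, n_voxels = voxel_responses.shape
    predictions = np.zeros_like(voxel_responses)
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in folds.split(features):
        model = Ridge(alpha=alpha).fit(features[train], voxel_responses[train])
        predictions[test] = model.predict(features[test])
    return np.array([np.corrcoef(predictions[:, v], voxel_responses[:, v])[0, 1]
                     for v in range(n_voxels)])

# Hypothetical demo: 120 sounds x 2 pitch features (height, salience), 500 voxels
rng = np.random.default_rng(1)
pitch_features = rng.standard_normal((120, 2))
bold = pitch_features @ rng.standard_normal((2, 500)) + rng.standard_normal((120, 500))
print(encoding_accuracy(pitch_features, bold).mean())
```

    Comparing the cross-validated scores of competing feature sets (e.g., height plus salience versus height alone) mirrors the model-comparison logic described in the abstract.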

  1. Fundamental deficits of auditory perception in Wernicke's aphasia.

    Science.gov (United States)

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
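
    The "criterion-free, adaptive measures of threshold" mentioned above can be illustrated with one common procedure, a 2-down/1-up staircase, sketched below in Python. This is a generic example, not the specific adaptive rule used in the study; the step size, stopping rule, and simulated listener are all invented for illustration:

```python
import math
import random

def simulated_listener(level, true_threshold=10.0, slope=0.5):
    """Toy psychometric function: probability correct rises with level."""
    return 1.0 / (1.0 + math.exp(-slope * (level - true_threshold)))

def two_down_one_up(start=20.0, step=2.0, n_reversals=8, seed=0):
    """Toy 2-down/1-up track: level drops after two consecutive correct
    responses and rises after any error; the mean of the later reversal
    levels estimates the ~70.7%-correct threshold."""
    rng = random.Random(seed)
    level, streak, last_move = start, 0, None
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        correct = rng.random() < simulated_listener(level)
        if correct:
            streak += 1
            move = -step if streak == 2 else 0.0
            if streak == 2:
                streak = 0
        else:
            streak = 0
            move = +step
        if move != 0.0:
            if last_move is not None and (move > 0) != (last_move > 0):
                reversal_levels.append(level)   # direction changed: a reversal
            last_move = move
            level += move
    return sum(reversal_levels[2:]) / len(reversal_levels[2:])

print(two_down_one_up())
```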

  2. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Formisano, E; Pepino, A; Bracale, M [Department of Electronic Engineering, Biomedical Unit, Universita di Napoli, Federic II, Italy, Via Claudio 21, 80125 Napoli (Italy); Di Salle, F [Department of Biomorphological and Functional Sciences, Radiologucal Unit, Universita di Napoli, Federic II, Italy, Via Claudio 21, 80125 Napoli (Italy); Lanfermann, H; Zanella, F E [Department of Neuroradiology, J.W. Goethe Universitat, Frankfurt/M. (Germany)

    1999-12-31

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors) 17 refs., 4 figs.

  3. Functional Changes in the Human Auditory Cortex in Ageing

    Science.gov (United States)

    Profant, Oliver; Tintěra, Jaroslav; Balogová, Zuzana; Ibrahim, Ibrahim; Jilek, Milan; Syka, Josef

    2015-01-01

    Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The elderly group with expressed presbycusis (EP) differed from the elderly group with mild presbycusis (MP) in hearing thresholds measured by pure tone audiometry, presence and amplitudes of transient otoacoustic emissions (TEOAE) and distortion-product otoacoustic emissions (DPOAE), as well as in speech understanding under noisy conditions. Acoustically evoked activity (pink noise centered around 350 Hz, 700 Hz, 1.5 kHz, 3 kHz, 8 kHz), recorded by BOLD fMRI from an area centered on Heschl’s gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing. PMID:25734519

  4. Coupling between Theta Oscillations and Cognitive Control Network during Cross-Modal Visual and Auditory Attention: Supramodal vs Modality-Specific Mechanisms.

    Science.gov (United States)

    Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T

    2016-01-01

    Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals as defined by fMRI BOLD activity and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network (DMN), and positive correlations to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. On the other hand, in sensory cortices there are opposing effects of theta activity during cross-modal auditory attention.

  5. Matrix metalloproteinase-9 deletion rescues auditory evoked potential habituation deficit in a mouse model of Fragile X Syndrome.

    Science.gov (United States)

    Lovelace, Jonathan W; Wen, Teresa H; Reinhard, Sarah; Hsu, Mike S; Sidhu, Harpreet; Ethell, Iryna M; Binder, Devin K; Razak, Khaleel A

    2016-05-01

    Sensory processing deficits are common in autism spectrum disorders, but the underlying mechanisms are unclear. Fragile X Syndrome (FXS) is a leading genetic cause of intellectual disability and autism. Electrophysiological responses in humans with FXS show reduced habituation with sound repetition, and this deficit may underlie auditory hypersensitivity in FXS. Our previous study in Fmr1 knockout (KO) mice revealed an unusually long state of increased sound-driven excitability in auditory cortical neurons, suggesting that cortical responses to repeated sounds may exhibit abnormal habituation as in humans with FXS. Here, we tested this prediction by comparing cortical event-related potentials (ERP) recorded from wildtype (WT) and Fmr1 KO mice. We report a repetition-rate-dependent reduction in habituation of N1 amplitude in Fmr1 KO mice and show that matrix metalloproteinase-9 (MMP-9), one of the known FMRP targets, contributes to the reduced ERP habituation. Our studies demonstrate a significant up-regulation of MMP-9 levels in the auditory cortex of adult Fmr1 KO mice, whereas a genetic deletion of Mmp-9 reverses ERP habituation deficits in Fmr1 KO mice. Although the N1 amplitude of Mmp-9/Fmr1 DKO recordings was larger than in WT and KO recordings, the habituation of ERPs in Mmp-9/Fmr1 DKO mice is similar to that in WT mice, implicating MMP-9 as a potential target for reversing sensory processing deficits in FXS. Together these data establish ERP habituation as a translation-relevant, physiological pre-clinical marker of auditory processing deficits in FXS and suggest that abnormal MMP-9 regulation is a mechanism underlying auditory hypersensitivity in FXS. Fragile X Syndrome (FXS) is the leading known genetic cause of autism spectrum disorders. Individuals with FXS show symptoms of auditory hypersensitivity. These symptoms may arise due to sustained neural responses to repeated sounds, but the underlying mechanisms remain unclear. For the first time, this study shows deficits

  6. Motor-auditory-visual integration: The role of the human mirror neuron system in communication and communication disorders.

    Science.gov (United States)

    Le Bel, Ronald M; Pineda, Jaime A; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning including action understanding and recognition. Such integration may also form the basis for language-related constructs such as theory of mind. In this article, we review the MNS as it relates to the cognitive development of language in typically developing children and in children at risk for communication disorders, such as children with autism spectrum disorder (ASD) or hearing impairment. Studying MNS development in these children may help illuminate an important role of the MNS in children with communication disorders. Studies with deaf children are especially important because they offer potential insights into how the MNS is reorganized when one modality, such as audition, is deprived during early cognitive development, and this may have long-term consequences on language maturation and theory of mind abilities. Readers will be able to (1) understand the concept of mirror neurons, (2) identify cortical areas associated with the MNS in animal and human studies, (3) discuss the use of mu suppression in the EEG for measuring the MNS in humans, and (4) discuss MNS dysfunction in children with ASD.

  7. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence

    Science.gov (United States)

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR, eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool for clinical neuroscience, so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective. PMID:26348628

  8. Cortical oscillatory activity during spatial echoic memory.

    Science.gov (United States)

    Kaiser, Jochen; Walker, Florian; Leiberg, Susanne; Lutzenberger, Werner

    2005-01-01

    In human magnetoencephalogram, we have found gamma-band activity (GBA), a putative measure of cortical network synchronization, during both bottom-up and top-down auditory processing. When sound positions had to be retained in short-term memory for 800 ms, enhanced GBA was detected over posterior parietal cortex, possibly reflecting the activation of higher sensory storage systems along the hypothesized auditory dorsal space processing stream. Additional prefrontal GBA increases suggested an involvement of central executive networks in stimulus maintenance. The present study assessed spatial echoic memory with the same stimuli but a shorter memorization interval of 200 ms. Statistical probability mapping revealed posterior parietal GBA increases at 80 Hz near the end of the memory phase and both gamma and theta enhancements in response to the test stimulus. In contrast to the previous short-term memory study, no prefrontal gamma or theta enhancements were detected. This suggests that spatial echoic memory is performed by networks along the putative auditory dorsal stream, without requiring an involvement of prefrontal executive regions.

  9. Comparative cortical bone thickness between the long bones of humans and five common non-human mammal taxa.

    Science.gov (United States)

    Croker, Sarah L; Reed, Warren; Donlon, Denise

    2016-03-01

    The task of identifying fragments of long bone shafts as human or non-human is difficult but necessary for both forensic and archaeological cases, and a fast, simple method is particularly useful. Previous literature suggests there may be differences in the thickness of the cortical bone between these two groups, but this has not been tested thoroughly. The aim of this study was not only to test this suggestion, but also to provide data that could be of practical assistance for future comparisons. The major limb bones (humerus, radius, femur and tibia) of 50 Caucasoid adult skeletons of known age and sex were radiographed, along with corresponding skeletal elements from sheep, pigs, cattle, large dogs and kangaroos. Measurements of shaft diameter, cortical bone thickness, and a cortical thickness index (sum of cortices divided by shaft diameter) were taken from the radiographs at five points along the bone shaft, in both anteroposterior and mediolateral orientations. Each variable for actual cortical bone thickness, as well as the cortical thickness indices, was compared between the human group (split by sex) and each of the non-human groups in turn, using Student's t-tests. Results showed that while significant differences did exist between the human groups and many of the non-human groups, these were not all in the same direction. That is, some variables in the human groups were significantly greater than, and others were significantly less than, the corresponding variable in the non-human groups, depending on the particular non-human group, the sex of the human group, or the variable under comparison. This was the case for measurements of both actual cortical bone thickness and the cortical thickness index. Therefore, for bone shaft fragments for which the skeletal element is unknown, the overlap in cortical bone thickness between different areas of different bones is too great to allow identification using this method alone. However, by providing extensive cortical bone
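    A minimal sketch of the cortical thickness index defined above (sum of the cortices divided by shaft diameter) follows; the measurement values are hypothetical placeholders in millimetres, not data from the study.

```python
# Cortical thickness index = (sum of the two cortical thicknesses) / shaft diameter,
# computed per radiographic view (anteroposterior or mediolateral). Values are hypothetical.
def cortical_thickness_index(cortex_a_mm: float, cortex_b_mm: float, shaft_diameter_mm: float) -> float:
    if shaft_diameter_mm <= 0:
        raise ValueError("shaft diameter must be positive")
    return (cortex_a_mm + cortex_b_mm) / shaft_diameter_mm

# Example: one anteroposterior measurement site on a femoral midshaft (hypothetical values).
ap_index = cortical_thickness_index(cortex_a_mm=6.2, cortex_b_mm=5.8, shaft_diameter_mm=27.0)
print(f"AP cortical thickness index: {ap_index:.2f}")
```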

  10. How Auditory Experience Differentially Influences the Function of Left and Right Superior Temporal Cortices.

    Science.gov (United States)

    Twomey, Tae; Waters, Dafydd; Price, Cathy J; Evans, Samuel; MacSweeney, Mairéad

    2017-09-27

    To investigate how hearing status, sign language experience, and task demands influence functional responses in the human superior temporal cortices (STC), we collected fMRI data from deaf and hearing participants (male and female), who either acquired sign language early or late in life. Our stimuli in all tasks were pictures of objects. We varied the linguistic and visuospatial processing demands in three different tasks that involved decisions about (1) the sublexical (phonological) structure of the British Sign Language (BSL) signs for the objects, (2) the semantic category of the objects, and (3) the physical features of the objects. Neuroimaging data revealed that in participants who were deaf from birth, STC showed increased activation during visual processing tasks. Importantly, this differed across hemispheres. Right STC was consistently activated regardless of the task, whereas left STC was sensitive to task demands. Significant activation was detected in the left STC only for the BSL phonological task. This task, we argue, placed greater demands on visuospatial processing than the other two tasks. In hearing signers, enhanced activation was absent in both left and right STC during all three tasks. Lateralization analyses demonstrated that the effect of deafness was more task-dependent in the left than the right STC whereas it was more task-independent in the right than the left STC. These findings indicate how the absence of auditory input from birth leads to dissociable and altered functions of left and right STC in deaf participants. SIGNIFICANCE STATEMENT Those born deaf can offer unique insights into neuroplasticity, in particular in regions of superior temporal cortex (STC) that primarily respond to auditory input in hearing people. Here we demonstrate that in those deaf from birth the left and the right STC have altered and dissociable functions. The right STC was activated regardless of demands on visual processing. In contrast, the left STC was

  11. Frequency-specific modulation of population-level frequency tuning in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Roberts Larry E

    2009-01-01

    Full Text Available Abstract Background Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under auditory focused attention by means of magnetoencephalography (MEG). Results In total, we used identical auditory stimuli between conditions, but presented them in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing compared to the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.
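    The sketch below illustrates, under assumed parameters, the kind of stimulus described above: a pure tone embedded in band-eliminated noise, that is, broadband noise with a spectral stop-band around the tone frequency. It is a generic construction, not the study's actual stimulus code.

```python
# Hedged sketch: pure tone plus band-eliminated (notched) noise; all values are assumptions.
import numpy as np

def band_eliminated_noise(fs, dur, stop_lo_hz, stop_hi_hz, seed=0):
    """White noise with a spectral stop-band carved out by zeroing FFT bins."""
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum[(freqs >= stop_lo_hz) & (freqs <= stop_hi_hz)] = 0.0  # the stop-band
    return np.fft.irfft(spectrum, n)

fs, dur, tone_hz = 44100, 0.5, 1000.0  # assumed sampling rate, duration, tone frequency
t = np.arange(int(dur * fs)) / fs
tone = 0.1 * np.sin(2 * np.pi * tone_hz * t)
noise = band_eliminated_noise(fs, dur, stop_lo_hz=800.0, stop_hi_hz=1250.0)  # narrow stop-band
stimulus = tone + 0.05 * noise / np.max(np.abs(noise))
```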

  12. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE.

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long

  13. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.

    Science.gov (United States)

    Stropahl, Maren; Debener, Stefan

    2017-01-01

    There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises whether cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18), and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the auditory system

  14. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration

    Directory of Open Access Journals (Sweden)

    Maren Stropahl

    2017-01-01

    Full Text Available There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises whether cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18), and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the

  15. Music training relates to the development of neural mechanisms of selective auditory attention.

    Science.gov (United States)

    Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina

    2015-04-01

    Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
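    As a rough sketch of the kind of metric discussed above, the snippet below quantifies trial-to-trial variability of an auditory-evoked response as the time-averaged across-trial standard deviation of epoched data. The array shapes and simulated values are assumptions for illustration, not the authors' analysis pipeline.

```python
# Hedged sketch: trial-to-trial variability of epoched evoked activity (simulated data).
import numpy as np

def trial_variability(epochs: np.ndarray) -> float:
    """epochs: (n_trials, n_samples) for one channel/condition; lower = less variable."""
    return float(np.mean(np.std(epochs, axis=0)))

rng = np.random.default_rng(1)
attended = 0.8 * rng.standard_normal((120, 500))  # simulated: lower across-trial variability
ignored = 1.0 * rng.standard_normal((120, 500))   # simulated: higher across-trial variability
print(trial_variability(attended) < trial_variability(ignored))  # True for this toy data
```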

  16. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the streaming conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in EEG data using MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates.

    Science.gov (United States)

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-07-20

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys.

  18. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Maojin Liang

    2017-10-01

    Full Text Available Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited. Ten children were good CI performers (GCP) and ten were poor (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity occurred in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We supposed that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential ability of cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  19. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants.

    Science.gov (United States)

    Liang, Maojin; Zhang, Junpeng; Liu, Jiahao; Chen, Yuebo; Cai, Yuexin; Wang, Xianjun; Wang, Junbo; Zhang, Xueyuan; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing

    2017-01-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited. Ten children were good CI performers (GCP) and ten were poor (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity occurred in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We supposed that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential ability of cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  20. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine whether particular sound types are more favorable for memory performance. Experiment 1 suggests that memory performance with vocalization sound types (particularly monkey vocalizations) is significantly better than with non-vocalization sound types, and that male monkeys outperform female monkeys overall. Experiment 2, controlling for number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, to species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds is more effective for auditory memory. 2009 Elsevier B.V.

  1. Focal Cortical Thickness Correlates of Exceptional Memory Training in Vedic Priests

    Directory of Open Access Journals (Sweden)

    Giridhar Padmanabhan Kalamangalam

    2014-10-01

    Full Text Available The capacity for semantic memory – the ability to acquire and store knowledge of the world – is highly developed in the human brain. In particular, semantic memory assimilated through an auditory route may be a uniquely human capacity. One method of obtaining neurobiological insight into auditory semantic memory mechanisms is through the study of experts. In this work, we study a group of Hindu Vedic priests, whose religious training requires the memorization of vast tracts of scriptural texts through an oral tradition, recalled spontaneously during a lifetime of subsequent spiritual practice. We demonstrate focal increases of cortical thickness in the dominant prefrontal lobe and non-dominant temporal lobe in Vedic priests, in comparison to a group of matched controls. The findings are relevant to current hypotheses regarding cognitive processes underlying storage and recall of long-term declarative memory.

  2. Cortical oscillations related to processing congruent and incongruent grapheme-phoneme pairs.

    Science.gov (United States)

    Herdman, Anthony T; Fujioka, Takako; Chau, Wilkin; Ross, Bernhard; Pantev, Christo; Picton, Terence W

    2006-05-15

    In this study, we investigated changes in cortical oscillations following congruent and incongruent grapheme-phoneme stimuli. Hiragana graphemes and phonemes were simultaneously presented as congruent or incongruent audiovisual stimuli to native Japanese-speaking participants. The discriminative reaction time was 57 ms shorter for congruent than incongruent stimuli. Analysis of MEG responses using synthetic aperture magnetometry (SAM) revealed that congruent stimuli evoked larger 2-10 Hz activity in the left auditory cortex within the first 250 ms after stimulus onset, and smaller 2-16 Hz activity in bilateral visual cortices between 250 and 500 ms. These results indicate that congruent visual input can modify cortical activity in the left auditory cortex.

  3. Background noise can enhance cortical auditory evoked potentials under certain conditions.

    Science.gov (United States)

    Papesh, Melissa A; Billings, Curtis J; Baltzell, Lucas S

    2015-07-01

    To use cortical auditory evoked potentials (CAEPs) to understand neural encoding in background noise and the conditions under which noise enhances CAEP responses. CAEPs from 16 normal-hearing listeners were recorded using the speech syllable /ba/ presented in quiet and in speech-shaped noise at signal-to-noise ratios of 10 and 30 dB. The syllable was presented binaurally and monaurally at two presentation rates. The amplitudes of N1 and N2 peaks were often significantly enhanced in the presence of low-level background noise relative to quiet conditions, while P1 and P2 amplitudes were consistently reduced in noise. P1 and P2 amplitudes were significantly larger during binaural compared to monaural presentations, while N1 and N2 peaks were similar between binaural and monaural conditions. Methodological choices impact CAEP peaks in very different ways. Negative peaks can be enhanced by background noise in certain conditions, while positive peaks are generally enhanced by binaural presentations. Methodological choices significantly impact CAEPs acquired in quiet and in noise. If CAEPs are to be used as a tool to explore signal encoding in noise, scientists must be cognizant of how differences in acquisition and processing protocols selectively shape CAEP responses. Published by Elsevier Ireland Ltd.
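    A hedged sketch of the signal-to-noise manipulation mentioned above follows: it scales a noise waveform so that a speech token is mixed at a requested SNR in dB. The stand-in waveforms, sampling rate, and helper names are assumptions, not the study's stimulus code.

```python
# Hedged sketch: mix a signal and noise at a requested SNR (all stimuli are stand-ins).
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the signal-to-noise RMS ratio equals `snr_db`, then add it."""
    noise = noise[: len(signal)]
    target_noise_rms = rms(signal) / (10 ** (snr_db / 20.0))
    return signal + noise * (target_noise_rms / rms(noise))

rng = np.random.default_rng(0)
ba = np.sin(2 * np.pi * 200 * np.arange(0, 0.15, 1 / 16000))  # crude stand-in for /ba/
noisy_10dB = mix_at_snr(ba, rng.standard_normal(ba.size), snr_db=10)
noisy_30dB = mix_at_snr(ba, rng.standard_normal(ba.size), snr_db=30)
```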

  4. Inhibition of histone deacetylase 3 via RGFP966 facilitates cortical plasticity underlying unusually accurate auditory associative cue memory for excitatory and inhibitory cue-reward associations.

    Science.gov (United States)

    Shang, Andrea; Bylipudi, Sooraz; Bieszczad, Kasia M

    2018-05-31

    Epigenetic mechanisms are key for regulating long-term memory (LTM) and are known to exert control on memory formation in multiple systems of the adult brain, including the sensory cortex. One epigenetic mechanism is chromatin modification by histone acetylation. Blocking the action of histone de-acetylases (HDACs), which normally negatively regulate LTM by repressing transcription, has been shown to enable memory formation. Indeed, HDAC inhibition appears to facilitate memory by altering the dynamics of gene expression events important for memory consolidation. However, less understood are the ways in which molecular-level consolidation processes alter subsequent memory to enhance storage or facilitate retrieval. Here we used a sensory perspective to investigate whether the characteristics of memory formed with HDAC inhibitors are different from naturally-formed memory. One possibility is that HDAC inhibition enables memory to form with greater sensory detail than normal. Because the auditory system undergoes learning-induced remodeling that provides substrates for sound-specific LTM, we aimed to identify behavioral effects of HDAC inhibition on memory for specific sound features using a standard model of auditory associative cue-reward learning, memory, and cortical plasticity. We found that three systemic post-training treatments of an HDAC3-inhibitor (RGFP966, Abcam Inc.) in rats in the early phase of training facilitated auditory discriminative learning, changed auditory cortical tuning, and increased the specificity for acoustic frequency formed in memory of both excitatory (S+) and inhibitory (S-) associations for at least 2 weeks. The findings support that epigenetic mechanisms act on neural and behavioral sensory acuity to increase the precision of associative cue memory, which can be revealed by studying the sensory characteristics of long-term associative memory formation with HDAC inhibitors. Published by Elsevier B.V.

  5. Functional mapping of the primate auditory system.

    Science.gov (United States)

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  6. Visualization of migration of human cortical neurons generated from induced pluripotent stem cells.

    Science.gov (United States)

    Bamba, Yohei; Kanemura, Yonehiro; Okano, Hideyuki; Yamasaki, Mami

    2017-09-01

    Neuronal migration is considered a key process in human brain development. However, direct observation of migrating human cortical neurons in the fetal brain is accompanied by ethical concerns and is a major obstacle in investigating human cortical neuronal migration. We established a novel system that enables direct visualization of migrating cortical neurons generated from human induced pluripotent stem cells (hiPSCs). We observed the migration of cortical neurons generated from hiPSCs derived from a control and from a patient with lissencephaly. Our system needs no viable brain tissue, which is usually used in slice culture. The migratory behavior of human cortical neurons can be observed more easily and more vividly, owing to their fluorescence and the glial scaffold, than with earlier methods. Our in vitro experimental system provides a new platform for investigating development of the human central nervous system and brain malformation. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Auditory analysis for speech recognition based on physiological models

    Science.gov (United States)

    Jeon, Woojay; Juang, Biing-Hwang

    2004-05-01

    To address the limitations of traditional cepstrum- or LPC-based front-end processing methods for automatic speech recognition, more elaborate methods based on physiological models of the human auditory system may be used to achieve more robust speech recognition in adverse environments. For this purpose, a modified version of a model of the primary auditory cortex featuring a three-dimensional mapping of auditory spectra [Wang and Shamma, IEEE Trans. Speech Audio Process. 3, 382-395 (1995)] is adopted and investigated for its use as an improved front-end processing method. The study is conducted in two ways: first, by relating the model's redundant representation to traditional spectral representations and showing that the former not only encompasses information provided by the latter, but also reveals more relevant information that makes it superior in describing the identifying features of speech signals; and second, by observing the statistical features of the representation for various classes of sound to show how different identifying features manifest themselves as specific patterns on the cortical map, thereby becoming a place-coded data set on which detection theory could be applied to simulate auditory perception and cognition.

  8. The non-lemniscal auditory cortex in ferrets: convergence of corticotectal inputs in the superior colliculus

    Directory of Open Access Journals (Sweden)

    Victoria M Bajo

    2010-05-01

    Full Text Available Descending cortical inputs to the superior colliculus (SC) contribute to the unisensory response properties of the neurons found there and are critical for multisensory integration. However, little is known about the relative contribution of different auditory cortical areas to this projection or the distribution of their terminals in the SC. We characterized this projection in the ferret by injecting tracers in the SC and auditory cortex. Large pyramidal neurons were labeled in layer V of different parts of the ectosylvian gyrus after tracer injections in the SC. Those cells were most numerous in the anterior ectosylvian gyrus (AEG), and particularly in the anterior ventral field, which receives both auditory and visual inputs. Labeling was also found in the posterior ectosylvian gyrus (PEG), predominantly in the tonotopically-organized posterior suprasylvian field. Profuse anterograde labeling was present in the SC following tracer injections at the site of acoustically-responsive neurons in the AEG or PEG, with terminal fields being both more prominent and clustered for inputs originating from the AEG. Terminals from both cortical areas were located throughout the intermediate and deep layers, but were most concentrated in the posterior half of the SC, where peripheral stimulus locations are represented. No inputs were identified from primary auditory cortical areas, although some labeling was found in the surrounding sulci. Our findings suggest that higher level auditory cortical areas, including those involved in multisensory processing, may modulate SC function via their projections into its deeper layers.

  9. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    Science.gov (United States)

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
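    A minimal sketch of the difference-waveform logic mentioned above (ERP for left-target trials minus ERP for right-target trials) follows. The sampling rate, array shapes, and peak-search window are assumptions for illustration, not the published analysis.

```python
# Hedged sketch: N2ac-style contrast as left-target ERP minus right-target ERP (simulated data).
import numpy as np

def erp(epochs: np.ndarray) -> np.ndarray:
    """Average over trials; epochs has shape (n_trials, n_samples)."""
    return epochs.mean(axis=0)

def lateralized_difference(left_target_epochs, right_target_epochs):
    return erp(left_target_epochs) - erp(right_target_epochs)

fs = 500                                # assumed sampling rate in Hz
rng = np.random.default_rng(2)
left = rng.standard_normal((200, fs))   # simulated 1-s epochs, left-target trials
right = rng.standard_normal((200, fs))  # simulated 1-s epochs, right-target trials
diff_wave = lateralized_difference(left, right)
win = slice(int(0.4 * fs), int(0.6 * fs))        # assumed 400-600 ms search window
peak_idx = int(np.argmin(diff_wave[win])) + win.start
print(f"most negative deflection near {peak_idx / fs * 1000:.0f} ms")
```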

  10. Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt.

    Science.gov (United States)

    Hickok, Gregory; Buchsbaum, Bradley; Humphries, Colin; Muftuler, Tugan

    2003-07-01

    The concept of auditory-motor interaction pervades speech science research, yet the cortical systems supporting this interface have not been elucidated. Drawing on experimental designs used in recent work in sensory-motor integration in the cortical visual system, we used fMRI in an effort to identify human auditory regions with both sensory and motor response properties, analogous to single-unit responses in known visuomotor integration areas. The sensory phase of the task involved listening to speech (nonsense sentences) or music (novel piano melodies); the "motor" phase of the task involved covert rehearsal/humming of the auditory stimuli. A small set of areas in the superior temporal and temporal-parietal cortex responded both during the listening phase and the rehearsal/humming phase. A left lateralized region in the posterior Sylvian fissure at the parietal-temporal boundary, area Spt, showed particularly robust responses to both phases of the task. Frontal areas also showed combined auditory + rehearsal responsivity consistent with the claim that the posterior activations are part of a larger auditory-motor integration circuit. We hypothesize that this circuit plays an important role in speech development as part of the network that enables acoustic-phonetic input to guide the acquisition of language-specific articulatory-phonetic gestures; this circuit may play a role in analogous musical abilities. In the adult, this system continues to support aspects of speech production, and, we suggest, supports verbal working memory.

  11. Diffusion tractography of the subcortical auditory system in a postmortem human brain

    OpenAIRE

    Sitek, Kevin

    2017-01-01

    The subcortical auditory system is challenging to identify with standard human brain imaging techniques: MRI signal decreases toward the center of the brain as well as at higher resolution, both of which are necessary for imaging small brainstem auditory structures. Using high-resolution diffusion-weighted MRI, we asked: Can we identify auditory structures and connections in high-resolution ex vivo images? Which structures and connections can be mapped in vivo?

  12. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

    Full Text Available Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim of exploring the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.

  13. Cortical specialisation to social stimuli from the first days to the second year of life: A rural Gambian cohort

    Directory of Open Access Journals (Sweden)

    S. Lloyd-Fox

    2017-06-01

    Full Text Available Brain and nervous system development in human infants during the first 1000 days (conception to two years of age) is critical, and compromised development during this time (such as from undernutrition or poverty) can have life-long effects on physical growth and cognitive function. Cortical mapping of cognitive function during infancy is poorly understood in resource-poor settings due to the lack of transportable and low-cost neuroimaging methods. Having established a signature cortical response to social versus non-social visual and auditory stimuli in infants from 4 to 6 months of age in the UK, here we apply this functional Near Infrared Spectroscopy (fNIRS) paradigm to investigate social responses in infants from the first postnatal days to the second year of life in two contrasting environments: rural Gambia and urban UK. Results reveal robust, localized, socially selective brain responses from 9 to 24 months of life to both the visual and auditory stimuli. In contrast, at 0–2 months of age infants exhibit non-social auditory selectivity, an effect that persists until 4–8 months, when we observe a transition to greater social stimulus selectivity. These findings reveal a robust developmental curve of cortical specialisation over the first two years of life.

  14. Acoustic Trauma Changes the Parvalbumin-Positive Neurons in Rat Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Congli Liu

    2018-01-01

    Full Text Available Acoustic trauma has been reported to damage the auditory periphery and central auditory system, and compromised cortical inhibition is involved in auditory disorders such as hyperacusis and tinnitus. Parvalbumin-containing neurons (PV neurons), a subset of GABAergic neurons, greatly shape and synchronize neural network activities. However, the change in PV neurons following acoustic trauma remains to be elucidated. The present study investigated how auditory cortical PV neurons change following unilateral 1-hour noise exposure (left ear, one-octave band noise centered at 16 kHz, 116 dB SPL). Noise exposure elevated the auditory brainstem response threshold of the exposed ear when examined 7 days later. More detectable PV neurons were observed in both sides of the auditory cortex of noise-exposed rats when compared to controls. The detectable PV neurons of the left auditory cortex (ipsilateral to the noise-exposed ear) outnumbered those of the right auditory cortex (contralateral to the noise-exposed ear). Quantification of Western blotted bands revealed a higher expression level of PV protein in the left cortex. These findings of more active PV neurons in noise-exposed rats suggest that a compensatory mechanism might be initiated to maintain a stable state of the brain.

  15. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    Science.gov (United States)

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  16. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    Science.gov (United States)

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders remained under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and consultations with speech therapists and a psychologist. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2, and P300 waves - together with a psychoacoustic test of central auditory function, the frequency pattern test (FPT). The children then took part in regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; psychoacoustic tests were again performed and P300 cortical potentials were recorded. Statistical analyses were then performed. The analyses revealed that auditory training is very effective in patients with dyslalia and other central auditory disorders. Auditory training may be a very effective therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  17. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can have profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD, including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  18. Auditory-Cortex Short-Term Plasticity Induced by Selective Attention

    Science.gov (United States)

    Jääskeläinen, Iiro P.; Ahveninen, Jyrki

    2014-01-01

    The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take hold within seconds of shifting the focus of attention. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458

  19. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  20. Early development of synchrony in cortical activations in the human.

    Science.gov (United States)

    Koolen, N; Dereymaeker, A; Räsänen, O; Jansen, K; Vervisch, J; Matic, V; Naulaers, G; De Vos, M; Van Huffel, S; Vanhatalo, S

    2016-05-13

    Early intermittent cortical activity is thought to play a crucial role in the development of neuronal networks, and large scale brain networks are known to provide the basis for higher brain functions. Yet, the early development of large scale synchrony in cortical activations is unknown. Here, we tested the hypothesis that the early intermittent cortical activations seen in the human scalp EEG show a clear developmental course during the last trimester of pregnancy, the period of intensive growth of cortico-cortical connections. We recorded scalp EEG from a total of 22 premature infants at post-menstrual age between 30 and 44 weeks, and the early cortical synchrony was quantified using the recently introduced activation synchrony index (ASI). The developmental correlations of ASI were computed for individual EEG signals as well as anatomically and mathematically defined spatial subgroups. We report two main findings. First, we observed a robust and statistically significant increase in ASI in all cortical areas. Second, there were significant spatial gradients in the synchrony in fronto-occipital and left-to-right directions. These findings provide evidence that early cortical activity is increasingly synchronized across the neocortex. The ASI-based metrics introduced in our work allow direct translational comparison to in vivo animal models, as well as hold promise for implementation as a functional developmental biomarker in future research on human neonates. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
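    The snippet below is a deliberately loose proxy for quantifying synchrony between cortical activations recorded on two EEG channels; it uses a simple envelope-correlation measure and is not the published activation synchrony index (ASI) formula. The smoothing window and simulated signals are assumptions.

```python
# Loose proxy for inter-channel activation synchrony (NOT the published ASI formula).
import numpy as np

def amplitude_envelope(x, win=100):
    """Rectify and smooth with a moving average of `win` samples (assumed window length)."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(x), kernel, mode="same")

def envelope_synchrony(ch_a, ch_b, win=100):
    env_a, env_b = amplitude_envelope(ch_a, win), amplitude_envelope(ch_b, win)
    return float(np.corrcoef(env_a, env_b)[0, 1])

rng = np.random.default_rng(3)
shared = rng.standard_normal(10000)                   # simulated shared cortical activity
left_eeg = shared + 0.5 * rng.standard_normal(10000)  # simulated left-hemisphere channel
right_eeg = shared + 0.5 * rng.standard_normal(10000) # simulated right-hemisphere channel
print(f"envelope synchrony: {envelope_synchrony(left_eeg, right_eeg):.2f}")
```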

  1. Histomorphometry and cortical robusticity of the adult human femur.

    Science.gov (United States)

    Miszkiewicz, Justyna Jolanta; Mahoney, Patrick

    2018-01-13

    Recent quantitative analyses of human bone microanatomy, as well as theoretical models that propose bone microstructure and gross anatomical associations, have started to reveal insights into biological links that may facilitate remodeling processes. However, relationships between bone size and the underlying cortical bone histology remain largely unexplored. The goal of this study is to determine the extent to which static indicators of bone remodeling and vascularity, measured using histomorphometric techniques, relate to femoral midshaft cortical width and robusticity. Using previously published and new quantitative data from 450 adult human male (n = 233) and female (n = 217) femora, we determine if these aspects of femoral size relate to bone microanatomy. Scaling relationships are explored and interpreted within the context of tissue form and function. Analyses revealed that the area and diameter of Haversian canals and secondary osteons, and densities of secondary osteons and osteocyte lacunae from the sub-periosteal region of the posterior midshaft femur cortex were significantly, but not consistently, associated with femoral size. Cortical width and bone robusticity were correlated with osteocyte lacunae density and scaled with positive allometry. Diameter and area of osteons and Haversian canals decreased as the width of cortex and bone robusticity increased, revealing a negative allometric relationship. These results indicate that microscopic products of cortical bone remodeling and vascularity are linked to femur size. Allometric relationships between more robust human femora with thicker cortical bone and histological products of bone remodeling correspond with principles of bone functional adaptation. Future studies may benefit from exploring scaling relationships between bone histomorphometric data and measurements of bone macrostructure.
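    As a generic illustration of the scaling analysis described above, the sketch below estimates an allometric exponent as the slope of a log-log regression; an exponent above 1 indicates positive allometry and one below 1 negative allometry. The simulated values are placeholders, not data from the study.

```python
# Hedged sketch: allometric exponent from a log-log ordinary least-squares fit (simulated data).
import numpy as np

def allometric_exponent(x, y):
    """Slope of log(y) regressed on log(x); ~1 isometry, >1 positive, <1 negative allometry."""
    slope, _intercept = np.polyfit(np.log(x), np.log(y), 1)
    return slope

rng = np.random.default_rng(4)
cortical_width_mm = rng.uniform(4.0, 10.0, 450)  # placeholder predictor values
osteocyte_density = 50.0 * cortical_width_mm ** 1.3 * np.exp(rng.normal(0.0, 0.05, 450))
print(f"estimated exponent: {allometric_exponent(cortical_width_mm, osteocyte_density):.2f}")
```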

  2. Auditory attention activates peripheral visual cortex.

    Directory of Open Access Journals (Sweden)

    Anthony D Cate

    Full Text Available BACKGROUND: Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency. CONCLUSIONS/SIGNIFICANCE: Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

  3. Cortical surface area and cortical thickness in the precuneus of adult humans.

    Science.gov (United States)

    Bruner, E; Román, F J; de la Cuétara, J M; Martin-Loeches, M; Colom, R

    2015-02-12

    The precuneus has received considerable attention in the last decade, because of its cognitive functions, its role as a central node of the brain networks, and its involvement in neurodegenerative processes. Paleoneurological studies suggested that form changes in the deep parietal areas represent a major character associated with the origin of the modern human brain morphology. A recent neuroanatomical survey based on shape analysis suggests that the proportions of the precuneus are also a determinant source of overall brain geometrical differences among adult individuals, influencing the brain spatial organization. Here, we evaluate the variation of cortical thickness and cortical surface area of the precuneus in a sample of adult humans, and their relation with geometry and cognition. Precuneal thickness and surface area are not correlated. There is a marked individual variation. The right precuneus is thinner and larger than the left one, but there are relevant fluctuating asymmetries, with only a modest correlation between the hemispheres. Males have a thicker cortex but differences in cortical area are not significant between sexes. The surface area of the precuneus shows a positive allometry with the brain surface area, although the correlation is modest. The dilation/contraction of the precuneus, described as a major factor of variability within adult humans, is associated with absolute increase/decrease of its surface, but not with variation in thickness. Precuneal thickness, precuneal surface area and precuneal morphology are not correlated with psychological factors such as intelligence, working memory, attention control, and processing speed, stressing further possible roles of this area in supporting default mode functions. Beyond gross morphology, the processes underlying the large phenotypic variation of the precuneus must be further investigated through specific cellular analyses, aimed at considering differences in cellular size, density

  4. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields of macaque auditory cortex. We chronically and simultaneously measured auditory evoked potentials over a large swath of primary and higher-order auditory cortex along the supratemporal plane in three animals, using high-density micro-electrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials elicited by conspecific vocalizations. We found a gradual decrease in overall classification performance along the caudal-to-rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors differed significantly across stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
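
    As a rough illustration of the kind of regularized multivariate classification described above (not the authors' exact pipeline), the sketch below trains an L2-regularized logistic regression on flattened evoked-potential features and reports cross-validated accuracy against chance. The array shapes, injected signal, and regularization strength are hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical data: trials x (channels * time points), one vocalization label per trial
    rng = np.random.default_rng(0)
    n_trials, n_channels, n_times, n_stimuli = 600, 64, 100, 10
    labels = rng.integers(0, n_stimuli, n_trials)
    evoked = rng.standard_normal((n_trials, n_channels, n_times))
    evoked += labels[:, None, None] * 0.05       # inject a weak stimulus-specific signal
    X = evoked.reshape(n_trials, -1)             # one flattened feature vector per trial

    # L2 regularization keeps the classifier stable when features (channels x samples)
    # far outnumber trials, the usual situation for evoked-potential decoding.
    clf = make_pipeline(StandardScaler(),
                        LogisticRegression(penalty="l2", C=0.01, max_iter=2000))
    accuracy = cross_val_score(clf, X, labels, cv=5)
    print(f"cross-validated accuracy: {accuracy.mean():.3f} (chance = {1 / n_stimuli:.3f})")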

  5. Differences in auditory timing between human and nonhuman primates

    NARCIS (Netherlands)

    Honing, H.; Merchant, H.

    2014-01-01

    The gradual audiomotor evolution hypothesis is proposed as an alternative interpretation to the auditory timing mechanisms discussed in Ackermann et al.'s article. This hypothesis accommodates the fact that the performance of nonhuman primates is comparable to humans in single-interval tasks (such

  6. Gray matter density of auditory association cortex relates to knowledge of sound concepts in primary progressive aphasia.

    Science.gov (United States)

    Bonner, Michael F; Grossman, Murray

    2012-06-06

    Long-term memory integrates the multimodal information acquired through perception into unified concepts, supporting object recognition, thought, and language. While some theories of human cognition have considered concepts to be abstract symbols, recent functional neuroimaging evidence has supported an alternative theory: that concepts are multimodal representations associated with the sensory and motor systems through which they are acquired. However, few studies have examined the effects of cortical lesions on the sensory and motor associations of concepts. We tested the hypothesis that individuals with disease in auditory association cortex would have difficulty processing concepts with strong sound associations (e.g., thunder). Human participants with the logopenic variant of primary progressive aphasia (lvPPA) performed a recognition task on words with strong associations in three modalities: Sound, Sight, and Manipulation. LvPPA participants had selective difficulty on Sound words relative to other modalities. Structural MRI analysis in lvPPA revealed gray matter atrophy in auditory association cortex, as defined functionally in a separate BOLD fMRI study of healthy adults. Moreover, lvPPA showed reduced gray matter density in the region of auditory association cortex that healthy participants activated when processing the same Sound words in a separate BOLD fMRI experiment. Finally, reduced gray matter density in this region in lvPPA directly correlated with impaired performance on Sound words. These findings support the hypothesis that conceptual memories are represented in the sensory and motor association cortices through which they are acquired.

  7. Discrimination of cortical laminae using MEG.

    Science.gov (United States)

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bestmann, Sven; Barnes, Gareth

    2014-11-15

    Typically MEG source reconstruction is used to estimate the distribution of current flow on a single anatomically derived cortical surface model. In this study we use two such models representing superficial and deep cortical laminae. We establish how well we can discriminate between these two different cortical layer models based on the same MEG data in the presence of different levels of co-registration noise, Signal-to-Noise Ratio (SNR) and cortical patch size. We demonstrate that it is possible to make a distinction between superficial and deep cortical laminae for levels of co-registration noise of less than 2 mm translation and 2° rotation at SNR > 11 dB. We also show that an incorrect estimate of cortical patch size will tend to bias layer estimates. We then use a 3D printed head-cast (Troebinger et al., 2014) to achieve comparable levels of co-registration noise, in an auditory evoked response paradigm, and show that it is possible to discriminate between these cortical layer models in real data. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially the posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimulus onset. Correspondingly, differences between responses to auditory stimuli alone and to combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields from approximately 110 ms after stimulus onset. These data indicate that visual influences are most salient in fields P and DCB, where they manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Impaired auditory sampling in dyslexia: Further evidence from combined fMRI and EEG

    Directory of Open Access Journals (Sweden)

    Katia eLehongre

    2013-08-01

    Full Text Available The aim of the present study was to explore auditory cortical oscillation properties in developmental dyslexia. We recorded cortical activity in 17 dyslexic participants and 15 matched controls using simultaneous EEG and fMRI during passive viewing of an audiovisual movie. We compared the distribution of brain oscillations in the delta, theta and gamma ranges over left and right auditory cortices. In controls, our results are consistent with the hypothesis that there is a dominance of gamma oscillations in the left hemisphere and a dominance of delta-theta oscillations in the right hemisphere. In dyslexics, we did not find such an interaction, but similar oscillations in both hemispheres. Thus, our results confirm that the primary cortical disruption in dyslexia lies in a lack of hemispheric specialization for gamma oscillations, which might disrupt the representation of or the access to phonemic units.

  10. Encoding of frequency-modulation (FM) rates in human auditory cortex.

    Science.gov (United States)

    Okamoto, Hidehiko; Kakigi, Ryusuke

    2015-12-14

    Frequency-modulated sounds play an important role in our daily social life. However, it currently remains unclear whether frequency modulation rates affect neural activity in the human auditory cortex. In the present study, using magnetoencephalography, we investigated the auditory evoked N1m and sustained field responses elicited by temporally repeated and superimposed frequency-modulated sweeps that were matched in the spectral domain but differed in frequency modulation rate (1, 4, 16, and 64 octaves per second). The results demonstrated that higher-rate frequency-modulated sweeps elicited smaller N1m and larger sustained field responses. Frequency modulation rate thus has a significant impact on human brain responses, providing a key for disentangling series of natural frequency-modulated sounds such as speech and music.

  11. Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.

    Science.gov (United States)

    Poremba, Amy; Mishkin, Mortimer

    2007-07-01

    Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys.

  12. Cortico-Cortical Receptive Field Estimates in Human Visual Cortex

    Directory of Open Access Journals (Sweden)

    Koen V Haak

    2012-05-01

    Full Text Available Human visual cortex comprises many visual areas that contain a map of the visual field (Wandell et al 2007, Neuron 56, 366–383). These visual field maps can be identified readily in individual subjects with functional magnetic resonance imaging (fMRI) during experimental sessions that last less than an hour (Wandell and Winawer 2011, Vis Res 718–737). Hence, visual field mapping with fMRI has been, and still is, a heavily used technique to examine the organisation of both normal and abnormal human visual cortex (Haak et al 2011, ACNR, 11(3), 20–21). However, visual field mapping cannot reveal every aspect of human visual cortex organisation. For example, the information processed within a visual field map arrives from somewhere and is sent to somewhere, and visual field mapping does not derive these input/output relationships. Here, we describe a new, model-based analysis for estimating the dependence between signals in distinct cortical regions using functional magnetic resonance imaging (fMRI) data. Just as a stimulus-referred receptive field predicts the neural response as a function of the stimulus contrast, the neural-referred receptive field predicts the neural response as a function of responses elsewhere in the nervous system. When applied to two cortical regions, this function can be called the cortico-cortical receptive field (CCRF). We model the CCRF as a Gaussian-weighted region on the cortical surface and apply the model to data from both stimulus-driven and resting-state experimental conditions in visual cortex.
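
    A minimal sketch of the Gaussian-weighted connective field idea described above: the target time series is predicted as a Gaussian-weighted sum of source-region time series, and the Gaussian center and width are found by a grid search. The 2D vertex coordinates and simulated data are assumptions for illustration; real analyses use distances measured along the cortical surface and appropriate fMRI noise models.

    import numpy as np

    def ccrf_predict(source_ts, vertex_xy, center, sigma):
        """Predict a target time series as a Gaussian-weighted sum of the
        source-region time series (source_ts: n_vertices x n_timepoints)."""
        d2 = np.sum((vertex_xy - center) ** 2, axis=1)
        weights = np.exp(-d2 / (2 * sigma ** 2))
        return (weights / weights.sum()) @ source_ts

    def fit_ccrf(source_ts, vertex_xy, target_ts, centers, sigmas):
        """Grid search for the Gaussian center and width that maximize the
        correlation between predicted and observed target time series."""
        best = (None, None, -np.inf)
        for center in centers:
            for sigma in sigmas:
                pred = ccrf_predict(source_ts, vertex_xy, center, sigma)
                r = np.corrcoef(pred, target_ts)[0, 1]
                if r > best[2]:
                    best = (center, sigma, r)
        return best

    # Toy example on a flat 2D patch with a known connective field
    rng = np.random.default_rng(2)
    xy = rng.uniform(0, 10, (200, 2))                  # vertex coordinates, arbitrary units
    source = rng.standard_normal((200, 300))           # source-region time series
    target = ccrf_predict(source, xy, np.array([3.0, 7.0]), 1.5)
    target = target + 0.2 * rng.standard_normal(300)   # add measurement noise
    centers = [np.array([x, y]) for x in range(10) for y in range(10)]
    center, sigma, r = fit_ccrf(source, xy, target, centers, sigmas=[0.5, 1.0, 1.5, 2.0, 3.0])
    print(center, sigma, round(r, 3))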

  13. Cochlear injury and adaptive plasticity of the auditory cortex

    Directory of Open Access Journals (Sweden)

    ANNA R. eFETONI

    2015-02-01

    Full Text Available Growing evidence suggests that cochlear stressors such as noise exposure and aging can induce homeostatic/maladaptive changes in the central auditory system from the brainstem to the cortex. Studies centered on such changes have revealed several mechanisms that operate in the context of sensory disruption after insult (noise trauma, drug- or age-related injury). Oxidative stress is central to current theories of induced sensorineural hearing loss and aging, and interventions to attenuate hearing loss are based on antioxidant agents. The present review addresses the recent literature on the alterations in hair cells and spiral ganglion neurons due to noise-induced oxidative stress in the cochlea, as well as on the impact of cochlear damage on auditory cortical neurons. The emerging picture emphasizes that noise-induced deafferentation and the upward spread of cochlear damage are associated with altered dendritic architecture of auditory pyramidal neurons. The cortical modifications may be reversed by treatment with antioxidants counteracting the cochlear redox imbalance. These findings open new therapeutic approaches to treat the functional consequences of cortical reorganization following cochlear damage.

  14. Spectro-temporal analysis of complex tones: two cortical processes dependent on retention of sounds in the long auditory store.

    Science.gov (United States)

    Jones, S J; Vaz Pato, M; Sprague, L

    2000-09-01

    We examined whether two cortical processes concerned with spectro-temporal analysis of complex tones, a 'C-process' generating CN1 and CP2 potentials at about 100 and 180 ms after a sudden change of pitch or timbre, and an 'M-process' generating MN1 and MP2 potentials of similar latency at the sudden cessation of repeated changes, are dependent on accumulation of a sound image in the long auditory store. The durations of steady (440 Hz) and rapidly oscillating (440-494 Hz, 16 changes/s) pitch of a synthesized 'clarinet' tone were reciprocally varied between 0.5 and 4.5 s within a duty cycle of 5 s. Potentials were recorded at the beginning and end of the period of oscillation in 10 non-attending normal subjects. The CN1 at the beginning of pitch oscillation and the MN1 at the end were both strongly influenced by the duration of the immediately preceding stimulus pattern, mean amplitudes being 3-4 times larger after 4.5 s as compared with 0.5 s. The processes responsible for both CN1 and MN1 are influenced by the duration of the preceding sound pattern over a period comparable to that of the 'echoic memory' or long auditory store. The store therefore appears to occupy a key position in spectro-temporal sound analysis. The C-process is concerned with the spectral structure of complex sounds, and may therefore reflect the 'grouping' of frequency components underlying auditory stream segregation. The M-process (mismatch negativity) is concerned with the temporal sound structure, and may play an important role in the extraction of information from sequential sounds.

  15. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input.

    Directory of Open Access Journals (Sweden)

    Max F K Happel

    Full Text Available Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information about frequency and sound level is integrated at the circuit level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex.
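
    Laminar current source density analysis, as mentioned in the record above, commonly estimates net transmembrane current from the second spatial derivative of the local field potential across equally spaced contacts, CSD(z) ≈ -σ·[φ(z+h) - 2φ(z) + φ(z-h)]/h². A minimal sketch, with illustrative units and a synthetic sink profile:

    import numpy as np

    def csd_second_derivative(lfp, spacing_mm, conductivity=0.3):
        """One-dimensional CSD from a laminar LFP profile.

        lfp: array (n_contacts, n_timepoints) with contacts equally spaced along depth.
        spacing_mm: inter-contact distance; conductivity: extracellular conductivity.
        Returns the CSD for the interior contacts (units are illustrative only)."""
        second_diff = lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]
        return -conductivity * second_diff / spacing_mm ** 2

    # Synthetic example: a current sink confined to the middle (input) layers
    depth_profile = np.exp(-((np.arange(16) - 7) ** 2) / 4.0)           # 16 contacts
    time_course = np.sin(2 * np.pi * 10 * np.arange(0, 0.2, 0.001))     # 10 Hz, 200 ms
    lfp = -depth_profile[:, None] * time_course[None, :]                # negative deflection
    csd = csd_second_derivative(lfp, spacing_mm=0.1)
    print(csd.shape)   # (14, 200): interior contacts x time points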

  16. Delayed access to bilateral input alters cortical organization in children with asymmetric hearing

    Directory of Open Access Journals (Sweden)

    Melissa Jane Polonenko

    2018-01-01

    Full Text Available Bilateral hearing in early development protects auditory cortices from reorganizing to prefer the better ear. Yet, such protection could be disrupted by mismatched bilateral input in children with asymmetric hearing who require electric stimulation of the auditory nerve from a cochlear implant in their deaf ear and amplified acoustic sound from a hearing aid in their better ear (bimodal hearing). Cortical responses to bimodal stimulation were measured by electroencephalography in 34 bimodal users and 16 age-matched peers with normal hearing, and compared with the same measures previously reported for 28 age-matched bilateral implant users. Both auditory cortices increasingly favoured the better ear with delay to implanting the deaf ear; the time course mirrored that occurring with delay to bilateral implantation in unilateral implant users. Preference for the implanted ear tended to occur with ongoing implant use when hearing was poor in the non-implanted ear. Speech perception deteriorated with longer deprivation and poorer access to high frequencies. Thus, cortical preference develops in children with asymmetric hearing but can be avoided by early provision of balanced bimodal stimulation. Although electric and acoustic stimulation differ, these inputs can work sympathetically when used bilaterally given sufficient hearing in the non-implanted ear.

  17. Auditory prediction during speaking and listening.

    Science.gov (United States)

    Sato, Marc; Shiller, Douglas M

    2018-02-02

    In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually-cued conditions, reduced N1/P2 amplitude was observed during speaking vs. listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not during active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Spatial organization of frequency preference and selectivity in the human inferior colliculus

    OpenAIRE

    De Martino, Federico; Moerel, Michelle; van de Moortele, Pierre-Francois; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia

    2013-01-01

    To date, the functional organization of human auditory sub-cortical structures can only be inferred from animal models. Here we use high-resolution functional MRI at ultra-high magnetic fields (7 Tesla) to map the organization of spectral responses in the human inferior colliculus (hIC), a sub-cortical structure fundamental for sound processing. We reveal a tonotopic map with a spatial gradient of preferred frequencies approximately oriented from dorso-lateral (low frequencies) to ventro-medi...

  19. Benefits and detriments of unilateral cochlear implant use on bilateral auditory development in children who are deaf

    Directory of Open Access Journals (Sweden)

    Karen A. Gordon

    2013-10-01

    Full Text Available We have explored both the benefits and detriments of providing electrical input through a cochlear implant in one ear to the auditory system of young children. A cochlear implant delivers electrical pulses to stimulate the auditory nerve, providing children who are deaf with access to sound. The goals of implantation are to restrict reorganization of the deprived immature auditory brain and promote development of hearing and spoken language. It is clear that limiting the duration of deprivation is a key factor. Additional considerations are the onset, etiology, and use of residual hearing as each of these can have unique effects on auditory development in the pre-implant period. New findings show that many children receiving unilateral cochlear implants are developing mature-like brainstem and thalamo-cortical responses to sound with long term use despite these sources of variability; however, there remain considerable abnormalities in cortical function. The most apparent, determined by implanting the other ear and measuring responses to acute stimulation, is a loss of normal cortical response from the deprived ear. Recent data reveal that this can be avoided in children by early implantation of both ears simultaneously or with limited delay. We conclude that auditory development requires input early in development and from both ears.

  20. Prediction for human intelligence using morphometric characteristics of cortical surface: partial least square analysis.

    Science.gov (United States)

    Yang, J-J; Yoon, U; Yun, H J; Im, K; Choi, Y Y; Lee, K H; Park, H; Hough, M G; Lee, J-M

    2013-08-29

    A number of imaging studies have reported neuroanatomical correlates of human intelligence with various morphological characteristics of the cerebral cortex. However, it is not yet clear whether these morphological properties of the cerebral cortex account for human intelligence. We assumed that the complex structure of the cerebral cortex could be explained effectively by considering cortical thickness, surface area, sulcal depth and absolute mean curvature together. In 78 young healthy adults (age range: 17-27, male/female: 39/39), we used the full-scale intelligence quotient (FSIQ) and the cortical measurements calculated in native space from each subject to determine how much of the variance in intelligence could be explained by combining the various cortical measures. Since the cortical measures are not independent but highly inter-related, we applied partial least squares (PLS) regression, one of the most promising multivariate analysis approaches, to overcome multicollinearity among the cortical measures. Our results showed that 30% of FSIQ was explained by the first latent variable extracted from the PLS regression analysis. Although it is difficult to relate the first derived latent variable to specific anatomy, we found that cortical thickness measures had a substantial impact on the PLS model, making them the most significant factor accounting for FSIQ. The results presented here strongly suggest that a new predictor combining different morphometric properties of the complex cortical structure is well suited to predicting human intelligence. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
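
    Partial least squares regression, as used in the record above, extracts latent variables that maximize the covariance between the correlated cortical measures and the outcome, which sidesteps multicollinearity. Below is a minimal sketch with scikit-learn; the subject count matches the abstract, but the feature layout, simulated data, and cross-validation scheme are assumptions for illustration.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Hypothetical data: 78 subjects x (4 measures on 148 parcels), plus an FSIQ score
    rng = np.random.default_rng(3)
    n_subjects, n_features = 78, 4 * 148
    X = rng.standard_normal((n_subjects, n_features))
    latent = X[:, :40].mean(axis=1)                 # built-in structure for the demo
    fsiq = 100 + 15 * (latent - latent.mean()) / latent.std() + rng.normal(0, 5, n_subjects)

    # A single latent variable; PLS tolerates the strong inter-correlation
    # (multicollinearity) among the cortical measures.
    pls = PLSRegression(n_components=1)
    predicted = cross_val_predict(pls, X, fsiq, cv=10).ravel()
    r2 = np.corrcoef(predicted, fsiq)[0, 1] ** 2
    print(f"cross-validated R^2 for FSIQ: {r2:.2f}")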

  1. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    Science.gov (United States)

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal two dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning, from visual attenuation of the auditory cortex to auditory attenuation of the visual cortex. Second, associative areas changed their sensory response profile from responding most strongly to visual input to responding most strongly to auditory input. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance of sensory areas and the audiovisual convergence in the associative area Middle Temporal Gyrus. These two factors allow for both stability and fast, dynamic tuning of the system when required. PMID:24518756

  2. The Hierarchical Cortical Organization of Human Speech Processing.

    Science.gov (United States)

    de Heer, Wendy A; Huth, Alexander G; Griffiths, Thomas L; Gallant, Jack L; Theunissen, Frédéric E

    2017-07-05

    Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to
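
    The variance partitioning logic described above can be sketched as follows: fit regularized regression models on each feature space and on their unions, evaluate prediction accuracy on held-out data, and attribute to each space the drop in explained variance when it is removed. The feature spaces, array shapes, and ridge penalty below are hypothetical placeholders, not the authors' voxelwise models.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    def heldout_r2(X_train, y_train, X_test, y_test, alpha=100.0):
        """Fraction of held-out response variance explained by a ridge model."""
        model = Ridge(alpha=alpha).fit(X_train, y_train)
        residual = y_test - model.predict(X_test)
        return 1 - residual.var() / y_test.var()

    # Hypothetical feature spaces (spectral, articulatory, semantic) for one voxel
    rng = np.random.default_rng(4)
    n_time = 3000
    spaces = {"spectral": rng.standard_normal((n_time, 30)),
              "articulatory": rng.standard_normal((n_time, 20)),
              "semantic": rng.standard_normal((n_time, 50))}
    bold = (spaces["semantic"][:, :5].sum(axis=1)
            + 0.5 * spaces["spectral"][:, :5].sum(axis=1)
            + rng.standard_normal(n_time))

    train, test = train_test_split(np.arange(n_time), test_size=0.2, random_state=0)

    def fit_spaces(names):
        X = np.hstack([spaces[name] for name in names])
        return heldout_r2(X[train], bold[train], X[test], bold[test])

    r2_full = fit_spaces(list(spaces))
    for name in spaces:
        unique = r2_full - fit_spaces([n for n in spaces if n != name])
        print(f"variance uniquely explained by {name}: {unique:.3f}")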

  3. Auditory object perception: A neurobiological model and prospective review.

    Science.gov (United States)

    Brefczynski-Lewis, Julie A; Lewis, James W

    2017-10-01

    Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters the cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic

  4. A Neural Circuit for Auditory Dominance over Visual Perception.

    Science.gov (United States)

    Song, You-Hyang; Kim, Jae-Hyun; Jeong, Hye-Won; Choi, Ilsong; Jeong, Daun; Kim, Kwansoo; Lee, Seung-Hee

    2017-02-22

    When conflicts occur during integration of visual and auditory information, one modality often dominates the other, but the underlying neural circuit mechanism remains unclear. Using auditory-visual discrimination tasks for head-fixed mice, we found that audition dominates vision in a process mediated by interaction between inputs from the primary visual (VC) and auditory (AC) cortices in the posterior parietal cortex (PTLp). Co-activation of the VC and AC suppresses VC-induced PTLp responses, leaving AC-induced responses. Furthermore, parvalbumin-positive (PV+) interneurons in the PTLp mainly receive AC inputs, and muscimol inactivation of the PTLp or optogenetic inhibition of its PV+ neurons abolishes auditory dominance in the resolution of cross-modal sensory conflicts without affecting either sensory perception. Conversely, optogenetic activation of PV+ neurons in the PTLp enhances the auditory dominance. Thus, our results demonstrate that AC input-specific feedforward inhibition of VC inputs in the PTLp is responsible for the auditory dominance during cross-modal integration. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Encoding and retrieval of artificial visuoauditory memory traces in the auditory cortex requires the entorhinal cortex.

    Science.gov (United States)

    Chen, Xi; Guo, Yiping; Feng, Jingyu; Liao, Zhengli; Li, Xinjian; Wang, Haitao; Li, Xiao; He, Jufang

    2013-06-12

    Damage to the medial temporal lobe impairs the encoding of new memories and the retrieval of memories acquired immediately before the damage in humans. In this study, we demonstrated that artificial visuoauditory memory traces can be established in the rat auditory cortex and that their encoding and retrieval depend on the entorhinal cortex of the medial temporal lobe. We trained rats to associate a visual stimulus with electrical stimulation of the auditory cortex using a classical conditioning protocol. After conditioning, we examined the associative memory traces electrophysiologically (i.e., visual stimulus-evoked responses of auditory cortical neurons) and behaviorally (i.e., visual stimulus-induced freezing and visual stimulus-guided reward retrieval). The establishment of a visuoauditory memory trace in the auditory cortex, which was detectable by electrophysiological recordings, was achieved over 20-30 conditioning trials and was blocked by unilateral, temporary inactivation of the entorhinal cortex. Retrieval of a previously established visuoauditory memory was also affected by unilateral entorhinal cortex inactivation. These findings suggest that the entorhinal cortex is necessary for the encoding, and involved in the retrieval, of artificial visuoauditory memory in the auditory cortex, at least during the early stages of memory consolidation.

  6. Probing region-specific microstructure of human cortical areas using high angular and spatial resolution diffusion MRI.

    Science.gov (United States)

    Aggarwal, Manisha; Nauen, David W; Troncoso, Juan C; Mori, Susumu

    2015-01-15

    Regional heterogeneity in cortical cyto- and myeloarchitecture forms the structural basis of mapping of cortical areas in the human brain. In this study, we investigate the potential of diffusion MRI to probe the microstructure of cortical gray matter and its region-specific heterogeneity across cortical areas in the fixed human brain. High angular resolution diffusion imaging (HARDI) data at an isotropic resolution of 92 μm with 30 diffusion-encoding directions were acquired using a 3D diffusion-weighted gradient-and-spin-echo sequence, from prefrontal (Brodmann area 9), primary motor (area 4), primary somatosensory (area 3b), and primary visual (area 17) cortical specimens (n=3 each) from three human subjects. Further, the diffusion MR findings in these cortical areas were compared with histological silver impregnation of the same specimens, in order to investigate the underlying architectonic features that constitute the microstructural basis of diffusion-driven contrasts in cortical gray matter. Our data reveal distinct and region-specific diffusion MR contrasts across the studied areas, allowing delineation of intracortical bands of tangential fibers in specific layers: layer I, layer VI, and the inner and outer bands of Baillarger. The findings of this work demonstrate unique sensitivity of diffusion MRI to differentiate region-specific cortical microstructure in the human brain, and will be useful for myeloarchitectonic mapping of cortical areas as well as to achieve an understanding of the basis of diffusion NMR contrasts in cortical gray matter. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Long-term exposure to noise impairs cortical sound processing and attention control.

    Science.gov (United States)

    Kujala, Teija; Shtyrov, Yury; Winkler, Istvan; Saher, Marieke; Tervaniemi, Mari; Sallinen, Mikael; Teder-Sälejärvi, Wolfgang; Alho, Kimmo; Reinikainen, Kalevi; Näätänen, Risto

    2004-11-01

    Long-term exposure to noise impairs human health, causing pathological changes in the inner ear as well as other anatomical and physiological deficits. Numerous individuals are daily exposed to excessive noise. However, there is a lack of systematic research on the effects of noise on cortical function. Here we report data showing that long-term exposure to noise has a persistent effect on central auditory processing and leads to concurrent behavioral deficits. We found that speech-sound discrimination was impaired in noise-exposed individuals, as indicated by behavioral responses and the mismatch negativity brain response. Furthermore, irrelevant sounds increased the distractibility of the noise-exposed subjects, which was shown by increased interference in task performance and aberrant brain responses. These results demonstrate that long-term exposure to noise has long-lasting detrimental effects on central auditory processing and attention control.

  8. Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex

    Science.gov (United States)

    Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie

    2013-01-01

    Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225

  9. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain the discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimated how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase-locked to the two streams (including the hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
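
    With each stream tagged at its own modulation rate, attention effects are read out as response power (or phase locking) at the tagging frequencies. Below is a minimal FFT-based sketch of that readout; the sampling rate, tagging frequencies, and simulated epochs are assumptions, and a source-space analysis like the one in the record above would apply this per cortical location rather than per sensor.

    import numpy as np

    def tagged_power(epochs, fs, freq):
        """Power at a tagging frequency in the epoch-averaged (phase-locked) response.

        epochs: array (n_epochs, n_samples); fs: sampling rate in Hz.
        Averaging across epochs before the FFT retains only activity that is
        phase-locked to the stimulus, i.e. the steady-state response."""
        evoked = epochs.mean(axis=0)
        spectrum = np.fft.rfft(evoked * np.hanning(evoked.size))
        freqs = np.fft.rfftfreq(evoked.size, d=1 / fs)
        return np.abs(spectrum[np.argmin(np.abs(freqs - freq))]) ** 2

    # Hypothetical: the attended stream is tagged at 37 Hz, the distractor at 43 Hz
    fs, duration, n_epochs = 1000, 2.0, 60
    t = np.arange(0, duration, 1 / fs)
    rng = np.random.default_rng(5)
    epochs = np.array([np.sin(2 * np.pi * 37 * t) + rng.standard_normal(t.size)
                       for _ in range(n_epochs)])
    print(tagged_power(epochs, fs, 37), tagged_power(epochs, fs, 43))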

  10. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    Science.gov (United States)

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading, and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially lesions of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was not able to repeat or to take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  11. Background noise exerts diverse effects on the cortical encoding of foreground sounds.

    Science.gov (United States)

    Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E

    2017-08-01

    In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may
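
    A toy version of the spike-train decoding idea described above: classify which frequency-modulated sweep was presented from (simulated Poisson) spike counts, and track how cross-validated accuracy changes as the stimulus-specific modulation is scaled down to mimic decreasing signal-to-noise ratios. The tuning curves, classifier, and SNR scaling are hypothetical, not the authors' method.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    def decoding_accuracy(tuning, snr_scale, n_trials=40, seed=0):
        """Cross-validated accuracy for decoding which FM sweep was presented from
        Poisson spike counts whose stimulus-specific modulation shrinks with SNR."""
        rng = np.random.default_rng(seed)
        n_stimuli, n_units = tuning.shape
        X, y = [], []
        for s in range(n_stimuli):
            rates = 5 + snr_scale * tuning[s]              # baseline + scaled tuning
            X.append(rng.poisson(rates, size=(n_trials, n_units)))
            y.append(np.full(n_trials, s))
        X, y = np.vstack(X), np.concatenate(y)
        return cross_val_score(GaussianNB(), X, y, cv=5).mean()

    # Hypothetical tuning of 30 cortical units to 8 FM sweeps
    rng = np.random.default_rng(6)
    tuning = rng.uniform(0, 10, size=(8, 30))
    for snr in (1.0, 0.5, 0.25, 0.1):
        print(f"SNR scale {snr:.2f}: accuracy {decoding_accuracy(tuning, snr):.2f}")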

  12. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of

  13. The Right Hemisphere Planum Temporale Supports Enhanced Visual Motion Detection Ability in Deaf People: Evidence from Cortical Thickness.

    Science.gov (United States)

    Shiell, Martha M; Champoux, François; Zatorre, Robert J

    2016-01-01

    After sensory loss, the deprived cortex can reorganize to process information from the remaining modalities, a phenomenon known as cross-modal reorganization. In blind people this cross-modal processing supports compensatory behavioural enhancements in the nondeprived modalities. Deaf people also show some compensatory visual enhancements, but a direct relationship between these abilities and cross-modally reorganized auditory cortex has only been established in an animal model, the congenitally deaf cat, and not in humans. Using T1-weighted magnetic resonance imaging, we measured cortical thickness in the planum temporale, Heschl's gyrus and sulcus, the middle temporal area MT+, and the calcarine sulcus, in early-deaf persons. We tested for a correlation between this measure and visual motion detection thresholds, a visual function where deaf people show enhancements as compared to hearing. We found that the cortical thickness of a region in the right hemisphere planum temporale, typically an auditory region, was greater in deaf individuals with better visual motion detection thresholds. This same region has previously been implicated in functional imaging studies as important for functional reorganization. The structure-behaviour correlation observed here demonstrates this area's involvement in compensatory vision and indicates an anatomical correlate, increased cortical thickness, of cross-modal plasticity.

  14. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Full Text Available Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data were obtained from 44 typically developing (TD) children (mean age 10.4 ± 2.4 years) and 95 children with ASD (mean age 10.2 ± 2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  15. Interactions between thalamic and cortical rhythms during semantic memory recall in human

    Science.gov (United States)

    Slotnick, Scott D.; Moo, Lauren R.; Kraut, Michael A.; Lesser, Ronald P.; Hart, John, Jr.

    2002-04-01

    Human scalp electroencephalographic rhythms, indicative of cortical population synchrony, have long been posited to reflect cognitive processing. Although numerous studies employing simultaneous thalamic and cortical electrode recording in nonhuman animals have explored the role of the thalamus in the modulation of cortical rhythms, direct evidence for thalamocortical modulation in human has not, to our knowledge, been obtained. We simultaneously recorded from thalamic and scalp electrodes in one human during performance of a cognitive task and found a spatially widespread, phase-locked, low-frequency rhythm (7-8 Hz) power decrease at thalamus and scalp during semantic memory recall. This low-frequency rhythm power decrease was followed by a spatially specific, phase-locked, fast-rhythm (21-34 Hz) power increase at thalamus and occipital scalp. Such a pattern of thalamocortical activity reflects a plausible neural mechanism underlying semantic memory recall that may underlie other cognitive processes as well.
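
    Band-limited power time courses of the kind reported above (a 7-8 Hz decrease followed by a 21-34 Hz increase) are commonly extracted by band-pass filtering and taking the Hilbert envelope. Below is a minimal sketch on a synthetic trace; the filter order, sampling rate, and signal are assumptions, not the recording parameters of the study.

    import numpy as np
    from scipy.signal import butter, hilbert, sosfiltfilt

    def band_power_timecourse(signal, fs, band):
        """Instantaneous power in a band via band-pass filtering + Hilbert envelope."""
        sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
        return np.abs(hilbert(sosfiltfilt(sos, signal))) ** 2

    # Synthetic trace: a 7-8 Hz rhythm that drops out, then a faster rhythm that appears
    fs = 500
    t = np.arange(0, 4, 1 / fs)
    rng = np.random.default_rng(7)
    trace = (np.sin(2 * np.pi * 7.5 * t) * (t < 2)
             + 0.5 * np.sin(2 * np.pi * 27 * t) * (t >= 2)
             + 0.3 * rng.standard_normal(t.size))
    slow = band_power_timecourse(trace, fs, (7, 8))
    fast = band_power_timecourse(trace, fs, (21, 34))
    half = len(t) // 2
    print(slow[:half].mean() > slow[half:].mean(), fast[half:].mean() > fast[:half].mean())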

  16. Active auditory experience in infancy promotes brain plasticity in Theta and Gamma oscillations

    Directory of Open Access Journals (Sweden)

    Gabriella Musacchia

    2017-08-01

    Full Text Available Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4 to 7 months of age, confers a processing advantage compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4–6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33–37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.

  17. Long latency auditory evoked potentials in children with cochlear implants: systematic review.

    Science.gov (United States)

    Silva, Liliane Aparecida Fagundes; Couto, Maria Inês Vieira; Matas, Carla Gentile; Carvalho, Ana Claudia Martinho de

    2013-11-25

    The aim of this study was to analyze the findings on Cortical Auditory Evoked Potentials in children with cochlear implants through a systematic literature review. After formulation of the research question, studies were searched in four databases with the following descriptors: electrophysiology (eletrofisiologia), cochlear implantation (implante coclear), child (criança), neuronal plasticity (plasticidade neuronal) and audiology (audiologia); original, complete articles published between 2002 and 2013 in Brazilian Portuguese or English were selected. A total of 208 studies were found; however, only 13 met the established criteria and were further analyzed; data were extracted to analyze the methodology and content of the studies. The results described suggest rapid changes in the P1 component of Cortical Auditory Evoked Potentials in children with cochlear implants. Although there are few studies on the theme, cochlear implantation has been shown to produce effective changes in central auditory pathways, especially in children implanted before 3 years and 6 months of age.

  18. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain.

    Science.gov (United States)

    Martínez, I; Rosa, M; Arsuaga, J-L; Jarabo, P; Quam, R; Lorenzo, C; Gracia, A; Carretero, J-M; Bermúdez de Castro, J-M; Carbonell, E

    2004-07-06

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range.

  19. Probing neural mechanisms underlying auditory stream segregation in humans by transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Deike, Susann; Deliano, Matthias; Brechmann, André

    2016-10-01

    One hypothesis concerning the neural underpinnings of auditory streaming states that frequency tuning of tonotopically organized neurons in primary auditory fields in combination with physiological forward suppression is necessary for the separation of representations of high-frequency A and low-frequency B tones. The extent of spatial overlap between the tonotopic activations of A and B tones is thought to underlie the perceptual organization of streaming sequences into one coherent or two separate streams. The present study attempts to interfere with these mechanisms by transcranial direct current stimulation (tDCS) and to probe behavioral outcomes reflecting the perception of ABAB streaming sequences. We hypothesized that tDCS, by modulating cortical excitability, causes a change in the separateness of the representations of A and B tones, which leads to a change in the proportions of one-stream and two-stream percepts. To test this, 22 subjects were presented with ambiguous ABAB sequences of three different frequency separations (∆F) and had to decide on their current percept after receiving sham, anodal, or cathodal tDCS over the left auditory cortex. We could confirm our hypothesis at the most ambiguous ∆F condition of 6 semitones. For anodal compared with sham and cathodal stimulation, we found a significant decrease in the proportion of two-stream perception and an increase in the proportion of one-stream perception. The results demonstrate the feasibility of using tDCS to probe mechanisms underlying auditory streaming through the use of various behavioral measures. Moreover, this approach allows one to probe the functions of auditory regions and their interactions with other processing stages. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
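
    For readers unfamiliar with the stimulus, an ABAB streaming sequence alternates A and B tones whose frequency separation ∆F is specified in semitones. The sketch below generates such a sequence; all durations, frequencies and the number of cycles are placeholder values, not the exact stimuli used in the study.

```python
# Placeholder ABAB sequence: A tones at a base frequency, B tones delta_f
# semitones higher, alternating with short silent gaps.
import numpy as np

fs = 44100
tone_dur, gap_dur = 0.1, 0.025          # tone and gap durations (s), placeholders
f_a = 400.0                             # A-tone frequency (Hz), placeholder
delta_f = 6                             # frequency separation in semitones
f_b = f_a * 2 ** (delta_f / 12.0)       # B-tone frequency

def tone(freq, dur):
    t = np.arange(int(dur * fs)) / fs
    ramp = np.minimum(1.0, np.linspace(0, 1, t.size) * 10)   # ~10% raised onset
    return np.sin(2 * np.pi * freq * t) * ramp * ramp[::-1]  # onset/offset ramps

gap = np.zeros(int(gap_dur * fs))
cycle = np.concatenate([tone(f_a, tone_dur), gap, tone(f_b, tone_dur), gap])
sequence = np.tile(cycle, 20)           # 20 A-B alternation cycles
print(f"A = {f_a:.1f} Hz, B = {f_b:.1f} Hz, duration = {sequence.size / fs:.1f} s")
```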

  20. Neurogenetics and auditory processing in developmental dyslexia.

    Science.gov (United States)

    Giraud, Anne-Lise; Ramus, Franck

    2013-02-01

    Dyslexia is a polygenic developmental reading disorder characterized by an auditory/phonological deficit. Based on the latest genetic and neurophysiological studies, we propose a tentative model in which phonological deficits could arise from genetic anomalies of the cortical micro-architecture in the temporal lobe. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. CORTICAL RESPONSES TO SALIENT NOCICEPTIVE AND NOT NOCICEPTIVE STIMULI IN VEGETATIVE AND MINIMAL CONSCIOUS STATE

    Directory of Open Access Journals (Sweden)

    MARINA DE TOMMASO

    2015-01-01

    Full Text Available Aims: Questions regarding perception of pain in non-communicating patients and the management of pain continue to raise controversy both at a clinical and ethical level. The aim of this study was to examine the cortical response to salient multimodal visual, acoustic, somatosensory electric non-nociceptive and nociceptive laser stimuli and their correlation with the clinical evaluation. Methods: Five Vegetative State (VS) and 4 Minimally Conscious State (MCS) patients and 11 age- and sex-matched controls were examined. Evoked responses were obtained by 64 scalp electrodes, while delivering auditory, visual, non-noxious electrical and noxious laser stimulation, which were randomly presented every 10 sec. Laser, somatosensory, auditory and visual evoked responses were identified as a negative-positive (N2-P2) vertex complex in the 500 msec post-stimulus time. We used the Nociception Coma Scale-Revised (NCS-R) and Coma Recovery Scale (CRS-R) for clinical evaluation of pain perception and consciousness impairment. Results: The laser evoked potentials (LEPs) were recognizable in all cases. Only one MCS patient showed a reliable cortical response to all the employed stimulus modalities. One VS patient did not present cortical responses to any other stimulus modality. In the remaining participants, auditory, visual and electrical related potentials were inconstantly present. Significant N2 and P2 latency prolongation occurred in both VS and MCS patients. The presence of a reliable cortical response to auditory, visual and electric stimuli was able to correctly classify VS and MCS patients with 90% accuracy. Laser P2 and N2 amplitudes were not correlated with the CRS-R and NCS-R scores, while auditory and electric related potential amplitudes were associated with the motor response to pain and consciousness recovery. Discussion: Pain arousal may be a primary function also in vegetative state patients, while the relevance of other stimulus modalities may indicate the

  2. Evaluation of the Auditory Pathway in Traffic Policemen

    Directory of Open Access Journals (Sweden)

    Vipul Indora

    2017-04-01

    Full Text Available Background: Traffic policemen working at heavy traffic junctions are continuously exposed to high levels of noise and its health consequences. Objective: To assess the auditory pathway in traffic policemen by means of brainstem evoked response audiometry (BERA), mid-latency response (MLR), and slow vertex response (SVR). Methods: In this observational comparative study, BERA, MLR, and SVR were tested in 35 male traffic policemen with field postings of more than 3 years; 35 age-matched men working in our college served as controls. Results: Increases in the latencies of waves I and III of BERA, and in the I-III interpeak latency (IPL), were observed. Compared to controls, the MLR and SVR waves showed no significant changes in the studied policemen. Conclusion: We found that chronic exposure of traffic policemen to noise resulted in delayed conduction in the peripheral part of the auditory pathway, i.e., the auditory nerve up to the level of the superior olivary nucleus; no impairment was observed at the level of the sub-cortical, cortical, or association areas.

  3. Brain activity is related to individual differences in the number of items stored in auditory short-term memory for pitch: evidence from magnetoencephalography.

    Science.gov (United States)

    Grimault, Stephan; Nolden, Sophie; Lefebvre, Christine; Vachon, François; Hyde, Krista; Peretz, Isabelle; Zatorre, Robert; Robitaille, Nicolas; Jolicoeur, Pierre

    2014-07-01

    We used magnetoencephalography (MEG) to examine brain activity related to the maintenance of non-verbal pitch information in auditory short-term memory (ASTM). We focused on brain activity that increased with the number of items effectively held in memory by the participants during the retention interval of an auditory memory task. We used very simple acoustic materials (i.e., pure tones that varied in pitch) that minimized activation from non-ASTM related systems. MEG revealed neural activity in frontal, temporal, and parietal cortices that increased with a greater number of items effectively held in memory by the participants during the maintenance of pitch representations in ASTM. The present results reinforce the functional role of frontal and temporal cortices in the retention of pitch information in ASTM. This is the first MEG study to provide both fine spatial localization and temporal resolution on the neural mechanisms of non-verbal ASTM for pitch in relation to individual differences in the capacity of ASTM. This research contributes to a comprehensive understanding of the mechanisms mediating the representation and maintenance of basic non-verbal auditory features in the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Higher cortical modulation of pain perception in the human brain: Psychological determinant.

    Science.gov (United States)

    Chen, Andrew Cn

    2009-10-01

    Pain perception and its genesis in the human brain have been reviewed recently. In the current article, reports on pain modulation in the human brain were reviewed from the perspective of higher cortical regulation, i.e. top-down effects, particularly as studied through psychological determinants. Pain modulation can be examined by gene therapy, physical modulation, pharmacological modulation, psychological modulation, and pathophysiological modulation. Within psychological modulation, this article examined (a) willed determination, (b) distraction, (c) placebo, (d) hypnosis, (e) meditation, (f) qi-gong, (g) belief, and (h) emotions, respectively, in brain function for pain modulation. For each, the operational definition, cortical processing, neuroimaging, and pain modulation were systematically deliberated. However, not all studies examined brain modulation processing as such; some only demonstrated potential effects on human pain. In our own studies of emotional modulation of human pain, we observed that emotions induced by music melodies or picture viewing could reduce tonic human pain, mainly through potentiation of posterior alpha EEG fields, likely resulting from underlying activity of the precuneus in the regulation of consciousness, including pain perception. In sum, higher brain function has become a leading edge of research across the sciences. Understanding how thinking and feeling are represented in the brain may be the greatest challenge for human intelligence. Applying higher cortical modulation to human pain and suffering can contribute to the progress of humanity and civilization.

  5. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  6. Oscillatory Cortical Network Involved in Auditory Verbal Hallucinations in Schizophrenia

    NARCIS (Netherlands)

    van Lutterveld, R.; Hillebrand, A.; Diederen, K.M.J.; Daalman, K.; Kahn, R.S.; Stam, C.J.; Sommer, I.E.C.

    2012-01-01

    Background: Auditory verbal hallucinations (AVH), a prominent symptom of schizophrenia, are often highly distressing for patients. Better understanding of the pathogenesis of hallucinations could increase therapeutic options. Magnetoencephalography (MEG) provides direct measures of neuronal activity

  7. Hearing loss alters serotonergic modulation of intrinsic excitability in auditory cortex.

    Science.gov (United States)

    Rao, Deepti; Basura, Gregory J; Roche, Joseph; Daniels, Scott; Mancilla, Jaime G; Manis, Paul B

    2010-11-01

    Sensorineural hearing loss during early childhood alters auditory cortical evoked potentials in humans and profoundly changes auditory processing in hearing-impaired animals. Multiple mechanisms underlie the early postnatal establishment of cortical circuits, but one important set of developmental mechanisms relies on the neuromodulator serotonin (5-hydroxytryptamine [5-HT]). On the other hand, early sensory activity may also regulate the establishment of adultlike 5-HT receptor expression and function. We examined the role of 5-HT in auditory cortex by first investigating how 5-HT neurotransmission and 5-HT(2) receptors influence the intrinsic excitability of layer II/III pyramidal neurons in brain slices of primary auditory cortex (A1). A brief application of 5-HT (50 μM) transiently and reversibly decreased firing rates, input resistance, and spike rate adaptation in normal postnatal day 12 (P12) to P21 rats. Compared with sham-operated animals, cochlear ablation increased excitability at P12-P21, but all the effects of 5-HT, except for the decrease in adaptation, were eliminated in both sham-operated and cochlear-ablated rats. At P30-P35, cochlear ablation did not increase intrinsic excitability compared with shams, but it did prevent a pronounced decrease in excitability that appeared 10 min after 5-HT application. We also tested whether the effects on excitability were mediated by 5-HT(2) receptors. In the presence of the 5-HT(2)-receptor antagonist, ketanserin, 5-HT significantly decreased excitability compared with 5-HT or ketanserin alone in both sham-operated and cochlear-ablated P12-P21 rats. However, at P30-P35, ketanserin had no effect in sham-operated and only a modest effect in cochlear-ablated animals. The 5-HT(2)-specific agonist 5-methoxy-N,N-dimethyltryptamine also had no effect at P12-P21. These results suggest that 5-HT likely regulates pyramidal cell excitability via multiple receptor subtypes with opposing effects. These data also show that

  8. K -shell decomposition reveals hierarchical cortical organization of the human brain

    International Nuclear Information System (INIS)

    Lahav, Nir; Ksherim, Baruch; Havlin, Shlomo; Ben-Simon, Eti; Maron-Katz, Adi; Cohen, Reuven

    2016-01-01

    In recent years numerous attempts to understand the human brain were undertaken from a network point of view. A network framework takes into account the relationships between the different parts of the system and makes it possible to examine how global and complex functions might emerge from network topology. Previous work revealed that the human brain features ‘small world’ characteristics and that cortical hubs tend to interconnect among themselves. However, in order to fully understand the topological structure of hubs, and how their profiles reflect the brain’s global functional organization, one needs to go beyond the properties of a specific hub and examine the various structural layers that make up the network. To address this topic further, we applied an analysis known in statistical physics and network theory as k-shell decomposition analysis. The analysis was applied on a human cortical network, derived from MRI/DSI data of six participants. Such analysis enables us to portray a detailed account of cortical connectivity focusing on different neighborhoods of inter-connected layers across the cortex. Our findings reveal that the human cortex is highly connected and efficient, and, unlike the internet network, contains no isolated nodes. The cortical network is comprised of a nucleus alongside shells of increasing connectivity that formed one connected giant component, revealing the human brain’s global functional organization. All these components were further categorized into three hierarchies in accordance with their connectivity profile, with each hierarchy reflecting different functional roles. Such a model may explain an efficient flow of information from the lowest hierarchy to the highest one, with each step enabling increased data integration. At the top, the highest hierarchy (the nucleus) serves as a global interconnected collective and demonstrates high correlation with consciousness related regions, suggesting that the nucleus might serve as a
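
    The k-shell (k-core) decomposition itself is a standard graph-theoretic operation. The toy sketch below applies it to a small built-in networkx graph standing in for a cortical connectivity network; the real analysis was run on MRI/DSI-derived cortical networks, so this is only an illustration of the method.

```python
# k-shell decomposition on a toy graph via networkx; stands in for the
# cortical network derived from MRI/DSI data in the study.
import networkx as nx

G = nx.karate_club_graph()              # small built-in placeholder network

core = nx.core_number(G)                # shell index (core number) per node
k_max = max(core.values())
nucleus = [n for n, k in core.items() if k == k_max]
print(f"maximum shell index k = {k_max}; nucleus nodes: {nucleus}")

shells = {}
for node, k in core.items():            # group nodes into shells
    shells.setdefault(k, []).append(node)
for k in sorted(shells):
    print(f"shell {k}: {len(shells[k])} nodes")
```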

  9. Modulation of Specific Sensory Cortical Areas by Segregated Basal Forebrain Cholinergic Neurons Demonstrated by Neuronal Tracing and Optogenetic Stimulation in Mice.

    Science.gov (United States)

    Chaves-Coira, Irene; Barros-Zulaica, Natali; Rodrigo-Angulo, Margarita; Núñez, Ángel

    2016-01-01

    Neocortical cholinergic activity plays a fundamental role in sensory processing and cognitive functions. Previous results have suggested a refined anatomical and functional topographical organization of basal forebrain (BF) projections that may control cortical sensory processing in a specific manner. We have used retrograde anatomical procedures to demonstrate the existence of specific neuronal groups in the BF involved in the control of specific sensory cortices. Fluoro-Gold (FlGo) and Fast Blue (FB) fluorescent retrograde tracers were deposited into the primary somatosensory (S1) and primary auditory (A1) cortices in mice. Our results revealed that the BF is a heterogeneous area in which neurons projecting to different cortical areas are segregated into different neuronal groups. Most of the neurons located in the horizontal limb of the diagonal band of Broca (HDB) projected to the S1 cortex, indicating that this area is specialized in the sensory processing of tactile stimuli. However, the nucleus basalis magnocellularis (B) nucleus shows a similar number of cells projecting to the S1 as to the A1 cortices. In addition, we analyzed the cholinergic effects on the S1 and A1 cortical sensory responses by optogenetic stimulation of the BF neurons in urethane-anesthetized transgenic mice. We used transgenic mice expressing the light-activated cation channel, channelrhodopsin-2, tagged with a fluorescent protein (ChR2-YFP) under the control of the choline-acetyl transferase promoter (ChAT). Cortical evoked potentials were induced by whisker deflections or by auditory clicks. According to the anatomical results, optogenetic HDB stimulation induced more extensive facilitation of tactile evoked potentials in S1 than auditory evoked potentials in A1, while optogenetic stimulation of the B nucleus facilitated either tactile or auditory evoked potentials equally. Consequently, our results suggest that cholinergic projections to the cortex are organized into segregated

  10. Long-Lasting Crossmodal Cortical Reorganization Triggered by Brief Postnatal Visual Deprivation.

    Science.gov (United States)

    Collignon, Olivier; Dormal, Giulia; de Heering, Adelaide; Lepore, Franco; Lewis, Terri L; Maurer, Daphne

    2015-09-21

    Animal and human studies have demonstrated that transient visual deprivation early in life, even for a very short period, permanently alters the response properties of neurons in the visual cortex and leads to corresponding behavioral visual deficits. While it is acknowledged that early-onset and longstanding blindness leads the occipital cortex to respond to non-visual stimulation, it remains unknown whether a short and transient period of postnatal visual deprivation is sufficient to trigger crossmodal reorganization that persists after years of visual experience. In the present study, we characterized brain responses to auditory stimuli in 11 adults who had been deprived of all patterned vision at birth by congenital cataracts in both eyes until they were treated at 9 to 238 days of age. When compared to controls with typical visual experience, the cataract-reversal group showed enhanced auditory-driven activity in focal visual regions. A combination of dynamic causal modeling with Bayesian model selection indicated that this auditory-driven activity in the occipital cortex was better explained by direct cortico-cortical connections with the primary auditory cortex than by subcortical connections. Thus, a short and transient period of visual deprivation early in life leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Deep brain stimulation of the amygdala alleviates fear conditioning-induced alterations in synaptic plasticity in the cortical-amygdala pathway and fear memory.

    Science.gov (United States)

    Sui, Li; Huang, SiJia; Peng, BinBin; Ren, Jie; Tian, FuYing; Wang, Yan

    2014-07-01

    Deep brain stimulation (DBS) of the amygdala has been demonstrated to modulate hyperactivity of the amygdala, which is responsible for the symptoms of post-traumatic stress disorder (PTSD), and thus might be used for the treatment of PTSD. However, the underlying mechanism of DBS of the amygdala in the modulation of the amygdala is unclear. The present study investigated the effects of DBS of the amygdala on synaptic transmission and synaptic plasticity at cortical inputs to the amygdala, which are critical for the formation and storage of auditory fear memories, and on fear memory. The results demonstrated that auditory fear conditioning increased single-pulse-evoked field excitatory postsynaptic potentials in the cortical-amygdala pathway. Furthermore, auditory fear conditioning decreased the induction of paired-pulse facilitation and long-term potentiation, two neurophysiological models for studying short-term and long-term synaptic plasticity, respectively, in the cortical-amygdala pathway. In addition, all these auditory fear conditioning-induced changes could be reversed by DBS of the amygdala. DBS of the amygdala also rescued auditory fear conditioning-induced enhancement of long-term retention of fear memory. These findings suggest that alleviation of fear conditioning-induced alterations in synaptic plasticity in the cortical-amygdala pathway, and of fear memory, may underlie the neuromodulatory role of DBS in activities of the amygdala.

  12. Latency of modality-specific reactivation of auditory and visual information during episodic memory retrieval.

    Science.gov (United States)

    Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao

    2015-04-15

    This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had been presented during the encoding phase and, if so, with which type of information it had been paired. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole analysis of MEG data indicated that higher equivalent current dipole amplitudes occurred in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.

  13. The role of the auditory brainstem in processing musically-relevant pitch

    Directory of Open Access Journals (Sweden)

    Gavin M. Bidelman

    2013-05-01

    Full Text Available Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  14. The auditory cortex hosts network nodes influential for emotion processing: An fMRI study on music-evoked fear and joy.

    Science.gov (United States)

    Koelsch, Stefan; Skouras, Stavros; Lohmann, Gabriele

    2018-01-01

    Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with "small-world" properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex-and sensory systems in general-in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions.
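
    Eigenvector centrality, the measure used here to identify influential nodes, can be computed as the leading eigenvector of a connectivity matrix. The sketch below does this for a random, symmetric placeholder matrix; it illustrates the computation only, not the study's fMRI connectivity estimates.

```python
# Eigenvector centrality from a symmetric placeholder connectivity matrix.
import numpy as np

rng = np.random.default_rng(42)
n = 8                                          # number of regions (placeholder)
w = rng.random((n, n))
conn = (w + w.T) / 2                           # symmetric "connectivity" matrix
np.fill_diagonal(conn, 0)

vals, vecs = np.linalg.eigh(conn)              # eigendecomposition (symmetric case)
centrality = np.abs(vecs[:, np.argmax(vals)])  # leading eigenvector
centrality /= centrality.sum()

for i, c in enumerate(centrality):
    print(f"region {i}: eigenvector centrality {c:.3f}")
```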

  15. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  16. Human oocyte cryopreservation and the fate of cortical granules.

    Science.gov (United States)

    Ghetler, Yehudith; Skutelsky, Ehud; Ben Nun, Isaac; Ben Dor, Liah; Amihai, Dina; Shalgi, Ruth

    2006-07-01

    To examine the effect of the commonly used oocyte cryopreservation protocol on the cortical granules (CGs) of human immature germinal vesicle (GV) and mature metaphase II (MII) oocytes. Laboratory study. IVF unit. Unfertilized, intracytoplasmic sperm injected (ICSI) oocytes, and immature oocytes were cryopreserved using a slow freezing-rapid thawing program with 1,2-propanediol (PROH) as a cryoprotectant. Cortical granule exocytosis (CGE) was assessed by either confocal microscopy or transmission electron microscopy (TEM). The survival rates of frozen-thawed oocytes (mature and immature) were significantly lower compared with zygotes. Both mature and immature oocytes exhibited increased fluorescence after cryopreservation, indicating the occurrence of CGE. Mere exposure of oocytes to cryoprotectants induced CGE of 70% the value of control zygotes. The TEM revealed a drastic reduction in the amount of CGs at the cortex of frozen-thawed GV and MII oocytes, as well as appearance of vesicles in the ooplasm. The commonly used PROH freezing protocol for human oocytes resulted in extensive CGE. This finding explains why ICSI is needed to achieve fertilization of frozen-thawed human oocytes.

  17. Potential use of MEG to understand abnormalities in auditory function in clinical populations

    Directory of Open Access Journals (Sweden)

    Eric Larson

    2014-03-01

    Full Text Available Magnetoencephalography (MEG) provides a direct, non-invasive view of neural activity with millisecond temporal precision. Recent developments in MEG analysis allow for improved source localization and mapping of connectivity between brain regions, expanding the possibilities for using MEG as a diagnostic tool. In this paper, we first describe inverse imaging methods (e.g., minimum-norm estimation) and functional connectivity measures, and how they can provide insights into cortical processing. We then offer a perspective on how these techniques could be used to understand and evaluate auditory pathologies that often manifest during development. Here we focus specifically on how MEG inverse imaging, by providing anatomically-based interpretation of neural activity, may allow us to test which aspects of cortical processing play a role in (central) auditory processing disorder ([C]APD). Appropriately combining auditory paradigms with MEG analysis could eventually prove useful for a hypothesis-driven understanding and diagnosis of (C)APD or other disorders, as well as the evaluation of the effectiveness of intervention strategies.
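
    The minimum-norm estimation mentioned above solves an underdetermined inverse problem by regularized least squares. As a hedged, toy illustration (random lead field and simulated data, not a real MEG pipeline), the following numpy sketch forms the L2 minimum-norm inverse operator and applies it to simulated sensor data.

```python
# Toy L2 minimum-norm estimate: j_hat = R G^T (G R G^T + lam2 * C)^-1 y
# with a random lead field G; all quantities are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 30, 200
G = rng.standard_normal((n_sensors, n_sources))   # lead field (placeholder)
R = np.eye(n_sources)                             # source covariance prior
C = np.eye(n_sensors)                             # noise covariance
lam2 = 1.0 / 9.0                                  # regularization (~1/SNR^2)

j_true = np.zeros(n_sources)                      # one simulated active source
j_true[57] = 1.0
y = G @ j_true + 0.05 * rng.standard_normal(n_sensors)

M = R @ G.T @ np.linalg.inv(G @ R @ G.T + lam2 * C)   # inverse operator
j_hat = M @ y                                          # source estimate
print("strongest estimated source index:", int(np.argmax(np.abs(j_hat))))
```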

  18. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners ... In listeners with SNHL, it is likely that hearing-impaired (HI) listeners rely on the enhanced envelope cues to retrieve the pitch of unresolved harmonics. Hence, the relative importance of pitch cues may be altered in HI listeners, whereby envelope cues may be used instead of TFS cues to obtain a similar performance in pitch ...

  19. Auditory pathways and processes: implications for neuropsychological assessment and diagnosis of children and adolescents.

    Science.gov (United States)

    Bailey, Teresa

    2010-01-01

    Neuroscience research on auditory processing pathways and their behavioral and electrophysiological correlates has taken place largely outside the field of clinical neuropsychology. Deviations and disruptions in auditory pathways in children and adolescents result in a well-documented range of developmental and learning impairments frequently referred for neuropsychological evaluation. This review is an introduction to research from the last decade. It describes auditory cortical and subcortical pathways and processes and relates recent research to specific conditions and questions neuropsychologists commonly encounter. Auditory processing disorders' comorbidity with ADHD and language-based disorders and research addressing the challenges of assessment and differential diagnosis are discussed.

  20. Pitch perception prior to cortical maturation

    Science.gov (United States)

    Lau, Bonnie K.

    Pitch perception plays an important role in many complex auditory tasks including speech perception, music perception, and sound source segregation. Because of the protracted and extensive development of the human auditory cortex, pitch perception might be expected to mature, at least over the first few months of life. This dissertation investigates complex pitch perception in 3-month-olds, 7-month-olds and adults -- time points when the organization of the auditory pathway is distinctly different. Using an observer-based psychophysical procedure, a series of four studies were conducted to determine whether infants (1) discriminate the pitch of harmonic complex tones, (2) discriminate the pitch of unresolved harmonics, (3) discriminate the pitch of missing fundamental melodies, and (4) have comparable sensitivity to pitch and spectral changes as adult listeners. The stimuli used in these studies were harmonic complex tones, with energy missing at the fundamental frequency. Infants at both three and seven months of age discriminated the pitch of missing fundamental complexes composed of resolved and unresolved harmonics as well as missing fundamental melodies, demonstrating perception of complex pitch by three months of age. More surprisingly, infants in both age groups had lower pitch and spectral discrimination thresholds than adult listeners. Furthermore, no differences in performance on any of the tasks presented were observed between infants at three and seven months of age. These results suggest that subcortical processing is not only sufficient to support pitch perception prior to cortical maturation, but provides adult-like sensitivity to pitch by three months.
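
    A missing-fundamental complex of the kind used in these experiments is easy to construct: sum several harmonics of a fundamental f0 while omitting the f0 component itself. The sketch below does exactly that; the particular f0, harmonic numbers and duration are illustrative placeholders rather than the dissertation's stimulus parameters.

```python
# Harmonic complex with the fundamental (and 2nd harmonic) removed.
import numpy as np

fs = 44100
dur = 0.5
f0 = 200.0                       # fundamental frequency (Hz), not presented
harmonics = range(3, 9)          # harmonics 3-8 only (placeholder choice)

t = np.arange(int(dur * fs)) / fs
complex_tone = sum(np.sin(2 * np.pi * h * f0 * t) for h in harmonics)
complex_tone /= np.max(np.abs(complex_tone))   # normalize amplitude

# The waveform repeats at the period of f0 (5 ms here) even though no energy
# is present at f0; listeners typically report a pitch at the missing fundamental.
print(f"components at {[h * f0 for h in harmonics]} Hz; expected pitch ~{f0} Hz")
```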

  1. Memory Reactivation during Rapid Eye Movement Sleep Promotes Its Generalization and Integration in Cortical Stores

    Science.gov (United States)

    Sterpenich, Virginie; Schmidt, Christina; Albouy, Geneviève; Matarazzo, Luca; Vanhaudenhuyse, Audrey; Boveroux, Pierre; Degueldre, Christian; Leclercq, Yves; Balteau, Evelyne; Collette, Fabienne; Luxen, André; Phillips, Christophe; Maquet, Pierre

    2014-01-01

    Study Objectives: Memory reactivation appears to be a fundamental process in memory consolidation. In this study we tested the influence of memory reactivation during rapid eye movement (REM) sleep on memory performance and brain responses at retrieval in healthy human participants. Participants: Fifty-six healthy subjects (28 women and 28 men, age [mean ± standard deviation]: 21.6 ± 2.2 y) participated in this functional magnetic resonance imaging (fMRI) study. Methods and Results: Auditory cues were associated with pictures of faces during their encoding. These memory cues delivered during REM sleep enhanced subsequent accurate recollections but also false recognitions. These results suggest that reactivated memories interacted with semantically related representations, and induced new creative associations, which subsequently reduced the distinction between new and previously encoded exemplars. Cues had no effect if presented during stage 2 sleep, or if they were not associated with faces during encoding. Functional magnetic resonance imaging revealed that following exposure to conditioned cues during REM sleep, responses to faces during retrieval were enhanced both in a visual area and in a cortical region of multisensory (auditory-visual) convergence. Conclusions: These results show that reactivating memories during REM sleep enhances cortical responses during retrieval, suggesting the integration of recent memories within cortical circuits, favoring the generalization and schematization of the information. Citation: Sterpenich V, Schmidt C, Albouy G, Matarazzo L, Vanhaudenhuyse A, Boveroux P, Degueldre C, Leclercq Y, Balteau E, Collette F, Luxen A, Phillips C, Maquet P. Memory reactivation during rapid eye movement sleep promotes its generalization and integration in cortical stores. SLEEP 2014;37(6):1061-1075. PMID:24882901

  2. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from the rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from the representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal Modulation Transfer Functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. Therefore, the model is used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional
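
    To make the reported MTF shape concrete, the sketch below builds a separable low-pass spectrotemporal weighting whose half-power points sit near 16 Hz and 2 cycles/octave. The Gaussian profile is an assumed, illustrative choice; the model's actual filters are defined differently.

```python
# Separable low-pass spectrotemporal weighting with half-power points near
# 16 Hz and 2 cycles/octave; the Gaussian shape is an illustrative assumption.
import numpy as np

rate = np.linspace(0, 64, 65)           # temporal modulation rate (Hz)
scale = np.linspace(0, 8, 81)           # spectral modulation (cycles/octave)

def lowpass(x, half_power_cutoff):
    sigma = half_power_cutoff / np.sqrt(2 * np.log(2))   # gain falls to 0.5 at cutoff
    return np.exp(-0.5 * (x / sigma) ** 2)

mtf = np.outer(lowpass(rate, 16.0), lowpass(scale, 2.0))
i, j = np.argmin(np.abs(rate - 16)), np.argmin(np.abs(scale - 2))
print(f"weight at (16 Hz, 2 cyc/oct): {mtf[i, j]:.2f}")   # 0.5 * 0.5 = 0.25
```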

  3. Cortical oscillations and entrainment in speech processing during working memory load

    DEFF Research Database (Denmark)

    Hjortkjær, Jens; Märcher-Rørsted, Jonatan; Fuglsang, Søren A

    2018-01-01

    Neuronal oscillations are thought to play an important role in working memory (WM) and speech processing. Listening to speech in real-life situations is often cognitively demanding, but it is unknown whether WM load influences how auditory cortical activity synchronizes to speech features. Here, we developed an auditory n-back paradigm to investigate cortical entrainment to speech envelope fluctuations under different degrees of WM load. We measured the electroencephalogram, pupil dilations and behavioural performance from 22 subjects listening to continuous speech with an embedded n-back task. The speech stimuli consisted of long spoken number sequences created to match natural speech in terms of sentence intonation, syllabic rate and phonetic content. To burden different WM functions during speech processing, listeners performed an n-back task on the speech sequences in different levels ...
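
    Cortical entrainment to the speech envelope is often quantified by correlating the EEG with the stimulus envelope across time lags. The sketch below shows that generic logic on simulated data (Hilbert envelope of a toy modulated carrier, cross-correlation with a noisy lagged copy); it is not the paper's analysis pipeline.

```python
# Generic envelope-entrainment readout on simulated data.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 128.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)

# Toy "speech": a 40 Hz carrier modulated by slow (<8 Hz) fluctuations
b, a = butter(2, 8, btype="low", fs=fs)
slow = filtfilt(b, a, rng.standard_normal(t.size))
slow = (slow - slow.min()) / (slow.max() - slow.min())
speech = slow * np.sin(2 * np.pi * 40 * t)

envelope = np.abs(hilbert(speech))            # Hilbert envelope of the stimulus

lag = int(0.1 * fs)                           # toy EEG follows envelope at ~100 ms
eeg = np.roll(envelope, lag) + 0.5 * rng.standard_normal(t.size)

lags = np.arange(0, int(0.4 * fs))            # candidate lags up to 400 ms
r = [np.corrcoef(envelope[: t.size - L], eeg[L:])[0, 1] for L in lags]
best = lags[int(np.argmax(r))]
print(f"peak envelope-EEG correlation r = {max(r):.2f} at lag {best / fs * 1000:.0f} ms")
```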

  4. Automatic phoneme category selectivity in the dorsal auditory stream.

    Science.gov (United States)

    Chevillet, Mark A; Jiang, Xiong; Rauschecker, Josef P; Riesenhuber, Maximilian

    2013-03-20

    Debates about motor theories of speech perception have recently been reignited by a burst of reports implicating premotor cortex (PMC) in speech perception. Often, however, these debates conflate perceptual and decision processes. Evidence that PMC activity correlates with task difficulty and subject performance suggests that PMC might be recruited, in certain cases, to facilitate category judgments about speech sounds (rather than speech perception, which involves decoding of sounds). However, it remains unclear whether PMC does, indeed, exhibit neural selectivity that is relevant for speech decisions. Further, it is unknown whether PMC activity in such cases reflects input via the dorsal or ventral auditory pathway, and whether PMC processing of speech is automatic or task-dependent. In a novel modified categorization paradigm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted their attention from phoneme category using a challenging dichotic listening task. Using fMRI rapid adaptation to probe neural selectivity, we observed acoustic-phonetic selectivity in left anterior and left posterior auditory cortical regions. Conversely, we observed phoneme-category selectivity in left PMC that correlated with explicit phoneme-categorization performance measured after scanning, suggesting that PMC recruitment can account for performance on phoneme-categorization tasks. Structural equation modeling revealed connectivity from posterior, but not anterior, auditory cortex to PMC, suggesting a dorsal route for auditory input to PMC. Our results provide evidence for an account of speech processing in which the dorsal stream mediates automatic sensorimotor integration of speech and may be recruited to support speech decision tasks.

  5. Cell-specific gain modulation by synaptically released zinc in cortical circuits of audition.

    Science.gov (United States)

    Anderson, Charles T; Kumar, Manoj; Xiong, Shanshan; Tzounopoulos, Thanos

    2017-09-09

    In many excitatory synapses, mobile zinc is found within glutamatergic vesicles and is coreleased with glutamate. Ex vivo studies established that synaptically released (synaptic) zinc inhibits excitatory neurotransmission at lower frequencies of synaptic activity but enhances steady state synaptic responses during higher frequencies of activity. However, it remains unknown how synaptic zinc affects neuronal processing in vivo. Here, we imaged the sound-evoked neuronal activity of the primary auditory cortex in awake mice. We discovered that synaptic zinc enhanced the gain of sound-evoked responses in CaMKII-expressing principal neurons, but it reduced the gain of parvalbumin- and somatostatin-expressing interneurons. This modulation was sound intensity-dependent and, in part, NMDA receptor-independent. By establishing a previously unknown link between synaptic zinc and gain control of auditory cortical processing, our findings advance understanding about cortical synaptic mechanisms and create a new framework for approaching and interpreting the role of the auditory cortex in sound processing.

  6. Recording human cortical population spikes non-invasively--An EEG tutorial.

    Science.gov (United States)

    Waterstraat, Gunnar; Fedele, Tommaso; Burghoff, Martin; Scheer, Hans-Jürgen; Curio, Gabriel

    2015-07-30

    Non-invasively recorded somatosensory high-frequency oscillations (sHFOs) evoked by electric nerve stimulation are markers of human cortical population spikes. Previously, their analysis was based on massive averaging of EEG responses. Advanced neurotechnology and optimized off-line analysis can enhance the signal-to-noise ratio of sHFOs, eventually enabling single-trial analysis. The rationale for developing dedicated low-noise EEG technology for sHFOs is unfolded. Detailed recording procedures and tailored analysis principles are explained step-by-step. Source codes in Matlab and Python are provided as supplementary material online. Combining synergistic hardware and analysis improvements, evoked sHFOs at around 600 Hz ('σ-bursts') can be studied in single-trials. Additionally, optimized spatial filters increase the signal-to-noise ratio of components at about 1 kHz ('κ-bursts') enabling their detection in non-invasive surface EEG. sHFOs offer a unique possibility to record evoked human cortical population spikes non-invasively. The experimental approaches and algorithms presented here enable also non-specialized EEG laboratories to combine measurements of conventional low-frequency EEG with the analysis of concomitant cortical population spike responses. Copyright © 2014 Elsevier B.V. All rights reserved.
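
    Isolating sHFOs comes down to band-pass filtering the wide-band EEG around 600 Hz and inspecting single trials. The sketch below applies such a filter to a simulated trial; the cutoff frequencies, amplitudes and simulated waveform are assumptions for illustration, and real recordings depend on the dedicated low-noise hardware described in the tutorial.

```python
# Band-pass a simulated wide-band trial around 600 Hz (sigma-burst band).
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 5000.0                                   # wide-band sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)                 # one 100 ms trial
rng = np.random.default_rng(7)

slow = 5e-6 * np.exp(-((t - 0.02) / 0.005) ** 2)                      # N20-like wave
burst = 0.2e-6 * np.sin(2 * np.pi * 600 * t) * np.exp(-((t - 0.02) / 0.003) ** 2)
trial = slow + burst + 0.05e-6 * rng.standard_normal(t.size)          # simulated trial

sos = butter(4, [450, 750], btype="band", fs=fs, output="sos")        # assumed band edges
hfo = sosfiltfilt(sos, trial)                                         # sigma-band component
peak = np.argmax(np.abs(hfo))
print(f"peak sigma-band amplitude {np.abs(hfo[peak]) * 1e6:.3f} uV at {t[peak] * 1e3:.1f} ms")
```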

  7. Modulation-Frequency-Specific Adaptation in Awake Auditory Cortex

    Science.gov (United States)

    Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A.; Schreiner, Christoph E.

    2015-01-01

    Amplitude modulations are fundamental features of natural signals, including human speech and nonhuman primate vocalizations. Because natural signals frequently occur in the context of other competing signals, we used a forward-masking paradigm to investigate how the modulation context of a prior signal affects cortical responses to subsequent modulated sounds. Psychophysical “modulation masking,” in which the presentation of a modulated “masker” signal elevates the threshold for detecting the modulation of a subsequent stimulus, has been interpreted as evidence of a central modulation filterbank and modeled accordingly. Whether cortical modulation tuning is compatible with such models remains unknown. By recording responses to pairs of sinusoidally amplitude modulated (SAM) tones in the auditory cortex of awake squirrel monkeys, we show that the prior presentation of the SAM masker elicited persistent and tuned suppression of the firing rate to subsequent SAM signals. Population averages of these effects are compatible with adaptation in broadly tuned modulation channels. In contrast, modulation context had little effect on the synchrony of the cortical representation of the second SAM stimuli, and the tuning of such effects did not match that observed for firing rate. Our results suggest that, although the temporal representation of modulated signals is more robust to changes in stimulus context than representations based on average firing rate, this representation is not fully exploited, and psychophysical modulation masking more closely mirrors physiological rate suppression. Rate tuning for a given stimulus feature in a given neuron's signal pathway appears sufficient to engender context-sensitive cortical adaptation. PMID:25878263
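
    The stimuli described here are sinusoidally amplitude-modulated (SAM) tones arranged as masker-probe pairs. The sketch below generates such a pair; the carrier frequency, modulation rates, durations and gap are placeholder values rather than the study's exact parameters.

```python
# Masker-probe pair of SAM tones separated by a silent gap.
import numpy as np

fs = 48000
carrier = 4000.0                        # carrier frequency (Hz), placeholder

def sam_tone(fm, dur, depth=1.0):
    t = np.arange(int(dur * fs)) / fs
    modulator = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return modulator * np.sin(2 * np.pi * carrier * t)

masker = sam_tone(fm=16.0, dur=0.5)     # masker modulated at 16 Hz
gap = np.zeros(int(0.05 * fs))          # 50 ms silent interval
probe = sam_tone(fm=16.0, dur=0.5)      # probe at the same (or another) rate

stimulus = np.concatenate([masker, gap, probe])
stimulus /= np.max(np.abs(stimulus))    # normalize before presentation
print(f"total stimulus duration: {stimulus.size / fs:.2f} s")
```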

  8. Auditory-vocal mirroring in songbirds.

    Science.gov (United States)

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  9. Attentional modulation of auditory steady-state responses.

    Science.gov (United States)

    Mahajan, Yatin; Davis, Chris; Kim, Jeesun

    2014-01-01

    Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of the ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of the contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of the inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.
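
    Frequency tagging works because a steady-state response follows the modulation rate of its eliciting stream, so attention effects can be read out as spectral amplitude at each tag frequency. The sketch below illustrates that readout on a simulated response containing the 16 and 23.5 Hz tags; amplitudes and noise level are arbitrary placeholders.

```python
# Read out steady-state amplitude at each tag frequency from a simulated response.
import numpy as np

fs = 1000.0
dur = 10.0
t = np.arange(int(dur * fs)) / fs
rng = np.random.default_rng(11)

tags = (16.0, 23.5)                     # modulation rates of the two streams (Hz)
# Toy response: sinusoids at the tag rates in noise, attended 16 Hz stream enhanced
response = (0.8 * np.sin(2 * np.pi * tags[0] * t)
            + 0.5 * np.sin(2 * np.pi * tags[1] * t)
            + 2.0 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in tags:
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"ASSR amplitude at the {f:4.1f} Hz tag: {amp:.3f} (arbitrary units)")
```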

  10. Mapping human brain networks with cortico-cortical evoked potentials

    Science.gov (United States)

    Keller, Corey J.; Honey, Christopher J.; Mégevand, Pierre; Entz, Laszlo; Ulbert, Istvan; Mehta, Ashesh D.

    2014-01-01

    The cerebral cortex forms a sheet of neurons organized into a network of interconnected modules that is highly expanded in humans and presumably enables our most refined sensory and cognitive abilities. The links of this network form a fundamental aspect of its organization, and a great deal of research is focusing on understanding how information flows within and between different regions. However, an often-overlooked element of this connectivity regards a causal, hierarchical structure of regions, whereby certain nodes of the cortical network may exert greater influence over the others. While this is difficult to ascertain non-invasively, patients undergoing invasive electrode monitoring for epilepsy provide a unique window into this aspect of cortical organization. In this review, we highlight the potential for cortico-cortical evoked potential (CCEP) mapping to directly measure neuronal propagation across large-scale brain networks with spatio-temporal resolution that is superior to traditional neuroimaging methods. We first introduce effective connectivity and discuss the mechanisms underlying CCEP generation. Next, we highlight how CCEP mapping has begun to provide insight into the neural basis of non-invasive imaging signals. Finally, we present a novel approach to perturbing and measuring brain network function during cognitive processing. The direct measurement of CCEPs in response to electrical stimulation represents a potentially powerful clinical and basic science tool for probing the large-scale networks of the human cerebral cortex. PMID:25180306

  11. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    Goycoolea, Marcos; Mena, Ismael; Neubauer, Sonia

    2004-01-01

    Objectives. 1. To determine which areas of the cerebral cortex are activated by stimulating the left ear with pure tones, and what type of stimulation occurs (e.g. excitatory or inhibitory) in these different areas. 2. To use this information as an initial step in developing a normal functional data base for future studies. 3. To try to determine whether there is a biological substrate to the process of recalling previous auditory perceptions and, if possible, suggest a locus for auditory memory. Method. Brain perfusion single photon emission computerized tomography (SPECT) evaluation was conducted: 1-2) using auditory stimulation with pure tones in 4 volunteers with normal hearing; 3) in a patient with bilateral profound hearing loss who had auditory perception of previous musical experiences, injected with Tc99m HMPAO while she was having the sensation of hearing a well-known melody. Results. Both in the patient with auditory hallucinations and the normal controls -stimulated with pure tones- there was a statistically significant increase in perfusion in Brodmann's area 39, more intense on the right side (right to left p < 0.05). With a lesser intensity there was activation in the adjacent area 40, and there was intense activation also in the executive frontal cortex areas 6, 8, 9, and 10 of Brodmann. There was also activation of area 7 of Brodmann, an audio-visual association area, more marked on the right side in the patient and the normal stimulated controls. In the subcortical structures there was also marked activation in the patient with hallucinations in both lentiform nuclei, thalamus and caudate nuclei, also more intense in the right hemisphere, 5, 4.7 and 4.2 S.D. above the mean respectively, and 5, 3.3, and 3 S.D. above the normal mean in the left hemisphere respectively. Similar findings were observed in normal controls. Conclusions. After auditory stimulation with pure tones in the left ear of normal female volunteers, there is bilateral activation of area 39

  12. The Adverse Effects of Heavy Metals with and without Noise Exposure on the Human Peripheral and Central Auditory System: A Literature Review

    Directory of Open Access Journals (Sweden)

    Marie-Josée Castellanos

    2016-12-01

    Full Text Available Exposure to some chemicals in the workplace can lead to occupational chemical-induced hearing loss. Attention has mainly focused on the adverse auditory effects of solvents. However, other chemicals such as heavy metals have also been identified as ototoxic agents. The aim of this work was to review the current scientific knowledge about the adverse auditory effects of heavy metal exposure, with and without co-exposure to noise, in humans. PubMed and Medline were accessed to find suitable articles. A total of 49 articles met the inclusion criteria. Results from the review showed that no evidence is available regarding the ototoxic effects of manganese in humans. Contradictory results have been found for arsenic, lead and mercury, as well as for the possible interaction between heavy metals and noise. All studies included in this review found that exposure to cadmium and to mixtures of heavy metals induces auditory dysfunction. Most of the studies investigating the adverse auditory effects of heavy metals in humans have examined populations exposed to lead. Some of these studies suggest peripheral and central auditory dysfunction induced by lead exposure. It is concluded that further evidence from human studies about the adverse auditory effects of heavy metal exposure is still required. In the meantime, audiologists and other hearing health care professionals should be aware of the possible auditory effects of heavy metals.

  13. Nonlinear dynamics of cortical responses to color in the human cVEP.

    Science.gov (United States)

    Nunez, Valerie; Shapley, Robert M; Gordon, James

    2017-09-01

    The main finding of this paper is that the human visual cortex responds in a very nonlinear manner to the color contrast of pure color patterns. We examined human cortical responses to color checkerboard patterns at many color contrasts, measuring the chromatic visual evoked potential (cVEP) with a dense electrode array. Cortical topography of the cVEPs showed that they were localized near the posterior electrode at position Oz, indicating that the primary cortex (V1) was the major source of responses. The choice of fine spatial patterns as stimuli caused the cVEP response to be driven by double-opponent neurons in V1. The cVEP waveform revealed nonlinear color signal processing in the V1 cortex. The cVEP time-to-peak decreased and the waveform's shape was markedly narrower with increasing cone contrast. Comparison of the linear dynamics of retinal and lateral geniculate nucleus responses with the nonlinear dynamics of the cortical cVEP indicated that the nonlinear dynamics originated in the V1 cortex. The nature of the nonlinearity is a kind of automatic gain control that adjusts cortical dynamics to be faster when color contrast is greater.

  14. Cortical Network Dynamics of Perceptual Decision-Making in the Human Brain

    Directory of Open Access Journals (Sweden)

    Markus eSiegel

    2011-02-01

    Full Text Available Goal-directed behavior requires the flexible transformation of sensory evidence about our environment into motor actions. Studies of perceptual decision-making have shown that this transformation is distributed across several widely separated brain regions. Yet, little is known about how decision-making emerges from the dynamic interactions among these regions. Here, we review a series of studies, in which we characterized the cortical network interactions underlying a perceptual decision process in the human brain. We used magnetoencephalography (MEG) to measure the large-scale cortical population dynamics underlying each of the sub-processes involved in this decision: the encoding of sensory evidence and action plan, the mapping between the two, and the attentional selection of task-relevant evidence. We found that these sub-processes are mediated by neuronal oscillations within specific frequency ranges. Localized gamma-band oscillations in sensory and motor cortices reflect the encoding of the sensory evidence and motor plan. Large-scale oscillations across widespread cortical networks mediate the integrative processes connecting these local networks: Gamma- and beta-band oscillations across frontal, parietal and sensory cortices serve the selection of relevant sensory evidence and its flexible mapping onto action plans. In sum, our results suggest that perceptual decisions are mediated by oscillatory interactions within overlapping local and large-scale cortical networks.

  15. Human cortical areas involved in perception of surface glossiness.

    Science.gov (United States)

    Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi

    2014-09-01

    Glossiness is the visual appearance of an object's surface as defined by its surface reflectance properties. Despite its ecological importance, little is known about the neural substrates underlying its perception. In this study, we performed the first human neuroimaging experiments that directly investigated where the processing of glossiness resides in the visual cortex. First, we investigated the cortical regions that were more activated by observing high glossiness compared with low glossiness, where the effects of simple luminance and luminance contrast were dissociated by controlling the illumination conditions (Experiment 1). As cortical regions that may be related to the processing of glossiness, V2, V3, hV4, VO-1, VO-2, collateral sulcus (CoS), LO-1, and V3A/B were identified, which also showed significant correlation with the perceived level of glossiness. This result is consistent with the recent monkey studies that identified selective neural response to glossiness in the ventral visual pathway, except for V3A/B in the dorsal visual pathway, whose involvement in the processing of glossiness could be specific to the human visual system. Second, we investigated the cortical regions that were modulated by selective attention to glossiness (Experiment 2). The visual areas that showed higher activation to attention to glossiness than that to either form or orientation were identified as right hV4, right VO-2, and right V3A/B, which were commonly identified in Experiment 1. The results indicate that these commonly identified visual areas in the human visual cortex may play important roles in glossiness perception. Copyright © 2014. Published by Elsevier Inc.

  16. Cortical responses to salient nociceptive and not nociceptive stimuli in vegetative and minimal conscious state

    Science.gov (United States)

    de Tommaso, Marina; Navarro, Jorge; Lanzillotti, Crocifissa; Ricci, Katia; Buonocunto, Francesca; Livrea, Paolo; Lancioni, Giulio E.

    2015-01-01

    Aims: Questions regarding perception of pain in non-communicating patients and the management of pain continue to raise controversy both at a clinical and ethical level. The aim of this study was to examine the cortical response to salient visual, acoustic, somatosensory electric non-nociceptive and nociceptive laser stimuli and their correlation with the clinical evaluation. Methods: Five Vegetative State (VS) and 4 Minimally Conscious State (MCS) patients and 11 age- and sex-matched controls were examined. Evoked responses were recorded from 64 scalp electrodes while delivering auditory, visual, non-noxious electrical and noxious laser stimuli, which were randomly presented every 10 s. Laser, somatosensory, auditory and visual evoked responses were identified as a negative-positive (N2-P2) vertex complex in the 500 ms post-stimulus time. We used the Nociception Coma Scale-Revised (NCS-R) and the Coma Recovery Scale (CRS-R) for clinical evaluation of pain perception and consciousness impairment. Results: The laser evoked potentials (LEPs) were recognizable in all cases. Only one MCS patient showed a reliable cortical response to all the employed stimulus modalities. One VS patient did not present cortical responses to any other stimulus modality. In the remaining participants, auditory, visual and electrical related potentials were inconstantly present. Significant N2 and P2 latency prolongation occurred in both VS and MCS patients. The presence of a reliable cortical response to auditory, visual and electric stimuli was able to correctly classify VS and MCS patients with 90% accuracy. Laser P2 and N2 amplitudes were not correlated with the CRS-R and NCS-R scores, while auditory and electric related potential amplitudes were associated with the motor response to pain and consciousness recovery. Discussion: Pain arousal may be a primary function also in vegetative state patients, while the relevance of other stimulus modalities may indicate the degree of cognitive and motor

  17. An evoked auditory response fMRI study of the effects of rTMS on putative AVH pathways in healthy volunteers.

    LENUS (Irish Health Repository)

    Tracy, D K

    2010-01-01

    Auditory verbal hallucinations (AVH) are the most prevalent symptom in schizophrenia. They are associated with increased activation within the temporoparietal cortices and are refractory to pharmacological and psychological treatment in approximately 25% of patients. Low frequency repetitive transcranial magnetic stimulation (rTMS) over the temporoparietal cortex has been demonstrated to be effective in reducing AVH in some patients, although results have varied. The cortical mechanism by which rTMS exerts its effects remains unknown, although data from the motor system are suggestive of a local cortical inhibitory effect. We explored neuroimaging differences in healthy volunteers between application of a clinically utilized rTMS protocol and a sham rTMS equivalent when undertaking a prosodic auditory task.

  18. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention.

    Science.gov (United States)

    Forte, Antonio Elia; Etard, Octave; Reichenbach, Tobias

    2017-10-10

    Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.
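
    The record above describes a mathematical method for extracting a brainstem response to non-repeating, running speech. As an illustration only (not the authors' published procedure), a response of this kind can be approximated by band-passing the EEG around the speech fundamental frequency and cross-correlating it with a fundamental waveform derived from the stimulus; the variable names, filter settings, and simulated data below are all assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt, correlate

        def bandpass(x, lo, hi, fs, order=4):
            # Zero-phase Butterworth band-pass filter.
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def brainstem_xcorr(eeg, f0_waveform, fs, max_lag_ms=20.0):
            # Normalized cross-correlation between EEG (filtered around the speech
            # fundamental) and a stimulus fundamental waveform, within +/- max_lag_ms.
            eeg_f = bandpass(eeg, 80.0, 300.0, fs)
            x = (eeg_f - eeg_f.mean()) / eeg_f.std()
            y = (f0_waveform - f0_waveform.mean()) / f0_waveform.std()
            r = correlate(x, y, mode="full") / len(x)
            lags_ms = np.arange(-len(x) + 1, len(x)) / fs * 1000.0
            keep = np.abs(lags_ms) <= max_lag_ms
            return lags_ms[keep], r[keep]

        # Simulated usage: a brainstem-like response should give a correlation peak
        # at a positive lag of roughly 5-10 ms.
        fs = 4096
        t = np.arange(0, 10, 1 / fs)
        f0 = np.sin(2 * np.pi * 150 * t)                  # stand-in fundamental waveform
        eeg = 0.1 * np.roll(f0, int(0.007 * fs)) + np.random.randn(len(t))
        lags, r = brainstem_xcorr(eeg, f0, fs)
        print("peak lag (ms):", lags[np.argmax(r)])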

  19. Frequency-specific attentional modulation in human primary auditory cortex and midbrain.

    Science.gov (United States)

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Poser, Benedikt A; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2018-07-01

    Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (FMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with FMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Dissociable influences of auditory object vs. spatial attention on visual system oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Jyrki Ahveninen

    Full Text Available Given that both auditory and visual systems have anatomically separate object identification ("what") and spatial ("where") pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory "what" vs. "where" attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic ("what") vs. spatial ("where") aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7-13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Location centered at the alpha range 400-600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity ("what") vs. sound location ("where"). The alpha modulations could be interpreted to reflect enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during "what" vs. "where" auditory attention.

  1. Searching for the optimal stimulus eliciting auditory brainstem responses in humans

    DEFF Research Database (Denmark)

    Fobel, Oliver; Dau, Torsten

    2004-01-01

    One chirp was based on estimates of human basilar membrane (BM) group delays derived from stimulus-frequency otoacoustic emissions (SFOAE) at a sound pressure level of 40 dB [Shera and Guinan, in Recent Developments in Auditory Mechanics (2000)]. The other chirp, referred to as the A-chirp, was derived from latency...

  2. Congenital Deafness Reduces, But Does Not Eliminate Auditory Responsiveness in Cat Extrastriate Visual Cortex.

    Science.gov (United States)

    Land, Rüdiger; Radecke, Jan-Ole; Kral, Andrej

    2018-04-01

    Congenital deafness not only affects the development of the auditory cortex, but also the interrelation between the visual and auditory system. For example, congenital deafness leads to visual modulation of the deaf auditory cortex in the form of cross-modal plasticity. Here we asked, whether congenital deafness additionally affects auditory modulation in the visual cortex. We demonstrate that auditory activity, which is normally present in the lateral suprasylvian visual areas in normal hearing cats, can also be elicited by electrical activation of the auditory system with cochlear implants. We then show that in adult congenitally deaf cats auditory activity in this region was reduced when tested with cochlear implant stimulation. However, the change in this area was small and auditory activity was not completely abolished despite years of congenital deafness. The results document that congenital deafness leads not only to changes in the auditory cortex but also affects auditory modulation of visual areas. However, the results further show a persistence of fundamental cortical sensory functional organization despite congenital deafness. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.

  3. Social and emotional values of sounds influence human (Homo sapiens and non-human primate (Cercopithecus campbelli auditory laterality.

    Directory of Open Access Journals (Sweden)

    Muriel Basile

    Full Text Available Recent decades have provided evidence of auditory laterality in vertebrates, offering important new insights into the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, to 8-9-year-old schoolgirls and to adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls; humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We found a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same-class/group members elicited a significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species, as a left- and a right-hemisphere preference was found in humans and monkeys, respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes also just moved their eyes. This study supports theories defending differential roles played by the two hemispheres in primates' auditory laterality and shows that more systematic species comparisons are needed before evolutionary scenarios can be proposed. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.

  4. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    Science.gov (United States)

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successively reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self

  5. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from

  6. Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.

    Science.gov (United States)

    Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier

    2016-02-03

    Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling was at both frequencies stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole
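
    Coupling between a sound's temporal envelope and cortical activity, as quantified in the record above, is commonly expressed as magnitude-squared coherence. The sketch below is a minimal, generic version of such an analysis using SciPy; the envelope-extraction step and parameter values are illustrative assumptions rather than the authors' exact pipeline.

        import numpy as np
        from scipy.signal import coherence, hilbert, resample

        def speech_envelope(audio, fs_audio, fs_out):
            # Wideband temporal envelope (magnitude of the analytic signal),
            # resampled to the neural sampling rate.
            env = np.abs(hilbert(audio))
            return resample(env, int(round(len(env) * fs_out / fs_audio)))

        def envelope_meg_coherence(audio, fs_audio, meg, fs_meg, nperseg=4096):
            # Magnitude-squared coherence between the speech envelope and one MEG
            # channel; long segments are needed to resolve the ~0.5 Hz band.
            env = speech_envelope(audio, fs_audio, fs_meg)
            n = min(len(env), len(meg))
            f, coh = coherence(env[:n], meg[:n], fs=fs_meg, nperseg=nperseg)
            return f, coh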

  7. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine ePecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
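
    One way to index temporal prediction in paced tapping, broadly in the spirit of the cross-correlational measure mentioned above, is to compare how strongly inter-tap intervals correlate with the current versus the preceding pacing interval. The sketch below is a hypothetical implementation of such an index, not necessarily the exact measure used in the study.

        import numpy as np

        def lagged_corr(x, y, lag):
            # Pearson correlation of x[t] with y[t - lag] (lag >= 0).
            if lag > 0:
                x, y = x[lag:], y[:-lag]
            return np.corrcoef(x, y)[0, 1]

        def prediction_index(tap_times, tone_times):
            # Compare how strongly inter-tap intervals follow the current versus the
            # preceding inter-onset interval of the pacing sequence. Ratios above 1
            # suggest prediction (anticipation); below 1 suggest reactive tracking.
            iti = np.diff(tap_times)        # inter-tap intervals
            ioi = np.diff(tone_times)       # pacing inter-onset intervals
            n = min(len(iti), len(ioi))
            iti, ioi = iti[:n], ioi[:n]
            r_lag0 = lagged_corr(iti, ioi, 0)   # ITI vs. current IOI
            r_lag1 = lagged_corr(iti, ioi, 1)   # ITI vs. previous IOI
            return r_lag0 / r_lag1              # assumes r_lag1 is not close to zero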

  8. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    Science.gov (United States)

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  9. Short- and long-term habituation of auditory event-related potentials in the rat [v1; ref status: indexed, http://f1000r.es/1l3]

    Directory of Open Access Journals (Sweden)

    Kestutis Gurevicius

    2013-09-01

    Full Text Available An auditory oddball paradigm in humans generates a long-duration cortical negative potential, often referred to as mismatch negativity. Similar negativity has been documented in monkeys and cats, but it is controversial whether mismatch negativity also exists in awake rodents. To this end, we recorded cortical and hippocampal evoked responses in rats during alert immobility under a typical passive oddball paradigm that yields mismatch negativity in humans. The standard stimulus was a 9 kHz tone and the deviant either a 7 or 11 kHz tone in the first condition. We found no evidence of a sustained potential shift when comparing evoked responses to standard and deviant stimuli. Instead, we found repetition-induced attenuation of the P60 component of the combined evoked response in the cortex, but not in the hippocampus. The attenuation extended over three days of recording and disappeared after 20 intervening days of rest. Reversal of the standard and deviant tones resulted in a robust enhancement of the N40 component not only in the cortex but also in the hippocampus. Responses to standard and deviant stimuli were affected similarly. Finally, we tested the effect of scopolamine in this paradigm. Scopolamine attenuated cortical N40 and P60 as well as hippocampal P60 components, but had no specific effect on the deviant response. We conclude that in an oddball paradigm the rat demonstrates repetition-induced attenuation of mid-latency responses, which resembles attenuation of the N1-component of human auditory evoked potential, but no mismatch negativity.

  10. Competing sound sources reveal spatial effects in cortical processing.

    Directory of Open Access Journals (Sweden)

    Ross K Maddox

    Full Text Available Why is spatial tuning in auditory cortex weak, even though location is important to object recognition in natural settings? This question continues to vex neuroscientists focused on linking physiological results to auditory perception. Here we show that the spatial locations of simultaneous, competing sound sources dramatically influence how well neural spike trains recorded from the zebra finch field L (an analog of mammalian primary auditory cortex) encode source identity. We find that the location of a birdsong played in quiet has little effect on the fidelity of the neural encoding of the song. However, when the song is presented along with a masker, spatial effects are pronounced. For each spatial configuration, a subset of neurons encodes song identity more robustly than others. As a result, competing sources from different locations dominate responses of different neural subpopulations, helping to separate neural responses into independent representations. These results help elucidate how cortical processing exploits spatial information to provide a substrate for selective spatial auditory attention.

  11. An RNA gene expressed during cortical development evolved rapidly in humans

    DEFF Research Database (Denmark)

    Pollard, Katherine S; Salama, Sofie R; Lambert, Nelle

    2006-01-01

    in the developing human neocortex from 7 to 19 gestational weeks, a crucial period for cortical neuron specification and migration. HAR1F is co-expressed with reelin, a product of Cajal-Retzius neurons that is of fundamental importance in specifying the six-layer structure of the human cortex. HAR1 and the other...

  12. Using modern human cortical bone distribution to test the systemic robusticity hypothesis.

    Science.gov (United States)

    Baab, Karen L; Copes, Lynn E; Ward, Devin L; Wells, Nora; Grine, Frederick E

    2018-06-01

    The systemic robusticity hypothesis links the thickness of cortical bone in both the cranium and limb bones. This hypothesis posits that thick cortical bone is in part a systemic response to circulating hormones, such as growth hormone and thyroid hormone, possibly related to physical activity or cold climates. Although this hypothesis has gained popular traction, only rarely has robusticity of the cranium and postcranial skeleton been considered jointly. We acquired computed tomographic scans from associated crania, femora and humeri from single individuals representing 11 populations in Africa and North America (n = 228). Cortical thickness in the parietal, frontal and occipital bones and cortical bone area in limb bone diaphyses were analyzed using correlation, multiple regression and general linear models to test the hypothesis. Absolute thickness values from the crania were not correlated with cortical bone area of the femur or humerus, which is at odds with the systemic robusticity hypothesis. However, measures of cortical bone scaled by total vault thickness and limb cross-sectional area were positively correlated between the cranium and postcranium. When accounting for a range of potential confounding variables, including sex, age and body mass, variation in relative postcranial cortical bone area explained ∼20% of variation in the proportion of cortical cranial bone thickness. While these findings provide limited support for the systemic robusticity hypothesis, cranial cortical thickness did not track climate or physical activity across populations. Thus, some of the variation in cranial cortical bone thickness in modern humans is attributable to systemic effects, but the driving force behind this effect remains obscure. Moreover, neither absolute nor proportional measures of cranial cortical bone thickness are positively correlated with total cranial bone thickness, complicating the extrapolation of these findings to extinct species where only cranial

  13. Sensitivity of cortical auditory evoked potential detection for hearing-impaired infants in response to short speech sounds

    Directory of Open Access Journals (Sweden)

    Bram Van Dun

    2012-01-01

    Full Text Available

    Background: Cortical auditory evoked potentials (CAEPs are an emerging tool for hearing aid fitting evaluation in young children who cannot provide reliable behavioral feedback. It is therefore useful to determine the relationship between the sensation level of speech sounds and the detection sensitivity of CAEPs.

    Design and methods: Twenty-five sensorineurally hearing impaired infants with an age range of 8 to 30 months were tested once, 18 aided and 7 unaided. First, behavioral thresholds of speech stimuli /m/, /g/, and /t/ were determined using visual reinforcement orientation audiometry (VROA. Afterwards, the same speech stimuli were presented at 55, 65, and 75 dB SPL, and CAEP recordings were made. An automatic statistical detection paradigm was used for CAEP detection.

    Results: For sensation levels above 0, 10, and 20 dB respectively, detection sensitivities were equal to 72 ± 10, 75 ± 10, and 78 ± 12%. In 79% of the cases, automatic detection p-values became smaller when the sensation level was increased by 10 dB.

    Conclusions: The results of this study suggest that the presence or absence of CAEPs can provide some indication of the audibility of a speech sound for infants with sensorineural hearing loss. The detection of a CAEP provides confidence, to a degree commensurate with the detection probability, that the infant is detecting that sound at the level presented. When testing infants where the audibility of speech sounds has not been established behaviorally, the lack of a cortical response indicates the possibility, but by no means a certainty, that the sensation level is 10 dB or less.
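
    Automatic statistical detection of a CAEP, as used in the study above, typically asks whether the averaged post-stimulus voltage differs reliably from zero across epochs. One common choice for this kind of test is a one-sample Hotelling's T² statistic on binned voltages; the sketch below assumes that approach and is not a description of the specific detection paradigm used by the authors.

        import numpy as np
        from scipy import stats

        def hotelling_t2_detection(epochs, fs, t_start=0.05, t_end=0.35, n_bins=6):
            # One-sample Hotelling's T^2 test on binned post-stimulus voltages.
            # epochs: (n_epochs, n_samples) array from one channel, baseline-corrected.
            # Requires n_epochs > n_bins. Returns the p-value for the null hypothesis
            # that the mean voltage in all bins is zero (i.e. no CAEP present).
            n_epochs = epochs.shape[0]
            start, end = int(t_start * fs), int(t_end * fs)
            bins = np.array_split(np.arange(start, end), n_bins)
            X = np.column_stack([epochs[:, b].mean(axis=1) for b in bins])
            mean = X.mean(axis=0)
            cov = np.cov(X, rowvar=False)
            t2 = n_epochs * mean @ np.linalg.solve(cov, mean)
            f_stat = (n_epochs - n_bins) / (n_bins * (n_epochs - 1)) * t2
            return 1.0 - stats.f.cdf(f_stat, n_bins, n_epochs - n_bins)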

  14. Attentional Modulation of Auditory Steady-State Responses

    Science.gov (United States)

    Mahajan, Yatin; Davis, Chris; Kim, Jeesun

    2014-01-01

    Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex. PMID:25334021

  15. Attentional modulation of auditory steady-state responses.

    Directory of Open Access Journals (Sweden)

    Yatin Mahajan

    Full Text Available Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.
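
    In a frequency-tagging design such as the one described in the two records above, the ASSR is usually quantified as the spectral amplitude (or signal-to-noise ratio) at each tagged modulation frequency. The following sketch shows one generic way to compute this; the windowing and noise-estimation choices are assumptions made for illustration.

        import numpy as np

        def assr_amplitude(eeg, fs, mod_freqs=(16.0, 23.5, 32.5, 40.0)):
            # Spectral amplitude at each tagged modulation frequency, with a noise
            # estimate taken from neighbouring FFT bins (an assumed analysis choice).
            n = len(eeg)
            spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) / n
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            results = {}
            for f0 in mod_freqs:
                k = int(np.argmin(np.abs(freqs - f0)))       # bin closest to the tag
                noise = np.r_[spectrum[k - 6:k - 1], spectrum[k + 2:k + 7]].mean()
                results[f0] = {"signal": spectrum[k],
                               "noise": noise,
                               "snr_db": 20 * np.log10(spectrum[k] / noise)}
            return results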

  16. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Can you hear me now? Musical training shapes functional brain networks for selective auditory attention and hearing speech in noise

    Directory of Open Access Journals (Sweden)

    Dana L Strait

    2011-06-01

    Full Text Available Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker’s voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not nonmusicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.

  18. Effects of musical training on the auditory cortex in children.

    Science.gov (United States)

    Trainor, Laurel J; Shahin, Antoine; Roberts, Larry E

    2003-11-01

    Several studies of the effects of musical experience on sound representations in the auditory cortex are reviewed. Auditory evoked potentials are compared in response to pure tones, violin tones, and piano tones in adult musicians versus nonmusicians as well as in 4- to 5-year-old children who have either had or not had extensive musical experience. In addition, the effects of auditory frequency discrimination training in adult nonmusicians on auditory evoked potentials are examined. It was found that the P2-evoked response is larger in both adult and child musicians than in nonmusicians and that auditory training enhances this component in nonmusician adults. The results suggest that the P2 is particularly neuroplastic and that the effects of musical experience can be seen early in development. They also suggest that although the effects of musical training on cortical representations may be greater if training begins in childhood, the adult brain is also open to change. These results are discussed with respect to potential benefits of early musical training as well as potential benefits of musical experience in aging.

  19. The Central Role of Recognition in Auditory Perception: A Neurobiological Model

    Science.gov (United States)

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…

  20. The developing human connectome project: A minimal processing pipeline for neonatal cortical surface reconstruction.

    Science.gov (United States)

    Makropoulos, Antonios; Robinson, Emma C; Schuh, Andreas; Wright, Robert; Fitzgibbon, Sean; Bozek, Jelena; Counsell, Serena J; Steinweg, Johannes; Vecchiato, Katy; Passerat-Palmbach, Jonathan; Lenz, Gregor; Mortari, Filippo; Tenev, Tencho; Duff, Eugene P; Bastiani, Matteo; Cordero-Grande, Lucilio; Hughes, Emer; Tusor, Nora; Tournier, Jacques-Donald; Hutter, Jana; Price, Anthony N; Teixeira, Rui Pedro A G; Murgasova, Maria; Victor, Suresh; Kelly, Christopher; Rutherford, Mary A; Smith, Stephen M; Edwards, A David; Hajnal, Joseph V; Jenkinson, Mark; Rueckert, Daniel

    2018-06-01

    The Developing Human Connectome Project (dHCP) seeks to create the first 4-dimensional connectome of early life. Understanding this connectome in detail may provide insights into normal as well as abnormal patterns of brain development. Following established best practices adopted by the WU-MINN Human Connectome Project (HCP), and pioneered by FreeSurfer, the project utilises cortical surface-based processing pipelines. In this paper, we propose a fully automated processing pipeline for the structural Magnetic Resonance Imaging (MRI) of the developing neonatal brain. This proposed pipeline consists of a refined framework for cortical and sub-cortical volume segmentation, cortical surface extraction, and cortical surface inflation, which has been specifically designed to address considerable differences between adult and neonatal brains, as imaged using MRI. Using the proposed pipeline our results demonstrate that images collected from 465 subjects ranging from 28 to 45 weeks post-menstrual age (PMA) can be processed fully automatically; generating cortical surface models that are topologically correct, and correspond well with manual evaluations of tissue boundaries in 85% of cases. Results improve on state-of-the-art neonatal tissue segmentation models and significant errors were found in only 2% of cases, where these corresponded to subjects with high motion. Downstream, these surfaces will enhance comparisons of functional and diffusion MRI datasets, supporting the modelling of emerging patterns of brain connectivity. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Using auditory steady state responses to outline the functional connectivity in the tinnitus brain.

    Directory of Open Access Journals (Sweden)

    Winfried Schlee

    Full Text Available BACKGROUND: Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most of the tinnitus research has concentrated on the auditory system. However, it was suggested recently that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. METHODS AND FINDINGS: Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe and phase couplings between the anterior cingulum and the right parietal lobe showed significant condition x group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. CONCLUSIONS: To the best of our knowledge this is the first study that demonstrates the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and a therapeutic intervention that is able to change this network should result in relief of tinnitus.
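
    Phase synchronization between two cortical signals, of the kind analyzed in the tinnitus study above, is often summarized by the phase-locking value: the mean resultant length of the phase difference within a narrow frequency band. A minimal sketch follows; the band limits and filter choice are illustrative assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def phase_locking_value(x, y, fs, band=(8.0, 12.0), order=4):
            # Phase-locking value between two time series within a frequency band:
            # the mean resultant length of their instantaneous phase difference.
            b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            phase_x = np.angle(hilbert(filtfilt(b, a, x)))
            phase_y = np.angle(hilbert(filtfilt(b, a, y)))
            return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))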

  2. Active listening: task-dependent plasticity of spectrotemporal receptive fields in primary auditory cortex.

    Science.gov (United States)

    Fritz, Jonathan; Elhilali, Mounya; Shamma, Shihab

    2005-08-01

    Listening is an active process in which attentive focus on salient acoustic features in auditory tasks can influence receptive field properties of cortical neurons. Recent studies showing rapid task-related changes in neuronal spectrotemporal receptive fields (STRFs) in primary auditory cortex of the behaving ferret are reviewed in the context of current research on cortical plasticity. Ferrets were trained on spectral tasks, including tone detection and two-tone discrimination, and on temporal tasks, including gap detection and click-rate discrimination. STRF changes could be measured on-line during task performance and occurred within minutes of task onset. During spectral tasks, there were specific spectral changes (enhanced response to tonal target frequency in tone detection and discrimination, suppressed response to tonal reference frequency in tone discrimination). However, only in the temporal tasks, the STRF was changed along the temporal dimension by sharpening temporal dynamics. In ferrets trained on multiple tasks, distinctive and task-specific STRF changes could be observed in the same cortical neurons in successive behavioral sessions. These results suggest that rapid task-related plasticity is an ongoing process that occurs at a network and single unit level as the animal switches between different tasks and dynamically adapts cortical STRFs in response to changing acoustic demands.
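
    Spectrotemporal receptive fields of the kind discussed above are commonly estimated by regressing a neuron's response onto the recent history of the stimulus spectrogram. The ridge-regression sketch below illustrates the general idea; it is not the estimation procedure used in the ferret experiments, and the parameter choices are assumptions.

        import numpy as np

        def estimate_strf(spectrogram, response, n_lags=20, ridge=1.0):
            # Linear STRF via ridge regression: predict the response at time t from
            # the preceding n_lags spectrogram frames (all frequencies x time lags).
            n_freq, n_time = spectrogram.shape
            X = np.stack([spectrogram[:, t - n_lags:t].ravel()
                          for t in range(n_lags, n_time)])
            y = np.asarray(response[n_lags:n_time], dtype=float)
            X = X - X.mean(axis=0)
            w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                                X.T @ (y - y.mean()))
            # Rows index frequency, columns index time lag (oldest frame first).
            return w.reshape(n_freq, n_lags)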

  3. Specialization of the auditory system for the processing of bio-sonar information in the frequency domain: Mustached bats.

    Science.gov (United States)

    Suga, Nobuo

    2018-04-01

    For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1-4). They show unique behavior called Doppler-shift compensation for Doppler-shifted echoes and hunting behavior for frequency and amplitude modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens V-shaped frequency-tuning curves at the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3. They are tuned to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area has the velocity map for Doppler imaging. The DIF area is particularly involved in Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Age-Associated Reduction of Asymmetry in Human Central Auditory Function: A 1H-Magnetic Resonance Spectroscopy Study

    Directory of Open Access Journals (Sweden)

    Xianming Chen

    2013-01-01

    Full Text Available The aim of this study was to investigate the effects of age on hemispheric asymmetry in the auditory cortex after pure tone stimulation. Ten young and 8 older healthy volunteers took part in this study. Two-dimensional multivoxel 1H-magnetic resonance spectroscopy scans were performed before and after stimulation. The ratios of N-acetylaspartate (NAA), glutamate/glutamine (Glx), and γ-amino butyric acid (GABA) to creatine (Cr) were determined and compared between the two groups. The distribution of metabolites between the left and right auditory cortex was also determined. Before stimulation, left and right side NAA/Cr and right side GABA/Cr were significantly lower, whereas right side Glx/Cr was significantly higher in the older group compared with the young group. After stimulation, left and right side NAA/Cr and GABA/Cr were significantly lower, whereas left side Glx/Cr was significantly higher in the older group compared with the young group. There was obvious asymmetry in right side Glx/Cr and left side GABA/Cr after stimulation in the young group, but not in the older group. In summary, there is marked hemispheric asymmetry in auditory cortical metabolites following pure tone stimulation in young, but not older adults. This reduced asymmetry in older adults may at least in part underlie the speech perception difficulties/presbycusis experienced by aging adults.

  5. The effects of aging on lifetime of auditory sensory memory in humans.

    Science.gov (United States)

    Cheng, Chia-Hsiung; Lin, Yung-Yang

    2012-02-01

    The amplitude change of cortical responses to repeated stimulation with respect to different interstimulus intervals (ISIs) is considered as an index of sensory memory. To determine the effect of aging on the lifetime of auditory sensory memory, N100m responses were recorded in young, middle-aged, and elderly healthy volunteers (n=15 for each group). Trains of 5 successive tones were presented with an inter-train interval of 10 s. In separate sessions, the within-train ISIs were 0.5, 1, 2, 4, and 8 s. The amplitude ratio between N100m responses to the first and fifth stimuli (S5/S1 N100m ratio) within each ISI condition was obtained to reflect the recovery cycle profile. The recovery function time constant (τ) was smaller in the elderly (1.06±0.26 s) than in the younger groups, indicating a shorter lifetime of auditory sensory memory. Copyright © 2011 Elsevier B.V. All rights reserved.
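
    A recovery-function time constant such as the τ reported above can be obtained by fitting a saturating exponential to the amplitude-ratio-versus-ISI data. The sketch below uses made-up ratio values purely for illustration; the functional form is an assumption, not necessarily the model used in the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def recovery(isi, a, tau):
            # Saturating-exponential recovery of the response ratio with ISI.
            return a * (1.0 - np.exp(-isi / tau))

        # Illustrative values only (not the study's data): within-train ISIs in
        # seconds and hypothetical S5/S1 N100m amplitude ratios.
        isi = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
        ratio = np.array([0.35, 0.55, 0.75, 0.90, 0.97])
        (a_hat, tau_hat), _ = curve_fit(recovery, isi, ratio, p0=[1.0, 1.0])
        print(f"fitted recovery time constant tau = {tau_hat:.2f} s")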

  6. Differential Receptive Field Properties of Parvalbumin and Somatostatin Inhibitory Neurons in Mouse Auditory Cortex.

    Science.gov (United States)

    Li, Ling-Yun; Xiong, Xiaorui R; Ibrahim, Leena A; Yuan, Wei; Tao, Huizhong W; Zhang, Li I

    2015-07-01

    Cortical inhibitory circuits play important roles in shaping sensory processing. In auditory cortex, however, functional properties of genetically identified inhibitory neurons are poorly characterized. By two-photon imaging-guided recordings, we specifically targeted 2 major types of cortical inhibitory neuron, parvalbumin (PV) and somatostatin (SOM) expressing neurons, in superficial layers of mouse auditory cortex. We found that PV cells exhibited broader tonal receptive fields with lower intensity thresholds and stronger tone-evoked spike responses compared with SOM neurons. The latter exhibited similar frequency selectivity as excitatory neurons. The broader/weaker frequency tuning of PV neurons was attributed to a broader range of synaptic inputs and stronger subthreshold responses elicited, which resulted in a higher efficiency in the conversion of input to output. In addition, onsets of both the input and spike responses of SOM neurons were significantly delayed compared with PV and excitatory cells. Our results suggest that PV and SOM neurons engage in auditory cortical circuits in different manners: while PV neurons may provide broadly tuned feedforward inhibition for a rapid control of ascending inputs to excitatory neurons, the delayed and more selective inhibition from SOM neurons may provide a specific modulation of feedback inputs on their distal dendrites. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    Science.gov (United States)

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
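
    "Tracking of the temporal speech envelope" is generally quantified by extracting the slow amplitude envelope of the speech waveform and relating it to the recorded neural signal. The pipeline below is not the authors' method, only an assumed, generic illustration (Hilbert envelope plus lagged correlation) on synthetic signals; selective tracking at a cocktail party could then be indexed by comparing such correlations for the attended versus the ignored talker.

        import numpy as np
        from scipy.signal import hilbert

        fs = 200.0                          # analysis sampling rate (Hz), assumed
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(0)

        # Synthetic stand-ins: a slowly fluctuating "speech envelope" and a noisy,
        # delayed copy standing in for the cortical (MEG) response.
        envelope = np.abs(hilbert(rng.standard_normal(t.size)))
        envelope = np.convolve(envelope, np.ones(40) / 40, mode="same")   # ~5 Hz smoothing
        delay = int(0.1 * fs)                                             # 100-ms neural lag
        neural = np.roll(envelope, delay) + 0.5 * rng.standard_normal(t.size)

        def lagged_correlation(stim, resp, max_lag):
            """Pearson correlation of resp against stim for lags of 0..max_lag-1 samples."""
            return [np.corrcoef(stim[: stim.size - lag], resp[lag:])[0, 1]
                    for lag in range(max_lag)]

        corrs = lagged_correlation(envelope, neural, int(0.3 * fs))
        best = int(np.argmax(corrs))
        print(f"peak envelope-tracking r = {max(corrs):.2f} at lag {best / fs * 1000:.0f} ms")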

  8. Sensorineural hearing loss degrades behavioral and physiological measures of human spatial selective auditory attention

    Science.gov (United States)

    Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.

    2018-01-01

    Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752

  9. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise.

    Science.gov (United States)

    Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W

    2011-03-08

    How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
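
    "Frequency tagging" of the two maskers (39- vs. 41-Hz amplitude modulation) lets the steady-state response to each ear's masker be read out as the spectral amplitude at its tag frequency. The abstract does not include the analysis code, so the following is an assumed, generic version run on a synthetic signal:

        import numpy as np

        fs = 1000.0                       # sampling rate (Hz), assumed
        t = np.arange(0, 30, 1 / fs)      # one 30-s attention block
        rng = np.random.default_rng(1)

        # Synthetic "MEG channel": steady-state responses tagged at 39 Hz (one ear's
        # masker) and 41 Hz (the other ear's masker), buried in noise.
        signal = (0.8 * np.sin(2 * np.pi * 39 * t)
                  + 0.5 * np.sin(2 * np.pi * 41 * t)
                  + 2.0 * rng.standard_normal(t.size))

        spectrum = np.abs(np.fft.rfft(signal)) / t.size
        freqs = np.fft.rfftfreq(t.size, 1 / fs)

        for tag in (39.0, 41.0):
            idx = int(np.argmin(np.abs(freqs - tag)))
            print(f"steady-state amplitude at the {tag:.0f}-Hz tag: {spectrum[idx]:.3f} (a.u.)")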

  10. Cortical oscillations and entrainment in speech processing during working memory load.

    Science.gov (United States)

    Hjortkjaer, Jens; Märcher-Rørsted, Jonatan; Fuglsang, Søren A; Dau, Torsten

    2018-02-02

    Neuronal oscillations are thought to play an important role in working memory (WM) and speech processing. Listening to speech in real-life situations is often cognitively demanding, but it is unknown whether WM load influences how auditory cortical activity synchronizes to speech features. Here, we developed an auditory n-back paradigm to investigate cortical entrainment to speech envelope fluctuations under different degrees of WM load. We measured the electroencephalogram, pupil dilations and behavioural performance from 22 subjects listening to continuous speech with an embedded n-back task. The speech stimuli consisted of long spoken number sequences created to match natural speech in terms of sentence intonation, syllabic rate and phonetic content. To burden different WM functions during speech processing, listeners performed an n-back task on the speech sequences in different levels of background noise. Increasing WM load at higher n-back levels was associated with a decrease in posterior alpha power as well as increased pupil dilations. Frontal theta power increased at the start of the trial and increased additionally with higher n-back level. The observed alpha-theta power changes are consistent with visual n-back paradigms, suggesting general oscillatory correlates of WM processing load. Speech entrainment was measured as a linear mapping between the envelope of the speech signal and low-frequency cortical activity. Increasing WM load (higher n-back level) decreased cortical speech envelope entrainment. Although entrainment persisted under high load, our results suggest a top-down influence of WM processing on cortical speech entrainment. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
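
    A "linear mapping between the envelope of the speech signal and low-frequency cortical activity" is usually implemented as a lagged, regularized regression (a temporal-response-function-style model). The abstract gives no implementation details, so the sketch below is only an assumed, minimal forward-model version fit to synthetic data:

        import numpy as np

        fs = 64                               # analysis rate after downsampling (Hz), assumed
        n = fs * 60                           # one minute of data
        lags = np.arange(0, int(0.4 * fs))    # stimulus lags spanning 0-400 ms
        rng = np.random.default_rng(2)

        envelope = rng.standard_normal(n)                       # stand-in speech envelope
        true_trf = np.exp(-np.arange(lags.size) / 8.0)          # hypothetical response kernel
        eeg = np.convolve(envelope, true_trf, mode="full")[:n] + rng.standard_normal(n)

        # Design matrix of lagged envelope samples (forward model: envelope -> EEG).
        X = np.column_stack([np.roll(envelope, lag) for lag in lags])
        X[: lags.max()] = 0.0                                    # discard wrap-around rows

        ridge = 10.0                                             # regularization strength, assumed
        w = np.linalg.solve(X.T @ X + ridge * np.eye(lags.size), X.T @ eeg)

        prediction = X @ w
        r = np.corrcoef(prediction, eeg)[0, 1]
        print(f"envelope-entrainment (forward-model) correlation r = {r:.2f}")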

  11. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Science.gov (United States)

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
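
    Of the cues listed, sound level is the easiest to state quantitatively: in the free field it falls by roughly 6 dB per doubling of source distance (the inverse-square law). The snippet below is a textbook idealization added for illustration, not an analysis from the review:

        import math

        def level_at_distance(level_ref_db, dist_ref_m, dist_m):
            """Free-field (inverse-square) level: about 6 dB lost per doubling of distance."""
            return level_ref_db - 20.0 * math.log10(dist_m / dist_ref_m)

        # A source measured at 70 dB SPL at 1 m (assumed reference values).
        for d in (1, 2, 4, 8, 16):
            print(f"{d:2d} m: {level_at_distance(70.0, 1.0, d):5.1f} dB SPL")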

  12. Persistent spatial working memory deficits in rats with bilateral cortical microgyria

    Directory of Open Access Journals (Sweden)

    Rosen Glenn D

    2008-10-01

    Full Text Available Abstract Background Anomalies of cortical neuronal migration (e.g., microgyria (MG) and/or ectopias) are associated with a variety of language and cognitive deficits in human populations. In rodents, postnatal focal freezing lesions lead to the formation of cortical microgyria similar to those seen in human dyslexic brains, and also cause subsequent deficits in rapid auditory processing similar to those reported in human language impaired populations. Thus convergent findings support the ongoing study of disruptions in neuronal migration in rats as a putative model to provide insight on human language disability. Since deficits in working memory using both verbal and non-verbal tasks also characterize dyslexic populations, the present study examined the effects of neonatally induced bilateral cortical microgyria (MG) on working memory in adult male rats. Methods A delayed match-to-sample radial water maze task, in which the goal arm was altered among eight locations on a daily basis, was used to assess working memory performance in MG (n = 8) and sham (n = 10) littermates. Results Over a period of 60 sessions of testing (each session comprising one pre-delay sample trial and one post-delay test trial), all rats showed learning as evidenced by a significant decrease in overall test errors. However, MG rats made significantly more errors than shams during initial testing, and this memory deficit was still evident after 60 days (12 weeks) of testing. Analyses performed on daily error patterns showed that over the course of testing, MG rats utilized a strategy similar to shams (but with less effectiveness, as indicated by more errors). Conclusion These results indicate persistent abnormalities in the spatial working memory system in rats with induced disruptions of neocortical neuronal migration.

  13. Contralateral white noise selectively changes left human auditory cortex activity in a lexical decision task.

    Science.gov (United States)

    Behne, Nicole; Wendt, Beate; Scheich, Henning; Brechmann, André

    2006-04-01

    In a previous study, we hypothesized that the approach of presenting information-bearing stimuli to one ear and noise to the other ear may be a general strategy to determine hemispheric specialization in auditory cortex (AC). In that study, we confirmed the dominant role of the right AC in directional categorization of frequency modulations by showing that fMRI activation of right but not left AC was sharply emphasized when masking noise was presented to the contralateral ear. Here, we tested this hypothesis using a lexical decision task supposed to be mainly processed in the left hemisphere. Subjects had to distinguish between pseudowords and natural words presented monaurally to the left or right ear either with or without white noise to the other ear. According to our hypothesis, we expected a strong effect of contralateral noise on fMRI activity in left AC. For the control conditions without noise, we found that activation in both auditory cortices was stronger on contralateral than on ipsilateral word stimulation consistent with a more influential contralateral than ipsilateral auditory pathway. Additional presentation of contralateral noise did not significantly change activation in right AC, whereas it led to a significant increase of activation in left AC compared with the condition without noise. This is consistent with a left hemispheric specialization for lexical decisions. Thus our results support the hypothesis that activation by ipsilateral information-bearing stimuli is upregulated mainly in the hemisphere specialized for a given task when noise is presented to the more influential contralateral ear.

  14. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Automatic detection of frequency changes depends on auditory stimulus intensity.

    Science.gov (United States)

    Salo, S; Lang, A H; Aaltonen, O; Lertola, K; Kärki, T

    1999-06-01

    A cortical cognitive auditory evoked potential, mismatch negativity (MMN), reflects automatic discrimination and echoic memory functions of the auditory system. For this study, we examined whether this potential is dependent on the stimulus intensity. The MMN potentials were recorded from 10 subjects with normal hearing using a sine tone of 1000 Hz as the standard stimulus and a sine tone of 1141 Hz as the deviant stimulus, with probabilities of 90% and 10%, respectively. The intensities were 40, 50, 60, 70, and 80 dB HL for both standard and deviant stimuli in separate blocks. Stimulus intensity had a statistically significant effect on the mean amplitude, rise time parameter, and onset latency of the MMN. Automatic auditory discrimination seems to be dependent on the sound pressure level of the stimuli.

  16. Auditory Cortical Maturation in a Child with Cochlear Implant: Analysis of Electrophysiological and Behavioral Measures

    Directory of Open Access Journals (Sweden)

    Liliane Aparecida Fagundes Silva

    2015-01-01

    Full Text Available The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl inserted in a CI program, who had bilateral profound sensorineural hearing loss and underwent surgery of cochlear implantation with electrode activation at 21 months of age. She was evaluated using the P1 component of the Long Latency Auditory Evoked Potential (LLAEP); speech perception tests of the Glendonald Auditory Screening Procedure (GASP); the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and the Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after three, nine, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The results of the LLAEP of the child with cochlear implant showed a gradual decrease in latency of the P1 component after auditory stimulation (172 ms–134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The values of the LLAEP of the hearing child were as expected for chronological age (132 ms–128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI.

  17. Auditory Cortical Maturation in a Child with Cochlear Implant: Analysis of Electrophysiological and Behavioral Measures

    Science.gov (United States)

    Silva, Liliane Aparecida Fagundes; Couto, Maria Inês Vieira; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; de Carvalho, Ana Claudia Martinho; Matas, Carla Gentile

    2015-01-01

    The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl inserted in a CI program, who had bilateral profound sensorineural hearing loss and underwent surgery of cochlear implantation with electrode activation at 21 months of age. She was evaluated using the P1 component of Long Latency Auditory Evoked Potential (LLAEP); speech perception tests of the Glendonald Auditory Screening Procedure (GASP); Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after three, nine, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The results of the LLAEP of the child with cochlear implant showed a gradual decrease in latency of the P1 component after auditory stimulation (172 ms–134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The values of the LLAEP of the hearing child were as expected for chronological age (132 ms–128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI. PMID:26881163

  18. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  19. Focal Suppression of Distractor Sounds by Selective Attention in Auditory Cortex.

    Science.gov (United States)

    Schwartz, Zachary P; David, Stephen V

    2018-01-01

    Auditory selective attention is required for parsing crowded acoustic environments, but cortical systems mediating the influence of behavioral state on auditory perception are not well characterized. Previous neurophysiological studies suggest that attention produces a general enhancement of neural responses to important target sounds versus irrelevant distractors. However, behavioral studies suggest that in the presence of masking noise, attention provides a focal suppression of distractors that compete with targets. Here, we compared effects of attention on cortical responses to masking versus non-masking distractors, controlling for effects of listening effort and general task engagement. We recorded single-unit activity from primary auditory cortex (A1) of ferrets during behavior and found that selective attention decreased responses to distractors masking targets in the same spectral band, compared with spectrally distinct distractors. This suppression enhanced neural target detection thresholds, suggesting that limited attention resources serve to focally suppress responses to distractors that interfere with target detection. Changing effort by manipulating target salience consistently modulated spontaneous but not evoked activity. Task engagement and changing effort tended to affect the same neurons, while attention affected an independent population, suggesting that distinct feedback circuits mediate effects of attention and effort in A1. © The Author 2017. Published by Oxford University Press.

  20. Stereotactically-guided Ablation of the Rat Auditory Cortex, and Localization of the Lesion in the Brain.

    Science.gov (United States)

    Lamas, Verónica; Estévez, Sheila; Pernía, Marianni; Plaza, Ignacio; Merchán, Miguel A

    2017-10-11

    The rat auditory cortex (AC) is becoming popular among auditory neuroscience investigators who are interested in experience-dependent plasticity, auditory perceptual processes, and cortical control of sound processing in the subcortical auditory nuclei. To address new challenges, a procedure to accurately locate and surgically expose the auditory cortex would expedite this research effort. Stereotactic neurosurgery is routinely used in pre-clinical research in animal models to engraft a needle or electrode at a pre-defined location within the auditory cortex. In the following protocol, we use stereotactic methods in a novel way. We identify four coordinate points over the surface of the temporal bone of the rat to define a window that, once opened, accurately exposes both the primary (A1) and secondary (Dorsal and Ventral) cortices of the AC. Using this method, we then perform a surgical ablation of the AC. After such a manipulation is performed, it is necessary to assess the localization, size, and extension of the lesions made in the cortex. Thus, we also describe a method to easily locate the AC ablation postmortem using a coordinate map constructed by transferring the cytoarchitectural limits of the AC to the surface of the brain. The combination of the stereotactically-guided location and ablation of the AC with the localization of the injured area in a coordinate map postmortem facilitates the validation of information obtained from the animal, and leads to a better analysis and comprehension of the data.

  1. Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates.

    Science.gov (United States)

    Liu, Ying; Fan, Hao; Li, Jingting; Jones, Jeffery A; Liu, Peng; Zhang, Baofeng; Liu, Hanjun

    2018-01-01

    When people hear unexpected perturbations in auditory feedback, they produce rapid compensatory adjustments of their vocal behavior. Recent evidence has shown enhanced vocal compensations and cortical event-related potentials (ERPs) in response to attended pitch feedback perturbations, suggesting that this reflex-like behavior is influenced by selective attention. Less is known, however, about auditory-motor integration for voice control during divided attention. The present cross-modal study investigated the behavioral and ERP correlates of auditory feedback control of vocal pitch production during divided attention. During the production of sustained vowels, 32 young adults were instructed to simultaneously attend to both pitch feedback perturbations they heard and flashing red lights they saw. The presentation rate of the visual stimuli was varied to produce a low, intermediate, and high attentional load. The behavioral results showed that the low-load condition elicited significantly smaller vocal compensations for pitch perturbations than the intermediate-load and high-load conditions. As well, the cortical processing of vocal pitch feedback was also modulated as a function of divided attention. When compared to the low-load and intermediate-load conditions, the high-load condition elicited significantly larger N1 responses and smaller P2 responses to pitch perturbations. These findings provide the first neurobehavioral evidence that divided attention can modulate auditory feedback control of vocal pitch production.

  2. Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex.

    Science.gov (United States)

    Henry, Molly J; Obleser, Jonas

    2013-01-01

    Natural auditory stimuli are characterized by slow fluctuations in amplitude and frequency. However, the degree to which the neural responses to slow amplitude modulation (AM) and frequency modulation (FM) are capable of conveying independent time-varying information, particularly with respect to speech communication, is unclear. In the current electroencephalography (EEG) study, participants listened to amplitude- and frequency-modulated narrow-band noises with a 3-Hz modulation rate, and the resulting neural responses were compared. Spectral analyses revealed similar spectral amplitude peaks for AM and FM at the stimulation frequency (3 Hz), but amplitude at the second harmonic frequency (6 Hz) was much higher for FM than for AM. Moreover, the phase delay of neural responses with respect to the full-band stimulus envelope was shorter for FM than for AM. Finally, the critical analysis involved classification of single trials as being in response to either AM or FM based on either phase or amplitude information. Time-varying phase, but not amplitude, was sufficient to accurately classify AM and FM stimuli based on single-trial neural responses. Taken together, the current results support the dissociable nature of cortical signatures of slow AM and FM. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals.
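
    The key single-trial analysis here is classification of AM versus FM trials from the phase (but not the amplitude) of the response at the 3-Hz modulation rate. The sketch below is not the authors' pipeline; it is an assumed, simplified illustration (FFT coefficient at 3 Hz, leave-one-out nearest-circular-mean classification) on synthetic trials:

        import numpy as np

        fs, dur, f_mod = 250, 2.0, 3.0        # assumed EEG rate, epoch length, modulation rate
        t = np.arange(0, dur, 1 / fs)
        rng = np.random.default_rng(3)

        def synth_trial(phase):
            """Synthetic 3-Hz response with a condition-specific phase plus noise."""
            return np.cos(2 * np.pi * f_mod * t + phase) + 2.0 * rng.standard_normal(t.size)

        # Hypothetical: AM and FM responses share amplitude but differ in 3-Hz phase.
        trials = [synth_trial(0.0) for _ in range(40)] + [synth_trial(1.2) for _ in range(40)]
        labels = np.array([0] * 40 + [1] * 40)

        bin_3hz = int(round(f_mod * dur))                  # FFT bin at the modulation rate
        phase = np.angle([np.fft.rfft(x)[bin_3hz] for x in trials])

        # Leave-one-out nearest-circular-mean classification on phase alone.
        correct = 0
        for i in range(len(trials)):
            keep = np.arange(len(trials)) != i
            means = [np.angle(np.mean(np.exp(1j * phase[keep & (labels == c)]))) for c in (0, 1)]
            dists = [abs(np.angle(np.exp(1j * (phase[i] - m)))) for m in means]
            correct += int(np.argmin(dists) == labels[i])
        print(f"phase-based single-trial AM/FM accuracy: {correct / len(trials):.2f}")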

  3. Non-linear laws of echoic memory and auditory change detection in humans.

    Science.gov (United States)

    Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke

    2010-07-03

    The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. The present findings suggest that temporal representation of echoic memory is non-linear and Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.
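
    The logarithmic dependencies reported here are the signature of a Weber-Fechner-type relation, response proportional to the log of the stimulus quantity. The toy fit below merely illustrates that form; the amplitude values are made up (the abstract reports correlations, not these numbers):

        import numpy as np

        # Standard-deviant intervals used in the study (ms) and hypothetical
        # change-N1 amplitudes (µV), decreasing with the log of the interval.
        interval_ms = np.array([1, 10, 100, 1000])
        amplitude_uv = np.array([5.8, 4.9, 4.1, 3.2])      # illustrative values only

        slope, intercept = np.polyfit(np.log10(interval_ms), amplitude_uv, 1)
        print(f"amplitude ~ {intercept:.2f} + ({slope:.2f}) * log10(interval in ms)")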

  4. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active "working memory" bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  5. Location coding by opponent neural populations in the auditory cortex.

    Directory of Open Access Journals (Sweden)

    G Christopher Stecker

    2005-03-01

    Full Text Available Although the auditory cortex plays a necessary role in sound localization, physiological investigations in the cortex reveal inhomogeneous sampling of auditory space that is difficult to reconcile with localization behavior under the assumption of local spatial coding. Most neurons respond maximally to sounds located far to the left or right side, with few neurons tuned to the frontal midline. Paradoxically, psychophysical studies show optimal spatial acuity across the frontal midline. In this paper, we revisit the problem of inhomogeneous spatial sampling in three fields of cat auditory cortex. In each field, we confirm that neural responses tend to be greatest for lateral positions, but show the greatest modulation for near-midline source locations. Moreover, identification of source locations based on cortical responses shows sharp discrimination of left from right but relatively inaccurate discrimination of locations within each half of space. Motivated by these findings, we explore an opponent-process theory in which sound-source locations are represented by differences in the activity of two broadly tuned channels formed by contra- and ipsilaterally preferring neurons. Finally, we demonstrate a simple model, based on spike-count differences across cortical populations, that provides bias-free, level-invariant localization, and thus also a solution to the "binding problem" of associating spatial information with other nonspatial attributes of sounds.
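
    In the opponent-process account, source laterality is carried by the difference in activity between two broadly tuned, oppositely preferring channels; normalizing that difference by the summed activity makes the read-out insensitive to overall level. The toy sketch below (sigmoid channel tuning and this particular read-out are assumptions for illustration, not the authors' fitted model) shows the level invariance:

        import numpy as np

        def channel_rate(azimuth_deg, preferred_side, level_gain=1.0):
            """Broadly tuned hemifield channel: sigmoid of azimuth, scaled by overall level."""
            slope = 0.05  # per degree, assumed
            return level_gain / (1.0 + np.exp(-slope * preferred_side * azimuth_deg))

        def opponent_code(azimuth_deg, level_gain):
            """Normalized difference of the two channels: a level-invariant laterality code."""
            right = channel_rate(azimuth_deg, +1, level_gain)
            left = channel_rate(azimuth_deg, -1, level_gain)
            return (right - left) / (right + left)

        for gain in (0.5, 1.0, 2.0):           # same source location, different sound levels
            print(f"gain {gain:.1f}: opponent code for +20 deg azimuth = "
                  f"{opponent_code(20.0, gain):+.3f}")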

  6. Organizing Principles of Human Cortical Development--Thickness and Area from 4 to 30 Years: Insights from Comparative Primate Neuroanatomy.

    Science.gov (United States)

    Amlien, Inge K; Fjell, Anders M; Tamnes, Christian K; Grydeland, Håkon; Krogsrud, Stine K; Chaplin, Tristan A; Rosa, Marcello G P; Walhovd, Kristine B

    2016-01-01

    The human cerebral cortex undergoes a protracted, regionally heterogeneous development well into young adulthood. Cortical areas that expand the most during human development correspond to those that differ most markedly when the brains of macaque monkeys and humans are compared. However, it remains unclear to what extent this relationship derives from allometric scaling laws that apply to primate brains in general, or represents unique evolutionary adaptations. Furthermore, it is unknown whether the relationship only applies to surface area (SA), or also holds for cortical thickness (CT). In 331 participants aged 4 to 30, we calculated age functions of SA and CT, and examined the correspondence of human cortical development with macaque to human expansion, and with expansion across nonhuman primates. CT followed a linear negative age function from 4 to 30 years, while SA showed positive age functions until 12 years with little further development. Differential cortical expansion across primates was related to regional maturation of SA and CT, with age trajectories differing between high- and low-expanding cortical regions. This relationship adhered to allometric scaling laws rather than representing uniquely macaque-human differences: regional correspondence with human development was as large for expansion across nonhuman primates as between humans and macaque. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Modulatory Effects of Attention on Lateral Inhibition in the Human Auditory Cortex.

    Science.gov (United States)

    Engell, Alva; Junghöfer, Markus; Stein, Alwina; Lau, Pia; Wunderlich, Robert; Wollbrink, Andreas; Pantev, Christo

    2016-01-01

    Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the study at hand aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we present two types of masking sounds (white noise vs. white noise passing through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds, while on the other half of the trials subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory compared to visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention in the modulation of this mechanism in more evaluative processing stages.
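
    The masking stimulus described here is broadband noise with a spectral notch framing the test-tone frequency. The snippet below sketches one way such a notched-noise masker could be generated; the center frequency, notch width, and filter order are all assumed values, not parameters taken from the study:

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 44100                     # audio sampling rate (Hz), assumed
        dur = 1.0                      # masker duration (s), assumed
        center = 1000.0                # test-tone frequency to be framed (Hz), assumed
        halfwidth = 0.15 * center      # notch half-width (Hz), assumed

        rng = np.random.default_rng(4)
        noise = rng.standard_normal(int(fs * dur))

        # Band-stop ("notch") filter whose stop band frames the test-tone frequency.
        sos = butter(4, [center - halfwidth, center + halfwidth],
                     btype="bandstop", fs=fs, output="sos")
        notched_noise = sosfiltfilt(sos, noise)
        notched_noise /= np.max(np.abs(notched_noise))   # normalize before presentation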

  8. Envelope enhancement increases cortical sensitivity to interaural envelope delays with acoustic and electric hearing.

    Directory of Open Access Journals (Sweden)

    Douglas E H Hartley

    Full Text Available Evidence from human psychophysical and animal electrophysiological studies suggests that sensitivity to interaural time delay (ITD) in the modulating envelope of a high-frequency carrier can be enhanced using half-wave rectified stimuli. Recent evidence has shown potential benefits of equivalent electrical stimuli to deaf individuals with bilateral cochlear implants (CIs). In the current study we assessed the effects of envelope shape on ITD sensitivity in the primary auditory cortex of normal-hearing ferrets, and of profoundly deaf animals with bilateral CIs. In normal-hearing animals, cortical sensitivity to ITDs (±1 ms in 0.1-ms steps) was assessed in response to dichotically presented (i) sinusoidal amplitude-modulated (SAM) and (ii) half-wave rectified (HWR) tones (100-ms duration; 70 dB SPL) presented at the best frequency of the unit over a range of modulation frequencies. In separate experiments, adult ferrets were deafened with neomycin administration and bilaterally implanted with intra-cochlear electrode arrays. Electrically-evoked auditory brainstem responses (EABRs) were recorded in response to bipolar electrical stimulation of the apical pair of electrodes with single biphasic current pulses (40 µs per phase) over a range of current levels to measure hearing thresholds. Subsequently, we recorded cortical sensitivity to ITDs (±800 µs in 80-µs steps) within the envelope of SAM and HWR biphasic-pulse trains (40 µs per phase; 6000 pulses per second; 100-ms duration) over a range of modulation frequencies. In normal-hearing animals, nearly a third of cortical neurons were sensitive to envelope ITDs in response to SAM tones. In deaf animals with bilateral CIs, the proportion of ITD-sensitive cortical neurons was approximately a fifth in response to SAM pulse trains. In normal-hearing and deaf animals with bilateral CIs, the proportion of ITD-sensitive units and neural sensitivity to ITDs increased in response to HWR, compared with SAM, stimuli.
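
    The stimulus manipulation at issue is the shape of the envelope carrying the ITD: a sinusoidal (SAM) envelope versus a half-wave rectified (HWR) envelope with sharper onsets. A minimal generation sketch follows, with carrier frequency, modulation rate and ITD chosen arbitrarily for illustration (not the study's acoustic parameters):

        import numpy as np

        fs = 48000
        dur = 0.1                                # 100-ms tokens, as in the study
        t = np.arange(0, dur, 1 / fs)
        f_carrier, f_mod = 4000.0, 100.0         # carrier and modulation rate (Hz), assumed

        def sam_envelope(tt):
            """Sinusoidal amplitude-modulation envelope."""
            return 0.5 * (1.0 + np.sin(2 * np.pi * f_mod * tt))

        def hwr_envelope(tt):
            """Half-wave rectified sinusoidal envelope (sharper, better-defined onsets)."""
            return np.maximum(np.sin(2 * np.pi * f_mod * tt), 0.0)

        def dichotic_pair(env_fn, itd_s):
            """Same carrier in both ears; only the envelope is delayed in one ear."""
            carrier = np.sin(2 * np.pi * f_carrier * t)
            left = env_fn(t) * carrier
            right = env_fn(t - itd_s) * carrier
            return left, right

        left, right = dichotic_pair(hwr_envelope, itd_s=400e-6)   # 400-µs envelope ITD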

  9. Is the auditory evoked P2 response a biomarker of learning?

    Directory of Open Access Journals (Sweden)

    Kelly eTremblay

    2014-02-01

    Full Text Available Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography and magnetoencephalography have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEPs), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time and stimulus exposure, that did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre-voicing contrast.

  10. Auditory Tones and Foot-Shock Recapitulate Spontaneous Sub-Threshold Activity in Basolateral Amygdala Principal Neurons and Interneurons.

    Directory of Open Access Journals (Sweden)

    François Windels

    Full Text Available In quiescent states such as anesthesia and slow wave sleep, cortical networks show slow rhythmic synchronized activity. In sensory cortices this rhythmic activity shows a stereotypical pattern that is recapitulated by stimulation of the appropriate sensory modality. The amygdala receives sensory input from a variety of sources, and in anesthetized animals, neurons in the basolateral amygdala (BLA) show slow rhythmic synchronized activity. Extracellular field potential recordings show that these oscillations are synchronized with sensory cortex and the thalamus, with both the thalamus and cortex leading the BLA. Using whole-cell recording in vivo we show that the membrane potential of principal neurons spontaneously oscillates between up- and down-states. Footshock and auditory stimulation delivered during down-states evoke an up-state that fully recapitulates those occurring spontaneously. These results suggest that neurons in the BLA receive convergent input from networks of cortical neurons with slow oscillatory activity and that somatosensory and auditory stimulation can trigger activity in these same networks.

  11. Tinnitus. I: Auditory mechanisms: a model for tinnitus and hearing impairment.

    Science.gov (United States)

    Hazell, J W; Jastreboff, P J

    1990-02-01

    A model is proposed for tinnitus and sensorineural hearing loss involving cochlear pathology. As tinnitus is defined as a cortical perception of sound in the absence of an appropriate external stimulus it must result from a generator in the auditory system which undergoes extensive auditory processing before it is perceived. The concept of spatial nonlinearity in the cochlea is presented as a cause of tinnitus generation controlled by the efferents. Various clinical presentations of tinnitus and the way in which they respond to changes in the environment are discussed with respect to this control mechanism. The concept of auditory retraining as part of the habituation process, and interaction with the prefrontal cortex and limbic system is presented as a central model which emphasizes the importance of the emotional significance and meaning of tinnitus.

  12. Pitch-Responsive Cortical Regions in Congenital Amusia.

    Science.gov (United States)

    Norman-Haignere, Sam V; Albouy, Philippe; Caclin, Anne; McDermott, Josh H; Kanwisher, Nancy G; Tillmann, Barbara

    2016-03-09

    Congenital amusia is a lifelong deficit in music perception thought to reflect an underlying impairment in the perception and memory of pitch. The neural basis of amusic impairments is actively debated. Some prior studies have suggested that amusia stems from impaired connectivity between auditory and frontal cortex. However, it remains possible that impairments in pitch coding within auditory cortex also contribute to the disorder, in part because prior studies have not measured responses from the cortical regions most implicated in pitch perception in normal individuals. We addressed this question by measuring fMRI responses in 11 subjects with amusia and 11 age- and education-matched controls to a stimulus contrast that reliably identifies pitch-responsive regions in normal individuals: harmonic tones versus frequency-matched noise. Our findings demonstrate that amusic individuals with a substantial pitch perception deficit exhibit clusters of pitch-responsive voxels that are comparable in extent, selectivity, and anatomical location to those of control participants. We discuss possible explanations for why amusics might be impaired at perceiving pitch relations despite exhibiting normal fMRI responses to pitch in their auditory cortex: (1) individual neurons within the pitch-responsive region might exhibit abnormal tuning or temporal coding not detectable with fMRI, (2) anatomical tracts that link pitch-responsive regions to other brain areas (e.g., frontal cortex) might be altered, and (3) cortical regions outside of pitch-responsive cortex might be abnormal. The ability to identify pitch-responsive regions in individual amusic subjects will make it possible to ask more precise questions about their role in amusia in future work. Copyright © 2016 the authors 0270-6474/16/362986-09$15.00/0.

  13. Analysis of the volumetric relationship among human ocular, orbital and fronto-occipital cortical morphology

    Science.gov (United States)

    Masters, Michael; Bruner, Emiliano; Queer, Sarah; Traynor, Sarah; Senjem, Jess

    2015-01-01

    Recent research on the visual system has focused on investigating the relationship among eye (ocular), orbital, and visual cortical anatomy in humans. This issue is relevant in evolutionary and medical fields. In terms of evolution, only in modern humans and Neandertals are the orbits positioned beneath the frontal lobes, with consequent structural constraints. In terms of medicine, such constraints can be associated with minor deformation of the eye, vision defects, and patterns of integration among these features, and in association with the frontal lobes, are important to consider in reconstructive surgery. Further study is therefore necessary to establish how these variables are related, and to what extent ocular size is associated with orbital and cerebral cortical volumes. Relationships among these anatomical components were investigated using magnetic resonance images from a large sample of 83 individuals, which also included each subject’s body height, age, sex, and uncorrected visual acuity score. Occipital and frontal gyri volumes were calculated using two different cortical parcellation tools in order to provide a better understanding of how the eye and orbit vary in relation to visual cortical gyri, and frontal cortical gyri which are not directly related to visual processing. Results indicated that ocular and orbital volumes were weakly correlated, and that eye volume explains only a small proportion of the variance in orbital volume. Ocular and orbital volumes were also found to be equally and, in most cases, more highly correlated with five frontal lobe gyri than with occipital lobe gyri associated with V1, V2, and V3 of the visual cortex. Additionally, after accounting for age and sex variation, the relationship between ocular and total visual cortical volume was no longer statistically significant, but remained significantly related to total frontal lobe volume. The relationship between orbital and visual cortical volumes remained significant for

  14. Analysis of the volumetric relationship among human ocular, orbital and fronto-occipital cortical morphology.

    Science.gov (United States)

    Masters, Michael; Bruner, Emiliano; Queer, Sarah; Traynor, Sarah; Senjem, Jess

    2015-10-01

    Recent research on the visual system has focused on investigating the relationship among eye (ocular), orbital, and visual cortical anatomy in humans. This issue is relevant in evolutionary and medical fields. In terms of evolution, only in modern humans and Neandertals are the orbits positioned beneath the frontal lobes, with consequent structural constraints. In terms of medicine, such constraints can be associated with minor deformation of the eye, vision defects, and patterns of integration among these features, and in association with the frontal lobes, are important to consider in reconstructive surgery. Further study is therefore necessary to establish how these variables are related, and to what extent ocular size is associated with orbital and cerebral cortical volumes. Relationships among these anatomical components were investigated using magnetic resonance images from a large sample of 83 individuals, which also included each subject's body height, age, sex, and uncorrected visual acuity score. Occipital and frontal gyri volumes were calculated using two different cortical parcellation tools in order to provide a better understanding of how the eye and orbit vary in relation to visual cortical gyri, and frontal cortical gyri which are not directly related to visual processing. Results indicated that ocular and orbital volumes were weakly correlated, and that eye volume explains only a small proportion of the variance in orbital volume. Ocular and orbital volumes were also found to be equally and, in most cases, more highly correlated with five frontal lobe gyri than with occipital lobe gyri associated with V1, V2, and V3 of the visual cortex. Additionally, after accounting for age and sex variation, the relationship between ocular and total visual cortical volume was no longer statistically significant, but remained significantly related to total frontal lobe volume. The relationship between orbital and visual cortical volumes remained significant for a

  15. Ventilatory response to induced auditory arousals during NREM sleep.

    Science.gov (United States)

    Badr, M S; Morgan, B J; Finn, L; Toiber, F S; Crabtree, D C; Puleo, D S; Skatrud, J B

    1997-09-01

    Sleep state instability is a potential mechanism of central apnea/hypopnea during non-rapid eye movement (NREM) sleep. To investigate this postulate, we induced brief arousals by delivering transient (0.5 second) auditory stimuli during stable NREM sleep in eight normal subjects. Arousal was determined according to American Sleep Disorders Association (ASDA) criteria. A total of 96 trials were conducted; 59 resulted in cortical arousal and 37 did not result in arousal. In trials associated with arousal, minute ventilation (VE) increased from 5.1 +/- 1.24 l/minute to 7.5 +/- 2.24 l/minute on the first posttone breath (p = 0.001). However, no subsequent hypopnea or apnea occurred as VE decreased gradually to 4.8 +/- 1.5 l/minute (p > 0.05) on the fifth posttone breath. Trials without arousal did not result in hyperpnea on the first breath nor subsequent hypopnea. We conclude that 1) auditory stimulation resulted in transient hyperpnea only if associated with cortical arousal; 2) hypopnea or apnea did not occur following arousal-induced hyperpnea in normal subjects; 3) interaction with fluctuating chemical stimuli or upper airway resistance may be required for arousals to cause sleep-disordered breathing.

  16. Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2015-03-01

    This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of auditory tasks, as well as the association between elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  17. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, both auditory and visual cortex tDCS did not produce any measurable effects on auditory TRE. Our study revealed different nature of TRE in auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Evidence for cortical structural plasticity in humans after a day of waking and sleep deprivation.

    Science.gov (United States)

    Elvsåshagen, Torbjørn; Zak, Nathalia; Norbom, Linn B; Pedersen, Per Ø; Quraishi, Sophia H; Bjørnerud, Atle; Alnæs, Dag; Doan, Nhat Trung; Malt, Ulrik F; Groote, Inge R; Westlye, Lars T

    2017-08-01

    Sleep is an evolutionarily conserved process required for human health and functioning. Insufficient sleep causes impairments across cognitive domains, and sleep deprivation can have rapid antidepressive effects in mood disorders. However, the neurobiological effects of waking and sleep are not well understood. Recently, animal studies indicated that waking and sleep are associated with substantial cortical structural plasticity. Here, we hypothesized that structural plasticity can be observed after a day of waking and sleep deprivation in the human cerebral cortex. To test this hypothesis, 61 healthy adult males underwent structural magnetic resonance imaging (MRI) at three time points: in the morning after a regular night's sleep, the evening of the same day, and the next morning, either after total sleep deprivation (N=41) or a night of sleep (N=20). We found significantly increased right prefrontal cortical thickness from morning to evening across all participants. In addition, pairwise comparisons in the deprived group between the two morning scans showed significant thinning of mainly bilateral medial parietal cortices after 23 h of sleep deprivation, including the precuneus and posterior cingulate cortex. However, there were no significant group (sleep vs. sleep deprived group) by time interactions and we can therefore not rule out that other mechanisms than sleep deprivation per se underlie the bilateral medial parietal cortical thinning observed in the deprived group. Nonetheless, these cortices are thought to subserve wakefulness, are among the brain regions with the highest metabolic rate during wake, and are considered some of the most sensitive cortical regions to a variety of insults. Furthermore, greater thinning within the left medial parietal cluster was associated with increased sleepiness after sleep deprivation. Together, these findings add to a growing body of data showing rapid structural plasticity within the human cerebral cortex detectable with MRI.

  19. Strain differences of the effect of enucleation and anophthalmia on the size and growth of sensory cortices in mice.

    Science.gov (United States)

    Massé, Ian O; Guillemette, Sonia; Laramée, Marie-Eve; Bronchti, Gilles; Boire, Denis

    2014-11-07

    Anophthalmia is a condition in which the eye does not develop from the early embryonic period. Early blindness induces cross-modal plastic modifications in the brain, such as auditory and haptic activation of the visual cortex, and also leads to a greater solicitation of the somatosensory and auditory cortices. The visual cortex is activated by auditory stimuli in anophthalmic mice, and activity is known to alter the growth pattern of the cerebral cortex. The sizes of the primary visual, auditory and somatosensory cortices and of the corresponding specific sensory thalamic nuclei were measured in intact and enucleated C57Bl/6J mice and in ZRDCT anophthalmic mice (ZRDCT/An) to evaluate the contribution of cross-modal activity to the growth of the cerebral cortex. In addition, the sizes of these structures were compared in intact, enucleated and anophthalmic fourth-generation backcrossed hybrid C57Bl/6J×ZRDCT/An mice to parse out the effects of mouse strain and of the different visual deprivations. The visual cortex was smaller in the anophthalmic ZRDCT/An than in the intact and enucleated C57Bl/6J mice. The auditory cortex was also larger, and the somatosensory cortex smaller, in the ZRDCT/An than in the intact and enucleated C57Bl/6J mice. The size differences of sensory cortices between the enucleated and anophthalmic mice were no longer present in the hybrid mice, showing specific genetic differences between C57Bl/6J and ZRDCT mice. The postnatal size increase of the visual cortex was smaller in the enucleated than in the anophthalmic and intact hybrid mice. This suggests differences in the activity of the visual cortex between enucleated and anophthalmic mice, and that early in-utero spontaneous neural activity in the visual system contributes to the shaping of functional properties of cortical networks. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Acute cortical deafness in a child with MELAS syndrome.

    Science.gov (United States)

    Pittet, Marie P; Idan, Roni B; Kern, Ilse; Guinand, Nils; Van, Hélène Cao; Toso, Seema; Fluss, Joël

    2016-05-01

    Auditory impairment in mitochondrial disorders is usually due to peripheral sensorineural dysfunction. Central deafness is only rarely reported. We report here an 11-year-old boy with MELAS syndrome who presented with subacute deafness after waking up from sleep. Peripheral hearing loss was rapidly excluded. A brain MRI documented bilateral stroke-like lesions predominantly affecting the superior temporal lobe, including the primary auditory cortex, confirming the central nature of the deafness. Slow recovery was observed in the following weeks. This case serves to illustrate the numerous challenges caused by MELAS and the unusual occurrence of acute cortical deafness, which, to our knowledge, has not been described before in a child in this setting.

  1. Modulatory Effects of Attention on Lateral Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Alva Engell

    Full Text Available Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the study at hand aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we presented two types of masking sounds (white noise vs. white noise passed through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds, while on the other half of the trials subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory compared to visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention in the modulation of this mechanism in more evaluative processing stages.
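
    The notch-filtered masker described above can be approximated in a few lines. The following Python sketch (illustrative only, not the study's stimulus code) builds white noise with a spectral notch centered on the test-tone frequency; the sampling rate, notch center and notch width are arbitrary example values.

        import numpy as np

        fs = 44100                                   # sampling rate in Hz (assumed)
        rng = np.random.default_rng(0)
        noise = rng.normal(size=fs)                  # 1 s of white noise

        center, half_width = 1000.0, 200.0           # notch center and half-width in Hz (example values)
        spectrum = np.fft.rfft(noise)
        freqs = np.fft.rfftfreq(noise.size, d=1.0 / fs)
        # zero the spectral band around the notch center, then transform back to the time domain
        spectrum[(freqs > center - half_width) & (freqs < center + half_width)] = 0.0
        notched_noise = np.fft.irfft(spectrum, n=noise.size)

        test_tone = np.sin(2 * np.pi * center * np.arange(fs) / fs)   # test tone at the notch center frequency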

  2. Non-linear laws of echoic memory and auditory change detection in humans

    Directory of Open Access Journals (Sweden)

    Takeshima Yasuyuki

    2010-07-01

    Full Text Available Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. Conclusions The present findings suggest that the temporal representation of echoic memory is non-linear and that the Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.
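
    The reported dependencies amount to a logarithmic (Weber-Fechner-type) relationship between change-N1 amplitude and the physical parameters of the stimuli. The short Python sketch below shows how such a relationship can be summarized by regressing amplitude on the logarithm of deviance magnitude; all numbers are hypothetical and are not taken from the study.

        import numpy as np

        # hypothetical change-N1 amplitudes (microvolts) for different sound-pressure deviances (dB)
        deviance_db = np.array([2.5, 5.0, 10.0, 20.0])
        amplitude_uv = np.array([1.1, 1.9, 2.8, 3.9])

        # fit amplitude = a * log(deviance) + b, a Weber-Fechner-type model
        a, b = np.polyfit(np.log(deviance_db), amplitude_uv, 1)
        print(f"a = {a:.2f} uV per log unit, b = {b:.2f} uV")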

  3. Functional connectivity-based parcellation and connectome of cortical midline structures in the mouse: a perfusion autoradiography study.

    Science.gov (United States)

    Holschneider, Daniel P; Wang, Zhuo; Pang, Raina D

    2014-01-01

    Rodent cortical midline structures (CMS) are involved in emotional, cognitive and attentional processes. Tract tracing has revealed complex patterns of structural connectivity demonstrating connectivity-based integration and segregation for the prelimbic, cingulate area 1, retrosplenial dysgranular cortices dorsally, and infralimbic, cingulate area 2, and retrosplenial granular cortices ventrally. Understanding of CMS functional connectivity (FC) remains more limited. Here we present the first subregion-level FC analysis of the mouse CMS, and assess whether fear results in state-dependent FC changes analogous to what has been reported in humans. Brain mapping using [(14)C]-iodoantipyrine was performed in mice during auditory-cued fear conditioned recall and in controls. Regional cerebral blood flow (CBF) was analyzed in 3-D images reconstructed from brain autoradiographs. Regions-of-interest were selected along the CMS anterior-posterior and dorsal-ventral axes. In controls, pairwise correlation and graph theoretical analyses showed strong FC within each CMS structure, strong FC along the dorsal-ventral axis, with segregation of anterior from posterior structures. Seed correlation showed FC of anterior regions to limbic/paralimbic areas, and FC of posterior regions to sensory areas-findings consistent with functional segregation noted in humans. Fear recall increased FC between the cingulate and retrosplenial cortices, but decreased FC between dorsal and ventral structures. In agreement with reports in humans, fear recall broadened FC of anterior structures to the amygdala and to somatosensory areas, suggesting integration and processing of both limbic and sensory information. Organizational principles learned from animal models at the mesoscopic level (brain regions and pathways) will not only critically inform future work at the microscopic (single neurons and synapses) level, but also have translational value to advance our understanding of human brain
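
    The core functional connectivity measures mentioned above (pairwise inter-regional correlation and seed correlation of regional CBF values across animals) can be illustrated with a brief Python sketch. Region names and data below are fabricated placeholders; this is not the authors' analysis pipeline.

        import numpy as np

        rng = np.random.default_rng(0)
        regions = ["prelimbic", "cingulate_1", "retrosplenial_dys",
                   "infralimbic", "cingulate_2", "retrosplenial_gran"]
        cbf = rng.normal(100, 10, size=(12, len(regions)))   # 12 animals x 6 regions (fabricated values)

        # pairwise inter-regional correlation matrix (functional connectivity)
        fc = np.corrcoef(cbf, rowvar=False)

        # seed correlation: correlation of one seed region with every other region
        seed = regions.index("cingulate_1")
        for name, r in zip(regions, fc[seed]):
            print(f"{name:>20s}  r = {r:+.2f}")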

  4. Functional connectivity-based parcellation and connectome of cortical midline structures in the mouse: a perfusion autoradiography study

    Directory of Open Access Journals (Sweden)

    Daniel P Holschneider

    2014-06-01

    Full Text Available Rodent cortical midline structures (CMS) are involved in emotional, cognitive and attentional processes. Tract tracing has revealed complex patterns of structural connectivity demonstrating connectivity-based integration and segregation for the prelimbic, cingulate area 1, retrosplenial dysgranular cortices dorsally, and infralimbic, cingulate area 2, and retrosplenial granular cortices ventrally. Understanding of CMS functional connectivity (FC) remains more limited. Here we present the first subregion-level FC analysis of the mouse CMS, and assess whether fear results in state-dependent FC changes analogous to what has been reported in humans. Brain mapping using [14C]-iodoantipyrine was performed in mice during auditory-cued fear conditioned recall and in controls. Regional cerebral blood flow was analyzed in 3-D images reconstructed from brain autoradiographs. Regions-of-interest were selected along the CMS anterior-posterior and dorsal-ventral axes. In controls, pairwise correlation and graph theoretical analyses showed strong FC within each CMS structure, strong FC along the dorsal-ventral axis, with segregation of anterior from posterior structures. Seed correlation showed FC of anterior regions to limbic/paralimbic areas, and FC of posterior regions to sensory areas--findings consistent with functional segregation noted in humans. Fear recall increased FC between the cingulate and retrosplenial cortices, but decreased FC between dorsal and ventral structures. In agreement with reports in humans, fear recall broadened FC of anterior structures to the amygdala and to somatosensory areas, suggesting integration and processing of both limbic and sensory information. Organizational principles learned from animal models at the mesoscopic level (brain regions and pathways) will not only critically inform future work at the microscopic (single neurons and synapses) level, but also have translational value to advance our understanding of human brain

  5. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  6. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  7. Attention effects at auditory periphery derived from human scalp potentials: displacement measure of potentials.

    Science.gov (United States)

    Ikeda, Kazunari; Hayashi, Akiko; Sekiguchi, Takahiro; Era, Shukichi

    2006-10-01

    In humans, it is difficult to identify the attention effect at the auditory periphery with electrophysiological measures such as the auditory brainstem response (ABR), whereas the centrifugal effect has been detected by measuring otoacoustic emissions. This research developed a measure responsive to the shift of human scalp potentials within a brief post-stimulus period (13 ms), namely the displacement percentage, and applied it in an experiment to retrieve the peripheral attention effect. In the experimental paradigm, tone pips were presented to the left ear while the other ear was masked by white noise. Twelve participants each completed two conditions, either ignoring or attending to the tone pips. Relative to the averaged scalp potentials in the ignoring condition, a shift of the potentials was found within the early-component range during the attentive condition, and the displacement percentage revealed a significant magnitude difference between the two conditions. These results suggest that, using a measure representing the potential shift itself, the peripheral effect of attention can be detected from human scalp potentials.

  8. Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention

    Directory of Open Access Journals (Sweden)

    Hsin-I Liao

    2016-02-01

    Full Text Available A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and the novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residual attention on to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention.

  9. Human Pupillary Dilation Response to Deviant Auditory Stimuli: Effects of Stimulus Properties and Voluntary Attention.

    Science.gov (United States)

    Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto

    2016-01-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and the novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residual attention on to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention.

  10. Recruitment of the auditory cortex in congenitally deaf cats by long-term cochlear electrostimulation.

    Science.gov (United States)

    Klinke, R; Kral, A; Heid, S; Tillein, J; Hartmann, R

    1999-09-10

    In congenitally deaf cats, the central auditory system is deprived of acoustic input because of degeneration of the organ of Corti before the onset of hearing. Primary auditory afferents survive and can be stimulated electrically. By means of an intracochlear implant and an accompanying sound processor, congenitally deaf kittens were exposed to sounds and conditioned to respond to tones. After months of exposure to meaningful stimuli, the cortical activity in chronically implanted cats produced field potentials of higher amplitudes, expanded in area, developed long latency responses indicative of intracortical information processing, and showed more synaptic efficacy than in naïve, unstimulated deaf cats. The activity established by auditory experience resembles activity in hearing animals.

  11. Early musical training is linked to gray matter structure in the ventral premotor cortex and auditory-motor rhythm synchronization performance.

    Science.gov (United States)

    Bailey, Jennifer Anne; Zatorre, Robert J; Penhune, Virginia B

    2014-04-01

    Evidence in animals and humans indicates that there are sensitive periods during development, times when experience or stimulation has a greater influence on behavior and brain structure. Sensitive periods are the result of an interaction between maturational processes and experience-dependent plasticity mechanisms. Previous work from our laboratory has shown that adult musicians who begin training before the age of 7 show enhancements in behavior and white matter structure compared with those who begin later. Plastic changes in white matter and gray matter are hypothesized to co-occur; therefore, the current study investigated possible differences in gray matter structure between early-trained (ET; <7 years) and late-trained (LT; >7 years) musicians, matched for years of experience. Gray matter structure was assessed using voxel-wise analysis techniques (optimized voxel-based morphometry, traditional voxel-based morphometry, and deformation-based morphometry) and surface-based measures (cortical thickness, surface area and mean curvature). Deformation-based morphometry analyses identified group differences between ET and LT musicians in right ventral premotor cortex (vPMC), which correlated with performance on an auditory-motor synchronization task and with age of onset of musical training. In addition, cortical surface area in vPMC was greater for ET musicians. These results are consistent with evidence that premotor cortex shows greatest maturational change between the ages of 6-9 years and that this region is important for integrating auditory and motor information. We propose that the auditory and motor interactions required by musical practice drive plasticity in vPMC and that this plasticity is greatest when maturation is near its peak.

  12. Dissociable Changes of Frontal and Parietal Cortices in Inherent Functional Flexibility across the Human Life Span.

    Science.gov (United States)

    Yin, Dazhi; Liu, Wenjing; Zeljic, Kristina; Wang, Zhiwei; Lv, Qian; Fan, Mingxia; Cheng, Wenhong; Wang, Zheng

    2016-09-28

    Extensive evidence suggests that frontoparietal regions can dynamically update their pattern of functional connectivity, supporting cognitive control and adaptive implementation of task demands. However, it is largely unknown whether this flexible functional reconfiguration is intrinsic and occurs even in the absence of overt tasks. Based on recent advances in the dynamics of resting-state functional magnetic resonance imaging (fMRI), we propose a probabilistic framework in which dynamic reconfiguration of intrinsic functional connectivity between each brain region and others can be represented as a probability distribution. A complexity measure (i.e., entropy) was used to quantify functional flexibility, which characterizes heterogeneous connectivity between a particular region and others over time. Following this framework, we identified both functionally flexible and specialized regions over the human life span (112 healthy subjects from 13 to 76 years old). Across brainwide regions, we found regions showing high flexibility mainly in the higher-order association cortex, such as the lateral prefrontal cortex (LPFC), lateral parietal cortex, and lateral temporal lobules. In contrast, visual, auditory, and sensory areas exhibited low flexibility. Furthermore, we observed that flexibility of the right LPFC improved during maturation and declined with normal aging, with the opposite occurring for the left lateral parietal cortex. Our findings reveal dissociable changes of frontal and parietal cortices over the life span in terms of inherent functional flexibility. This study not only provides a new framework to quantify the spatiotemporal behavior of spontaneous brain activity, but also sheds light on the organizational principle behind changes in brain function across the human life span. Recent neuroscientific research has demonstrated that the human capability of adaptive task control is primarily the result of the flexible operation of frontal brain networks. However
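
    One way to make the entropy-based flexibility measure concrete is sketched below in Python: a region's time-varying connectivity is reduced to a probability distribution over "preferred partner" regions across sliding windows, and flexibility is taken as the Shannon entropy of that distribution. This is a simplified reading of the framework described above, with fabricated data and arbitrary window parameters, not the authors' implementation.

        import numpy as np

        def flexibility_entropy(timeseries, window=30, step=5):
            """timeseries: array of shape (n_timepoints, n_regions); flexibility of region 0."""
            n_regions = timeseries.shape[1]
            partners = []
            for s in range(0, timeseries.shape[0] - window + 1, step):
                fc = np.corrcoef(timeseries[s:s + window].T)   # windowed connectivity matrix
                np.fill_diagonal(fc, -np.inf)
                partners.append(int(np.argmax(fc[0])))          # strongest partner of region 0 in this window
            # probability distribution over partners, then Shannon entropy (bits)
            p = np.bincount(partners, minlength=n_regions) / len(partners)
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        data = np.random.default_rng(1).normal(size=(300, 10))  # fabricated resting-state-like data
        print(f"flexibility (bits): {flexibility_entropy(data):.2f}")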

  13. Evolutionary conservation and neuronal mechanisms of auditory perceptual restoration.

    Science.gov (United States)

    Petkov, Christopher I; Sutter, Mitchell L

    2011-01-01

    Auditory perceptual 'restoration' occurs when the auditory system restores an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago using it to model a noisy environmental scene with competing sounds. It has become clear that not only humans experience auditory restoration: restoration has been broadly conserved in many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The aggregate of data resulting from multiple approaches across species has begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, supportive of multiple mechanisms working within a species. Yet a general principle has emerged that responses correlated with restoration mimic the response that would have been given to the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of 'auditory scene analysis' to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings. © 2010 Elsevier B.V. All rights reserved.

  14. Automatic segmentation of human cortical layer-complexes and architectural areas using diffusion MRI and its validation

    Directory of Open Access Journals (Sweden)

    Matteo Bastiani

    2016-11-01

    Full Text Available Recently, several magnetic resonance imaging contrast mechanisms have been shown to distinguish cortical substructure corresponding to selected cortical layers. Here, we investigate cortical layer and area differentiation by automated unsupervised clustering of high-resolution diffusion MRI data. Several groups of adjacent layers could be distinguished in human primary motor and premotor cortex. We then used the signature of diffusion MRI signals along cortical depth as a criterion to detect area boundaries and find borders at which the signature changes abruptly. We validate our clustering results by histological analysis of the same tissue. These results confirm earlier studies showing that diffusion MRI can probe layer-specific intracortical fiber organization and, moreover, suggest that it contains enough information to automatically classify architecturally distinct cortical areas. We discuss the strengths and weaknesses of the automatic clustering approach and its appeal for MR-based cortical histology.
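
    As a rough illustration of the clustering idea, the Python sketch below applies k-means (via scikit-learn, an assumed dependency) to fabricated cortical-depth profiles of a diffusion-derived measure. The published approach is more elaborate; this is only a minimal stand-in.

        import numpy as np
        from sklearn.cluster import KMeans   # assumed dependency

        rng = np.random.default_rng(2)
        n_profiles, n_depths = 200, 20       # sampled cortical locations x depth bins (pial to white matter)
        depth = np.linspace(0, 1, n_depths)

        # fabricate two profile "types" (e.g., differing mid-depth anisotropy) plus noise
        proto_a = 0.3 + 0.4 * np.exp(-((depth - 0.4) ** 2) / 0.02)
        proto_b = 0.3 + 0.4 * np.exp(-((depth - 0.7) ** 2) / 0.02)
        profiles = np.vstack([proto_a + rng.normal(0, 0.05, (100, n_depths)),
                              proto_b + rng.normal(0, 0.05, (100, n_depths))])

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
        print(np.bincount(labels))           # locations assigned to each putative layer-complex/area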

  15. Auditory-somatosensory bimodal stimulation desynchronizes brain circuitry to reduce tinnitus in guinea pigs and humans.

    Science.gov (United States)

    Marks, Kendra L; Martel, David T; Wu, Calvin; Basura, Gregory J; Roberts, Larry E; Schvartz-Leyzac, Kara C; Shore, Susan E

    2018-01-03

    The dorsal cochlear nucleus is the first site of multisensory convergence in mammalian auditory pathways. Principal output neurons, the fusiform cells, integrate auditory nerve inputs from the cochlea with somatosensory inputs from the head and neck. In previous work, we developed a guinea pig model of tinnitus induced by noise exposure and showed that the fusiform cells in these animals exhibited increased spontaneous activity and cross-unit synchrony, which are physiological correlates of tinnitus. We delivered repeated bimodal auditory-somatosensory stimulation to the dorsal cochlear nucleus of guinea pigs with tinnitus, choosing a stimulus interval known to induce long-term depression (LTD). Twenty minutes per day of LTD-inducing bimodal (but not unimodal) stimulation reduced physiological and behavioral evidence of tinnitus in the guinea pigs after 25 days. Next, we applied the same bimodal treatment to 20 human subjects with tinnitus using a double-blinded, sham-controlled, crossover study. Twenty-eight days of LTD-inducing bimodal stimulation reduced tinnitus loudness and intrusiveness. Unimodal auditory stimulation did not deliver either benefit. Bimodal auditory-somatosensory stimulation that induces LTD in the dorsal cochlear nucleus may hold promise for suppressing chronic tinnitus, which reduces quality of life for millions of tinnitus sufferers worldwide. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  16. Transmodal comparison of auditory, motor, and visual post-processing with and without intentional short-term memory maintenance.

    Science.gov (United States)

    Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias

    2010-12-01

    To elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory. We compared late negative waves (N700) during the post-processing of single lateralized stimuli, which were separated by long intertrial intervals, across the auditory, motor and visual modalities. Tasks either required or competed with attention to the post-processing of preceding events, i.e. active short-term memory maintenance. N700 indicated that cortical post-processing outlasted short movements, as well as short auditory or visual stimuli, by over half a second even without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time-courses across the different modalities. Lateralization and amplitude of the auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time-course and modality-specific topography of the N700 without intentional memory maintenance. Memory-maintenance-related lateralized negative potentials may be related to a less lateralized, modality-dependent post-processing N700 component which also occurs without intentional memory maintenance (automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  17. Motion processing after sight restoration: No competition between visual recovery and auditory compensation.

    Science.gov (United States)

    Bottari, Davide; Kekunnaya, Ramesh; Hense, Marlene; Troje, Nikolaus F; Sourav, Suddha; Röder, Brigitte

    2018-02-15

    The present study tested whether or not functional adaptations following congenital blindness are maintained in humans after sight-restoration and whether they interfere with visual recovery. In permanently congenital blind individuals both intramodal plasticity (e.g. changes in auditory cortex) as well as crossmodal plasticity (e.g. an activation of visual cortex by auditory stimuli) have been observed. Both phenomena were hypothesized to contribute to improved auditory functions. For example, it has been shown that early permanently blind individuals outperform sighted controls in auditory motion processing and that auditory motion stimuli elicit activity in typical visual motion areas. Yet it is unknown what happens to these behavioral adaptations and cortical reorganizations when sight is restored, that is, whether compensatory auditory changes are lost and to which degree visual motion processing is reinstalled. Here we employed a combined behavioral-electrophysiological approach in a group of sight-recovery individuals with a history of a transient phase of congenital blindness lasting for several months to several years. They, as well as two control groups, one with visual impairments, one normally sighted, were tested in a visual and an auditory motion discrimination experiment. Task difficulty was manipulated by varying the visual motion coherence and the signal to noise ratio, respectively. The congenital cataract-reversal individuals showed lower performance in the visual global motion task than both control groups. At the same time, they outperformed both control groups in auditory motion processing suggesting that at least some compensatory behavioral adaptation as a consequence of a complete blindness from birth was maintained. Alpha oscillatory activity during the visual task was significantly lower in congenital cataract reversal individuals and they did not show ERPs modulated by visual motion coherence as observed in both control groups. In

  18. Primary Generators of Visually Evoked Field Potentials Recorded in the Macaque Auditory Cortex

    Science.gov (United States)

    Smiley, John F.; Schroeder, Charles E.

    2017-01-01

    Prior studies have reported “local” field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode, they often reflect those of synaptic activity occurring in distant sites as well. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be “contaminated” by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of visual FP with cortical depth, suggesting that their generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such “far-field” activity, in addition to, or in absence of, local synaptic responses. SIGNIFICANCE STATEMENT Field potentials (FPs) can index neuronal population activity that is not evident in action potentials. However, due to volume conduction, FPs may reflect activity in distant neurons superimposed upon that of neurons close to the recording electrode. This is
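
    The laminar CSD analysis referred to above is conventionally estimated as the sign-inverted second spatial derivative of the field potential across equally spaced contacts. A minimal Python sketch of that standard estimate follows; contact spacing, conductivity and the data themselves are placeholder values, not those of the study.

        import numpy as np

        lfp = np.random.default_rng(3).normal(size=(24, 1000))  # 24 contacts x 1000 time samples (placeholder)
        h = 0.1e-3        # inter-contact spacing in meters (100 micrometers, assumed)
        sigma = 0.3       # tissue conductivity in S/m (assumed constant)

        # CSD(z) ~= -sigma * [phi(z+h) - 2*phi(z) + phi(z-h)] / h^2, defined for interior contacts
        csd = -sigma * (lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]) / h**2
        print(csd.shape)  # (22, 1000): one CSD trace per interior contact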

  19. Level-Dependent Nonlinear Hearing Protector Model in the Auditory Hazard Assessment Algorithm for Humans

    Science.gov (United States)

    2015-04-01

    HPD model. In an article on measuring HPD attenuation, Berger (1986) points out that Real Ear Attenuation at Threshold (REAT) tests are... Fedele P, Binseel M, Kalb J, Price GR. Using the auditory hazard assessment algorithm for humans (AHAAH) with

  20. Neuroimaging in human MDMA (Ecstasy) users: A cortical model

    Science.gov (United States)

    Cowan, Ronald L; Roberts, Deanne M; Joers, James M

    2009-01-01

    MDMA (3,4 methylenedioxymethamphetamine) has been used by millions of people worldwide as a recreational drug. MDMA and Ecstasy are often used synonymously but it is important to note that the purity of Ecstasy sold as MDMA is not certain. MDMA use is of public health concern, not so much because MDMA produces a common or severe dependence syndrome, but rather because rodent and non-human primate studies have indicated that MDMA (when administered at certain dosages and intervals) can cause long-lasting reductions in markers of brain serotonin (5-HT) that appear specific to fine diameter axons arising largely from the dorsal raphe nucleus (DR). Given the popularity of MDMA, the potential for the drug to produce long-lasting or permanent 5-HT axon damage or loss, and the widespread role of 5-HT function in the brain, there is a great need for a better understanding of brain function in human users of this drug. To this end, neuropsychological, neuroendocrine, and neuroimaging studies have all suggested that human MDMA users may have long-lasting changes in brain function consistent with 5-HT toxicity. Data from animal models leads to testable hypotheses regarding MDMA effects on the human brain. Because neuropsychological and neuroimaging findings have focused on the neocortex, a cortical model is developed to provide context for designing and interpreting neuroimaging studies in MDMA users. Aspects of the model are supported by the available neuroimaging data but there are controversial findings in some areas and most findings have not been replicated across different laboratories and using different modalities. This paper reviews existing findings in the context of a cortical model and suggests directions for future research. PMID:18991874

  1. Higher cortical modulation of pain perception in the human brain: Psychological determinant

    OpenAIRE

    Chen, Andrew Cn

    2009-01-01

    Pain perception and its genesis in the human brain have been reviewed recently. In the current article, the reports on pain modulation in the human brain were reviewed from higher cortical regulation, i.e. top-down effect, particularly studied in psychological determinants. Pain modulation can be examined by gene therapy, physical modulation, pharmacological modulation, psychological modulation, and pathophysiological modulation. In psychological modulation, this article examined (a) willed d...

  2. Oscillatory Hierarchy Controlling Cortical Excitability and Stimulus Integration

    Science.gov (United States)

    Shah, A. S.; Lakatos, P.; McGinnis, T.; O'Connell, N.; Mills, A.; Knuth, K. H.; Chen, C.; Karmos, G.; Schroeder, C. E.

    2004-01-01

    Cortical gamma band oscillations have been recorded in sensory cortices of cats and monkeys, and are thought to aid in perceptual binding. Gamma activity has also been recorded in the rat hippocampus and entorhinal cortex, where it has been shown that field gamma power is modulated at theta frequency. Since the power of gamma activity in the sensory cortices is not constant (gamma bursts), we decided to examine the relationship between gamma power and the phase of low-frequency oscillation in the auditory cortex of the awake macaque. Macaque monkeys were surgically prepared for chronic awake electrophysiological recording. During the experiments, linear array multielectrodes were inserted in area AI to obtain laminar current source density (CSD) and multiunit activity profiles. Instantaneous theta and gamma power and phase were extracted by applying the Morlet wavelet transformation to the CSD. Gamma power was averaged for every 1 degree of low-frequency oscillation phase to calculate the power-phase relation. Both gamma and theta-delta power are largest in the supragranular layers. Power modulation of gamma activity is phase-locked to spontaneous, as well as stimulus-related, local theta and delta field oscillations. Our analysis also revealed that the power of theta oscillations is always largest at a certain phase of the delta oscillation. Auditory stimuli produce evoked responses in the theta band (i.e., there is pre- to post-stimulus addition of theta power), but there is also indication that stimuli may cause partial phase re-setting of spontaneous delta (and thus also theta and gamma) oscillations. We also show that spontaneous oscillations might play a role in the processing of incoming sensory signals by 'preparing' the cortex.
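
    The wavelet-based power-phase analysis described above can be sketched as follows: extract instantaneous gamma power and low-frequency phase with complex Morlet wavelets, then average gamma power in 1-degree phase bins. The Python example uses a fabricated signal and arbitrary parameters; it illustrates the method only and does not reproduce the recordings.

        import numpy as np

        def morlet_transform(x, freq, fs, n_cycles=6):
            """Complex Morlet wavelet transform of x at a single frequency (plain NumPy)."""
            sigma_t = n_cycles / (2 * np.pi * freq)
            tw = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)
            wavelet = np.exp(2j * np.pi * freq * tw) * np.exp(-tw ** 2 / (2 * sigma_t ** 2))
            wavelet /= np.sum(np.abs(wavelet))
            return np.convolve(x, wavelet, mode="same")

        fs = 1000.0
        t = np.arange(0, 10, 1.0 / fs)
        rng = np.random.default_rng(1)
        # fabricated signal: a 4 Hz rhythm plus 40 Hz gamma bursts locked to its peaks
        theta = np.cos(2 * np.pi * 4 * t)
        signal = theta + (1 + theta) * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 0.5, t.size)

        gamma_power = np.abs(morlet_transform(signal, 40.0, fs)) ** 2
        theta_phase = np.degrees(np.angle(morlet_transform(signal, 4.0, fs))) % 360

        # average gamma power in 1-degree bins of theta phase
        bin_idx = theta_phase.astype(int) % 360
        binned = np.bincount(bin_idx, weights=gamma_power, minlength=360) / np.bincount(bin_idx, minlength=360)
        print(f"gamma power peaks near theta phase {int(np.argmax(binned))} deg")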

  3. Hypothesis-driven methods to augment human cognition by optimizing cortical oscillations

    Directory of Open Access Journals (Sweden)

    Jörn M. Horschig

    2014-06-01

    Full Text Available Cortical oscillations have been shown to represent fundamental functions of a working brain, e.g. communication, stimulus binding, error monitoring, and inhibition, and are directly linked to behavior. Recent studies intervening with these oscillations have demonstrated effective modulation of both the oscillations and behavior. In this review, we collect evidence in favor of how hypothesis-driven methods can be used to augment cognition by optimizing cortical oscillations. We elaborate their potential usefulness for three target groups: healthy elderly, patients with attention deficit/hyperactivity disorder, and healthy young adults. We discuss the relevance of neuronal oscillations in each group and show how each of them can benefit from the manipulation of functionally-related oscillations. Further, we describe methods for manipulation of neuronal oscillations including direct brain stimulation as well as indirect task alterations. We also discuss practical considerations about the proposed techniques. In conclusion, we propose that insights from neuroscience should guide techniques to augment human cognition, which in turn can provide a better understanding of how the human brain works.

  4. Cortical control of gait in healthy humans: an fMRI study

    International Nuclear Information System (INIS)

    ChiHong, Wang; YauYau, Wai; BoCheng, Kuo; Yei-Yu, Yeh; JiunJie Wang

    2008-01-01

    This study examined the cortical control of gait in healthy humans using functional magnetic resonance imaging (fMRI). Two block-designed fMRI sessions were conducted during motor imagery of a locomotor-related task. Subjects watched a video clip that showed an actor standing and walking in an egocentric perspective. In a control session, additional fMRI images were collected when participants observed a video clip of the clutch movement of a right hand. In keeping with previous studies using SPECT and NIRS, we detected activation in many motor-related areas including supplementary motor area, bilateral precentral gyrus, left dorsal premotor cortex, and cingulate motor area. Smaller additional activations were observed in the bilateral precuneus, left thalamus, and part of right putamen. Based on these findings, we propose a novel paradigm to study the cortical control of gait in healthy humans using fMRI. Specifically, the task used in this study - involving both mirror neurons and mental imagery - provides a new feasible model to be used in functional neuroimaging studies in this area of research. (author)

  5. New perspectives on the auditory cortex: learning and memory.

    Science.gov (United States)

    Weinberger, Norman M

    2015-01-01

    Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. The sensitivity and selectivity of signal tones increase near threshold, tuning above threshold shifts toward the frequency of acoustic signals, and their area of representation can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex. © 2015 Elsevier B.V. All rights reserved.

  6. Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.

    Science.gov (United States)

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer

    2010-09-29

    Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole.

  7. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex.

    Science.gov (United States)

    Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M

    2006-10-25

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.

  8. Cortical interneurons from human pluripotent stem cells: prospects for neurological and psychiatric disease

    Directory of Open Access Journals (Sweden)

    Charles Edward Arber

    2013-03-01

    Full Text Available Cortical interneurons represent 20% of the cells in the cortex. These cells are local inhibitory neurons whose function is to modulate the firing activities of the excitatory projection neurons. Cortical interneuron dysfunction is believed to lead to runaway excitation underlying (or implicated in) seizure-based diseases such as epilepsy, autism and schizophrenia. The complex development of this cell type and the intricacies involved in defining the relative subtypes are being increasingly well defined. This has led to exciting experimental cell therapy in model organisms, whereby fetal-derived interneuron precursors can reverse seizure severity and reduce mortality in adult epileptic rodents. These proof-of-principle studies raise hope for potential interneuron-based transplantation therapies for treating epilepsy. On the other hand, cortical neurons generated from patient iPSCs serve as a valuable tool to explore genetic influences on interneuron development and function. This is a fundamental step in enhancing our understanding of the molecular basis of neuropsychiatric illnesses and the development of targeted treatments. Protocols are currently being developed for inducing cortical interneuron subtypes from mouse and human pluripotent stem cells. This review sets out to summarize the progress made in cortical interneuron development, fetal tissue transplantation and recent advances in stem cell differentiation towards interneurons.

  9. Music listening engages specific cortical regions within the temporal lobes: differences between musicians and non-musicians.

    Science.gov (United States)

    Angulo-Perkins, Arafat; Aubé, William; Peretz, Isabelle; Barrios, Fernando A; Armony, Jorge L; Concha, Luis

    2014-10-01

    Music and speech are two of the most relevant and common sounds in the human environment. Perceiving and processing these two complex acoustical signals rely on a hierarchical functional network distributed throughout several brain regions within and beyond the auditory cortices. Given their similarities, the neural bases for processing these two complex sounds overlap to a certain degree, but particular brain regions may show selectivity for one or the other acoustic category, which we aimed to identify. We examined 53 subjects (28 of them professional musicians) by functional magnetic resonance imaging (fMRI), using a paradigm designed to identify regions showing increased activity in response to different types of musical stimuli, compared to different types of complex sounds, such as speech and non-linguistic vocalizations. We found a region in the anterior portion of the superior temporal gyrus (aSTG) (planum polare) that showed preferential activity in response to musical stimuli and was present in all our subjects, regardless of musical training, and invariant across different musical instruments (violin, piano or synthetic piano). Our data show that this cortical region is preferentially involved in processing musical, as compared to other complex sounds, suggesting a functional role as a second-order relay, possibly integrating acoustic characteristics intrinsic to music (e.g., melody extraction). Moreover, we assessed whether musical experience modulates the response of cortical regions involved in music processing and found evidence of functional differences between musicians and non-musicians during music listening. In particular, bilateral activation of the planum polare was more prevalent, but not exclusive, in musicians than non-musicians, and activation of the right posterior portion of the superior temporal gyrus (planum temporale) differed between groups. Our results provide evidence of functional specialization for music processing in specific

  10. Auditory cortical change detection in adults with Asperger syndrome.

    Science.gov (United States)

    Lepistö, Tuulia; Nieminen-von Wendt, Taina; von Wendt, Lennart; Näätänen, Risto; Kujala, Teija

    2007-03-06

    The present study investigated whether auditory deficits reported in children with Asperger syndrome (AS) are also present in adulthood. To this end, event-related potentials (ERPs) were recorded from adults with AS for duration, pitch, and phonetic changes in vowels, and for acoustically matched non-speech stimuli. These subjects had enhanced mismatch negativity (MMN) amplitudes particularly for pitch and duration deviants, indicating enhanced sound-discrimination abilities. Furthermore, as reflected by the P3a, their involuntary orienting was enhanced for changes in non-speech sounds, but tended to be deficient for changes in speech sounds. The results are consistent with those reported earlier in children with AS, except for the duration-MMN, which was diminished in children and enhanced in adults.

  11. A synergy-based hand control is encoded in human motor cortical areas

    Science.gov (United States)

    Leo, Andrea; Handjaras, Giacomo; Bianchi, Matteo; Marino, Hamal; Gabiccini, Marco; Guidi, Andrea; Scilingo, Enzo Pasquale; Pietrini, Pietro; Bicchi, Antonio; Santello, Marco; Ricciardi, Emiliano

    2016-01-01

    How the human brain controls hand movements to carry out different tasks is still debated. The concept of synergy has been proposed to indicate functional modules that may simplify the control of hand postures by simultaneously recruiting sets of muscles and joints. However, whether and to what extent synergic hand postures are encoded as such at a cortical level remains unknown. Here, we combined kinematic, electromyography, and brain activity measures obtained by functional magnetic resonance imaging while subjects performed a variety of movements towards virtual objects. Hand postural information, encoded through kinematic synergies, was represented in cortical areas devoted to hand motor control and successfully discriminated individual grasping movements, significantly outperforming alternative somatotopic or muscle-based models. Importantly, hand postural synergies were predicted by neural activation patterns within primary motor cortex. These findings support a novel cortical organization for hand movement control and open potential applications for brain-computer interfaces and neuroprostheses. DOI: http://dx.doi.org/10.7554/eLife.13420.001 PMID:26880543
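
    Kinematic synergies of the kind referred to above are commonly extracted as the principal components of joint-angle data; the abstract does not specify the exact decomposition used, so the following Python sketch should be read as a generic illustration with fabricated postures rather than the study's encoding model.

        import numpy as np

        rng = np.random.default_rng(5)
        n_grasps, n_joints = 50, 15                     # 50 grasp postures x 15 hand joint angles (fabricated)
        postures = rng.normal(size=(n_grasps, n_joints))

        centered = postures - postures.mean(axis=0)
        # PCA via SVD: rows of vt are the postural synergies, ordered by explained variance
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        explained = s**2 / np.sum(s**2)
        synergy_weights = centered @ vt[:2].T           # each grasp described by two synergy weights
        print(f"variance explained by first two synergies: {explained[:2].sum():.0%}")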

  12. CORTICAL ENCODING OF SIGNALS IN NOISE: EFFECTS OF STIMULUS TYPE AND RECORDING PARADIGM

    Science.gov (United States)

    Billings, Curtis J.; Bennett, Keri O.; Molis, Michelle R.; Leek, Marjorie R.

    2010-01-01

    Objectives Perception-in-noise deficits have been demonstrated across many populations and listening conditions. Many factors contribute to successful perception of auditory stimuli in noise, including neural encoding in the central auditory system. Physiological measures such as cortical auditory evoked potentials can provide a view of neural encoding at the level of the cortex that may inform our understanding of listeners' abilities to perceive signals in the presence of background noise. In order to better understand signal-in-noise neural encoding, we set out to determine the effect of signal type, noise type, and evoking paradigm on the P1-N1-P2 complex. Design Tones and speech stimuli were presented to nine individuals in quiet and in three background noise types: continuous speech-spectrum noise, interrupted speech-spectrum noise, and four-talker babble, at a signal-to-noise ratio of −3 dB. In separate sessions, cortical auditory evoked potentials were evoked by a passive homogeneous paradigm (a single repeating stimulus) and an active oddball paradigm. Results The results for the N1 component indicated significant effects of signal type, noise type, and evoking paradigm. While components P1 and P2 also had significant main effects of these variables, only P2 demonstrated significant interactions among these variables. Conclusions Signal type, noise type, and evoking paradigm all must be carefully considered when interpreting signal-in-noise evoked potentials. Furthermore, these data confirm the possible usefulness of CAEPs as an aid to understanding perception-in-noise deficits. PMID:20890206
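
    Setting the −3 dB signal-to-noise ratio mentioned above reduces to scaling the noise so that 10 * log10(P_signal / P_noise) hits the target. A minimal Python sketch of that arithmetic follows, with a 1 kHz tone and white noise standing in for the actual stimuli.

        import numpy as np

        fs = 16000
        t = np.arange(0, 0.5, 1 / fs)
        signal = np.sin(2 * np.pi * 1000 * t)                        # 1 kHz tone (example stimulus)
        noise = np.random.default_rng(6).normal(size=t.size)         # white noise stand-in for speech-spectrum noise

        target_snr_db = -3.0
        # scale the noise so that 10*log10(P_signal / P_noise) equals the target SNR
        p_signal = np.mean(signal**2)
        p_noise = np.mean(noise**2)
        noise_scaled = noise * np.sqrt(p_signal / (p_noise * 10 ** (target_snr_db / 10)))
        mix = signal + noise_scaled
        print(f"achieved SNR: {10 * np.log10(p_signal / np.mean(noise_scaled**2)):.1f} dB")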

  13. Cortical Thought Theory: A Working Model of the Human Gestalt Mechanism.

    Science.gov (United States)

    1985-07-01

    This report proposes a new unified theory of human brain function called Cortical Thought Theory (CTT), presented as a working model of the human gestalt mechanism. The analysis integrates the disciplines of Artificial Intelligence

  14. Empathy and the somatotopic auditory mirror system in humans

    NARCIS (Netherlands)

    Gazzola, Valeria; Aziz-Zadeh, Lisa; Keysers, Christian

    2006-01-01

    How do we understand the actions of other individuals if we can only hear them? Auditory mirror neurons respond both while monkeys perform hand or mouth actions and while they listen to sounds of similar actions [1, 2]. This system might be critical for auditory action understanding and language

  15. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation.

    Directory of Open Access Journals (Sweden)

    Maria Herrojo Ruiz

    2014-09-01

    Full Text Available Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations are the first to demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS.
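
    As an illustrative aside (not part of the record above): LRTC in timing series such as inter-keystroke deviations are commonly quantified with detrended fluctuation analysis (DFA), where a scaling exponent near 0.5 indicates uncorrelated fluctuations and values above 0.5 indicate long-range dependency. The minimal Python sketch below shows one standard way such an exponent can be estimated; the function name, window sizes, and synthetic data are illustrative assumptions, not the estimator reported in the study.

    import numpy as np

    def dfa_exponent(x, scales=None):
        """Estimate a detrended fluctuation analysis (DFA) scaling exponent.

        alpha ~ 0.5 suggests uncorrelated (white-noise-like) fluctuations;
        alpha > 0.5 suggests long-range temporal correlations (LRTC).
        """
        x = np.asarray(x, dtype=float)
        y = np.cumsum(x - x.mean())                     # integrated profile
        if scales is None:
            scales = np.unique(np.logspace(np.log10(4), np.log10(len(x) // 4), 20).astype(int))
        fluct = []
        for n in scales:
            n_win = len(y) // n
            segments = y[:n_win * n].reshape(n_win, n)
            t = np.arange(n)
            rms = []
            for seg in segments:
                coef = np.polyfit(t, seg, 1)            # linear detrend per window
                rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
            fluct.append(np.mean(rms))
        return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

    # Synthetic inter-keystroke timing deviations: uncorrelated noise should give alpha ~ 0.5.
    rng = np.random.default_rng(0)
    print(round(dfa_exponent(rng.standard_normal(2048)), 2))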

  16. The human brain maintains contradictory and redundant auditory sensory predictions.

    Directory of Open Access Journals (Sweden)

    Marika Pieszek

    Full Text Available Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). The respective error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independent representations of redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

  17. Effects of Background Noise on Cortical Encoding of Speech in Autism Spectrum Disorders

    Science.gov (United States)

    Russo, Nicole; Zecker, Steven; Trommer, Barbara; Chen, Julia; Kraus, Nina

    2009-01-01

    This study provides new evidence of deficient auditory cortical processing of speech in noise in autism spectrum disorders (ASD). Speech-evoked responses (approximately 100-300 ms) in quiet and background noise were evaluated in typically-developing (TD) children and children with ASD. ASD responses showed delayed timing (both conditions) and…

  18. Generation of human cortical neurons from a new immortal fetal neural stem cell line

    International Nuclear Information System (INIS)

    Cacci, E.; Villa, A.; Parmar, M.; Cavallaro, M.; Mandahl, N.; Lindvall, O.; Martinez-Serrano, A.; Kokaia, Z.

    2007-01-01

    Isolation and expansion of neural stem cells (NSCs) of human origin are crucial for successful development of cell therapy approaches in neurodegenerative diseases. Different epigenetic and genetic immortalization strategies have been established for long-term maintenance and expansion of these cells in vitro. Here we report the generation of a new, clonal NSC (hc-NSC) line, derived from human fetal cortical tissue, based on v-myc immortalization. Using immunocytochemistry, we show that these cells retain the characteristics of NSCs after more than 50 passages. Under proliferation conditions, when supplemented with epidermal and basic fibroblast growth factors, the hc-NSCs expressed neural stem/progenitor cell markers like nestin, vimentin and Sox2. When growth factors were withdrawn, proliferation and expression of v-myc and telomerase were dramatically reduced, and the hc-NSCs differentiated into glia and neurons (mostly glutamatergic and GABAergic, as well as tyrosine hydroxylase-positive, presumably dopaminergic neurons). RT-PCR analysis showed that the hc-NSCs retained expression of Pax6, Emx2 and Neurogenin2, which are genes associated with regionalization and cell commitment in cortical precursors during brain development. Our data indicate that this hc-NSC line could be useful for exploring the potential of human NSCs to replace dead or damaged cortical cells in animal models of acute and chronic neurodegenerative diseases. Taking advantage of its clonality and homogeneity, this cell line will also be a valuable experimental tool to study the regulatory role of intrinsic and extrinsic factors in human NSC biology

  19. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models are developed to explore noise propagation mechanisms associated with noise attenuation and transmission paths created when using hearing protectors such as earplugs and headsets in high-noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data which provides explicit external ear, ear canal, middle ear ossicular bones and cochlea geometry. Results from these studies have enabled a greater understanding of hearing protector to flesh dynamics as well as prioritizing noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast related impulse noise on human auditory mechanisms and brain tissue.

  20. Optimal staining methods for delineation of cortical areas and neuron counts in human brains.

    Science.gov (United States)

    Uylings, H B; Zilles, K; Rajkowska, G

    1999-04-01

    For cytoarchitectonic delineation of cortical areas in human brain, the Gallyas staining for somata with its sharp contrast between cell bodies and neuropil is preferable to the classical Nissl staining, the more so when an image analysis system is used. This Gallyas staining, however, does not appear to be appropriate for counting neuron numbers in pertinent brain areas, due to the lack of distinct cytological features between small neurons and glial cells. For cell counting Nissl is preferable. In an optimal design for cell counting at least both the Gallyas and the Nissl staining must be applied, the former staining for cytoarchitectural delineation of cortical areas and the latter for counting the number of neurons in the pertinent cortical areas. Copyright 1999 Academic Press.

  1. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Full Text Available Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  2. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  3. Groupwise connectivity-based parcellation of the whole human cortical surface using watershed-driven dimension reduction.

    Science.gov (United States)

    Lefranc, Sandrine; Roca, Pauline; Perrot, Matthieu; Poupon, Cyril; Le Bihan, Denis; Mangin, Jean-François; Rivière, Denis

    2016-05-01

    Segregating the human cortex into distinct areas based on structural connectivity criteria is of widespread interest in neuroscience. This paper presents a groupwise connectivity-based parcellation framework for the whole cortical surface using a new high quality diffusion dataset of 79 healthy subjects. Our approach proceeds gyrus by gyrus to parcellate the whole human cortex. The main originality of the method is to compress, for each gyrus, the connectivity profiles used for the clustering without any anatomical prior information. This step takes into account the interindividual cortical and connectivity variability. To this end, we consider intersubject high density connectivity areas extracted using a surface-based watershed algorithm. A wide validation study has led to a fully automatic pipeline which is robust to variations in data preprocessing (tracking type, cortical mesh characteristics and boundaries of initial gyri), data characteristics (including number of subjects), and the main algorithmic parameters. A remarkable reproducibility is achieved in parcellation results for the whole cortex, leading to clear and stable cortical patterns. This reproducibility has been tested across non-overlapping subgroups and the validation is presented mainly on the pre- and postcentral gyri. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  4. Electrical stimulation of the midbrain excites the auditory cortex asymmetrically.

    Science.gov (United States)

    Quass, Gunnar Lennart; Kurt, Simone; Hildebrandt, Jannis; Kral, Andrej

    2018-05-17

    Auditory midbrain implant users cannot achieve open speech perception and have limited frequency resolution. It remains unclear whether the spread of excitation contributes to this issue and how much it can be compensated by current-focusing, which is an effective approach in cochlear implants. The present study examined the spread of excitation in the cortex elicited by electric midbrain stimulation. We further tested whether current-focusing via bipolar and tripolar stimulation is effective with electric midbrain stimulation and whether these modes hold any advantage over monopolar stimulation also in conditions when the stimulation electrodes are in direct contact with the target tissue. Using penetrating multielectrode arrays, we recorded cortical population responses to single pulse electric midbrain stimulation in 10 ketamine/xylazine anesthetized mice. We compared monopolar, bipolar, and tripolar stimulation configurations with regard to the spread of excitation and the characteristic frequency difference between the stimulation/recording electrodes. The cortical responses were distributed asymmetrically around the characteristic frequency of the stimulated midbrain region with a strong activation in regions tuned up to one octave higher. We found no significant differences between monopolar, bipolar, and tripolar stimulation in threshold, evoked firing rate, or dynamic range. The cortical responses to electric midbrain stimulation are biased towards higher tonotopic frequencies. Current-focusing is not effective in direct contact electrical stimulation. Electrode maps should account for the asymmetrical spread of excitation when fitting auditory midbrain implants by shifting the frequency-bands downward and stimulating as dorsally as possible. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Functional cortical mapping of scale illusion

    International Nuclear Information System (INIS)

    Wang, Li-qun; Kuriki, Shinya

    2011-01-01

    We have studied cortical activation using 1.5 T fMRI during the 'scale illusion', a kind of auditory illusion in which subjects perceive smooth melodies while listening to dichotic irregular pitch sequences consisting of scale tones, in repeated phrases composed of eight tones. Four male and four female subjects listened to different stimuli, including an illusion-inducing tone sequence, a monaural tone sequence and the perceived pitch sequence, with a control of white noises delivered to the right and left ears in random order. For each of the four stimulus types, 32 scans were performed with a repetition time (TR) of 3 s and 3-s intervals. In the BOLD signals, activation was observed in the prefrontal and temporal cortices, parietal lobule and occipital areas by first-level group analysis. However, there was large intersubject variability, such that no systematic tendency of the activation was clear. The study will be continued with a larger number of subjects for group analysis. (author)

  6. Model cortical responses for the detection of perceptual onsets and beat tracking in singing

    NARCIS (Netherlands)

    Coath, M.; Denham, S.L.; Smith, L.M.; Honing, H.; Hazan, A.; Holonowicz, P.; Purwins, H.

    2009-01-01

    We describe a biophysically motivated model of auditory salience based on a model of cortical responses and present results that show that the derived measure of salience can be used to identify the position of perceptual onsets in a musical stimulus successfully. The salience measure is also shown

  7. Changes in regional cerebral blood flow during auditory cognitive tasks

    International Nuclear Information System (INIS)

    Ohyama, Masashi; Kitamura, Shin; Terashi, Akiro; Senda, Michio.

    1993-01-01

    In order to investigate the relation between auditory cognitive function and regional brain activation, we measured the changes in the regional cerebral blood flow (CBF) using positron emission tomography (PET) during the 'odd-ball' paradigm in ten normal healthy volunteers. The subjects underwent 3 tasks, twice for each, while the evoked potential was recorded. In these tasks, the auditory stimulus was a series of pure tones delivered every 1.5 sec binaurally at 75 dB from the earphones. Task A: the stimulus was a series of tones with 1000 Hz only, and the subject was instructed only to listen. Task B: the stimulus was a series of tones with 1000 Hz only, and the subject was instructed to push the button on detecting a tone. Task C: the stimulus was a series of pure tones delivered every 1.5 sec binaurally at 75 dB with a frequency of 1000 Hz (non-target) in 80% and 2000 Hz (target) in 20% at random, and the subject was instructed to push the button on detecting a target tone. The event-related potential (P300) was observed in task C (Pz: 334.3±19.6 msec). At each task, the CBF was measured using PET with i.v. injection of 1.5 GBq of O-15 water. The changes in CBF associated with auditory cognition were evaluated by the difference between the CBF images in task C and B. Localized increase was observed in the anterior cingulate cortex (in all subjects), the bilateral associative auditory cortex, the prefrontal cortex and the parietal cortex. The latter three areas had a large individual variation in the location of foci. These results suggested the role of those cortical areas in auditory cognition. The anterior cingulate was most activated (15.0±2.24% of global CBF). This region was not activated in the condition of task B minus task A. The anterior cingulate is a part of Papez's circuit that is related to memory and other higher cortical functions. These results suggested that this area may play an important role in cognition as well as in attention. (author)

  8. Cortical processes of speech illusions in the general population.

    Science.gov (United States)

    Schepers, E; Bodar, L; van Os, J; Lousberg, R

    2016-10-18

    There is evidence that experimentally elicited auditory illusions in the general population index risk for psychotic symptoms. As little is known about underlying cortical mechanisms of auditory illusions, an experiment was conducted to analyze processing of auditory illusions in a general population sample. In a follow-up design with two measurement moments (baseline and 6 months), participants (n = 83) underwent the White Noise task under simultaneous recording with a 14-lead EEG. An auditory illusion was defined as hearing any speech in a sound fragment containing white noise. A total of 256 speech illusions (SI) were observed over the two measurements, with a high degree of stability of SI over time. There were 7 main effects of speech illusion on the EEG alpha band, the most significant indicating a decrease in activity at T3 (t = -4.05). Other EEG frequency bands (slow beta, fast beta, gamma, delta, theta) showed no significant associations with SI. SIs are characterized by reduced alpha activity in non-clinical populations. Given the association of SIs with psychosis, follow-up research is required to examine the possibility of reduced alpha activity mediating SIs in high risk and symptomatic populations.

  9. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  10. An overview of neural function and feedback control in human communication.

    Science.gov (United States)

    Hood, L J

    1998-01-01

    The speech and hearing mechanisms depend on accurate sensory information and intact feedback mechanisms to facilitate communication. This article provides a brief overview of some components of the nervous system important for human communication and some electrophysiological methods used to measure cortical function in humans. An overview of automatic control and feedback mechanisms in general and as they pertain to the speech motor system and control of the hearing periphery is also presented, along with a discussion of how the speech and auditory systems interact.

  11. Modeling vocalization with ECoG cortical activity recorded during vocal production in the macaque monkey.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Fujii, Naotaka; Averbeck, Bruno B; Mishkin, Mortimer

    2014-01-01

    Vocal production is an example of controlled motor behavior with high temporal precision. Previous studies have decoded auditory evoked cortical activity while monkeys listened to vocalization sounds. On the other hand, there have been few attempts at decoding motor cortical activity during vocal production. Here we recorded cortical activity during vocal production in the macaque with a chronically implanted electrocorticographic (ECoG) electrode array. The array detected robust activity in motor cortex during vocal production. We used a nonlinear dynamical model of the vocal organ to reduce the dimensionality of 'Coo' calls produced by the monkey. We then used linear regression to evaluate the information in motor cortical activity for this reduced representation of calls. This simple linear model accounted for approximately 65% of the variance in the reduced sound representations, supporting the feasibility of using the dynamical model of the vocal organ for decoding motor cortical activity during vocal production.
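
    As an illustration of the decoding approach described above: a linear model mapping trial-wise neural features to a low-dimensional representation of calls can be sketched with ordinary ridge regression and cross-validation. The data, dimensions, and regularization below are synthetic placeholders, not the study's ECoG recordings or its exact regression procedure.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)

    # Synthetic stand-ins: ECoG features (calls x channels/lags) and a reduced
    # (e.g., 3-parameter) representation of each 'Coo' call.
    n_calls, n_features, n_params = 200, 64, 3
    ecog = rng.standard_normal((n_calls, n_features))
    weights = rng.standard_normal((n_features, n_params))
    call_params = ecog @ weights + 0.7 * rng.standard_normal((n_calls, n_params))

    # Linear (ridge) decoder evaluated with 5-fold cross-validation.
    pred = cross_val_predict(Ridge(alpha=1.0), ecog, call_params, cv=5)

    # Variance explained per call parameter (the quantity analogous to the ~65% reported).
    ss_res = np.sum((call_params - pred) ** 2, axis=0)
    ss_tot = np.sum((call_params - call_params.mean(axis=0)) ** 2, axis=0)
    print("variance explained:", np.round(1 - ss_res / ss_tot, 2))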

  12. For Better or Worse: The Effect of Prismatic Adaptation on Auditory Neglect

    Directory of Open Access Journals (Sweden)

    Isabel Tissieres

    2017-01-01

    Full Text Available Patients with auditory neglect attend less to auditory stimuli on their left and/or make systematic directional errors when indicating sound positions. Rightward prismatic adaptation (R-PA) was repeatedly shown to alleviate symptoms of visuospatial neglect and, in one study, to partially restore spatial bias in dichotic listening. It is currently unknown whether R-PA affects only this ear-related symptom or also other aspects of auditory neglect. We have investigated the effect of R-PA on left ear extinction in dichotic listening, space-related inattention assessed by diotic listening, and directional errors in auditory localization in patients with auditory neglect. The most striking effect of R-PA was the alleviation of left ear extinction in dichotic listening, which occurred in half of the patients with initial deficit. In contrast to nonresponders, their lesions spared the right dorsal attentional system and posterior temporal cortex. The beneficial effect of R-PA on an ear-related performance contrasted with detrimental effects on diotic listening and auditory localization. The former can be parsimoniously explained by the SHD-VAS model (shift in hemispheric dominance within the ventral attentional system; Clarke and Crottaz-Herbette 2016), which is based on the R-PA-induced shift of the right-dominant ventral attentional system to the left hemisphere. The negative effects in space-related tasks may be due to the complex nature of auditory space encoding at a cortical level.

  13. Theta Phase Synchronization Is the Glue that Binds Human Associative Memory.

    Science.gov (United States)

    Clouter, Andrew; Shapiro, Kimron L; Hanslmayr, Simon

    2017-10-23

    Episodic memories are information-rich, often multisensory events that rely on binding different elements [1]. The elements that will constitute a memory episode are processed in specialized but distinct brain modules. The binding of these elements is most likely mediated by fast-acting long-term potentiation (LTP), which relies on the precise timing of neural activity [2]. Theta oscillations in the hippocampus orchestrate such timing as demonstrated by animal studies in vitro [3, 4] and in vivo [5, 6], suggesting a causal role of theta activity for the formation of complex memory episodes, but direct evidence from humans is missing. Here, we show that human episodic memory formation depends on phase synchrony between different sensory cortices at the theta frequency. By modulating the luminance of visual stimuli and the amplitude of auditory stimuli, we directly manipulated the degree of phase synchrony between visual and auditory cortices. Memory for sound-movie associations was significantly better when the stimuli were presented in phase compared to out of phase. This effect was specific to theta (4 Hz) and did not occur in slower (1.7 Hz) or faster (10.5 Hz) frequencies. These findings provide the first direct evidence that episodic memory formation in humans relies on a theta-specific synchronization mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.
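
    To make the stimulus manipulation concrete: presenting a sound whose amplitude and a movie whose luminance are both modulated at 4 Hz, either with zero phase lag (in phase) or a 180-degree lag (out of phase), can be sketched in a few lines. The carrier frequency, duration, and frame rate below are illustrative assumptions, not the parameters used in the study.

    import numpy as np

    fs = 44100                      # audio sample rate (Hz)
    dur = 3.0                       # stimulus duration (s)
    f_mod = 4.0                     # theta-band modulation frequency (Hz)
    t = np.arange(int(fs * dur)) / fs

    # Auditory stream: tone carrier with a 4 Hz amplitude envelope.
    carrier = np.sin(2 * np.pi * 440.0 * t)
    aud_env = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))
    sound = carrier * aud_env

    # Visual stream: frame-wise luminance envelope, in phase (0) or out of phase (pi).
    def luminance_envelope(phase_offset, frame_rate=60.0):
        tf = np.arange(int(frame_rate * dur)) / frame_rate
        return 0.5 * (1 + np.sin(2 * np.pi * f_mod * tf + phase_offset))

    lum_in_phase = luminance_envelope(0.0)
    lum_out_of_phase = luminance_envelope(np.pi)   # desynchronized control condition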

  14. Pharmacological modulation of cortical excitability shifts induced by transcranial direct current stimulation in humans.

    Science.gov (United States)

    Nitsche, M A; Fricke, K; Henschke, U; Schlitterlau, A; Liebetanz, D; Lang, N; Henning, S; Tergau, F; Paulus, W

    2003-11-15

    Transcranial direct current stimulation (tDCS) of the human motor cortex results in polarity-specific shifts of cortical excitability during and after stimulation. Anodal tDCS enhances and cathodal stimulation reduces excitability. Animal experiments have demonstrated that the effect of anodal tDCS is caused by neuronal depolarisation, while cathodal tDCS hyperpolarises cortical neurones. However, not much is known about the ion channels and receptors involved in these effects. Thus, the impact of the sodium channel blocker carbamazepine, the calcium channel blocker flunarizine and the NMDA receptor antagonist dextromethorphan on tDCS-elicited motor cortical excitability changes in healthy human subjects was tested. tDCS protocols inducing excitability alterations (1) only during tDCS and (2) eliciting long-lasting after-effects were applied after drug administration. Carbamazepine selectively eliminated the excitability enhancement induced by anodal stimulation during and after tDCS. Flunarizine resulted in similar changes. Antagonising NMDA receptors did not alter current-generated excitability changes during a short stimulation, which elicits no after-effects, but prevented the induction of long-lasting after-effects independent of their direction. These results suggest that, as in animal experiments, cortical excitability shifts induced during tDCS in humans also depend on membrane polarisation, thus modulating the conductance of sodium and calcium channels. Moreover, they suggest that the after-effects may be NMDA receptor dependent. Since NMDA receptors are involved in neuroplastic changes, the results suggest a possible application of tDCS in the modulation or induction of these processes in a clinical setting. The selective elimination of tDCS-driven excitability enhancements by carbamazepine suggests a role for this drug in focussing the effects of cathodal tDCS, which may have important future clinical applications.

  15. 101 labeled brain images and a consistent human cortical labeling protocol

    Directory of Open Access Journals (Sweden)

    Arno Klein

    2012-12-01

    Full Text Available We introduce the Mindboggle-101 dataset, the largest and most complete set of free, publicly accessible, manually labeled human brain images. To manually label the macroscopic anatomy in magnetic resonance images of 101 healthy participants, we created a new cortical labeling protocol that relies on robust anatomical landmarks and minimal manual edits after initialization with automated labels. The Desikan-Killiany-Tourville (DKT) protocol is intended to improve the ease, consistency, and accuracy of labeling human cortical areas. Given how difficult it is to label brains, the Mindboggle-101 dataset is intended to serve as brain atlases for use in labeling other brains, as a normative dataset to establish morphometric variation in a healthy population for comparison against clinical populations, and to contribute to the development, training, testing, and evaluation of automated registration and labeling algorithms. To this end, we also introduce benchmarks for the evaluation of such algorithms by comparing our manual labels with labels automatically generated by probabilistic and multi-atlas registration-based approaches. All data and related software and updated information are available on the http://www.mindboggle.info/data/ website.

  16. 101 Labeled Brain Images and a Consistent Human Cortical Labeling Protocol

    Science.gov (United States)

    Klein, Arno; Tourville, Jason

    2012-01-01

    We introduce the Mindboggle-101 dataset, the largest and most complete set of free, publicly accessible, manually labeled human brain images. To manually label the macroscopic anatomy in magnetic resonance images of 101 healthy participants, we created a new cortical labeling protocol that relies on robust anatomical landmarks and minimal manual edits after initialization with automated labels. The “Desikan–Killiany–Tourville” (DKT) protocol is intended to improve the ease, consistency, and accuracy of labeling human cortical areas. Given how difficult it is to label brains, the Mindboggle-101 dataset is intended to serve as brain atlases for use in labeling other brains, as a normative dataset to establish morphometric variation in a healthy population for comparison against clinical populations, and contribute to the development, training, testing, and evaluation of automated registration and labeling algorithms. To this end, we also introduce benchmarks for the evaluation of such algorithms by comparing our manual labels with labels automatically generated by probabilistic and multi-atlas registration-based approaches. All data and related software and updated information are available on the http://mindboggle.info/data website. PMID:23227001

  17. Cortical Response Variability as a Developmental Index of Selective Auditory Attention

    Science.gov (United States)

    Strait, Dana L.; Slater, Jessica; Abecassis, Victor; Kraus, Nina

    2014-01-01

    Attention induces synchronicity in neuronal firing for the encoding of a given stimulus at the exclusion of others. Recently, we reported decreased variability in scalp-recorded cortical evoked potentials to attended compared with ignored speech in adults. Here we aimed to determine the developmental time course for this neural index of auditory…

  18. Cooling of the auditory cortex modifies neuronal activity in the inferior colliculus in rats

    Czech Academy of Sciences Publication Activity Database

    Popelář, Jiří; Šuta, Daniel; Lindovský, Jiří; Bureš, Zbyněk; Pysaněnko, Kateryna; Chumak, Tetyana; Syka, Josef

    2016-01-01

    Vol. 332, Feb (2016), pp. 7-16 ISSN 0378-5955 R&D Projects: GA ČR(CZ) GBP304/12/G069; GA ČR(CZ) GAP303/12/1347 Institutional support: RVO:68378041 Keywords: auditory cortex * cooling * cortical inactivation * efferent system Subject RIV: ED - Physiology Impact factor: 2.906, year: 2016

  19. Cortical Thickness and Episodic Memory Impairment in Systemic Lupus Erythematosus.

    Science.gov (United States)

    Bizzo, Bernardo Canedo; Sanchez, Tiago Arruda; Tukamoto, Gustavo; Zimmermann, Nicolle; Netto, Tania Maria; Gasparetto, Emerson Leandro

    2017-01-01

    The purpose of this study was to investigate differences in brain cortical thickness of systemic lupus erythematosus (SLE) patients with and without episodic memory impairment and healthy controls. We studied 51 patients divided into 2 groups (SLE with episodic memory deficit, n = 17; SLE without episodic memory deficit, n = 34) by the Rey Auditory Verbal Learning Test and 34 healthy controls. Groups were paired based on sex, age, education, Mini-Mental State Examination score, and accumulation of disease burden. Cortical thickness from magnetic resonance imaging scans was determined using the FreeSurfer software package. SLE patients with episodic memory deficits presented reduced cortical thickness in the left supramarginal cortex and superior temporal gyrus when compared to the control group, and in the right superior frontal, caudal and rostral middle frontal, and precentral gyri when compared to the SLE group without episodic memory impairment, with time since SLE diagnosis as a covariate. There were no significant differences in cortical thickness between the SLE without episodic memory and control groups. Thinning of different memory-related cortical regions was found in the episodic memory deficit group when it was individually compared to the group of patients without memory impairment and to the healthy controls. Copyright © 2016 by the American Society of Neuroimaging.

  20. Propofol disrupts functional interactions between sensory and high-order processing of auditory verbal memory.

    Science.gov (United States)

    Liu, Xiaolin; Lauer, Kathryn K; Ward, Barney D; Rao, Stephen M; Li, Shi-Jiang; Hudetz, Anthony G

    2012-10-01

    Current theories suggest that disrupting cortical information integration may account for the mechanism of general anesthesia in suppressing consciousness. Human cognitive operations take place in hierarchically structured neural organizations in the brain. The process of low-order neural representation of sensory stimuli becoming integrated in high-order cortices is also known as cognitive binding. Combining neuroimaging, cognitive neuroscience, and anesthetic manipulation, we examined how cognitive networks involved in auditory verbal memory are maintained in wakefulness, disrupted in propofol-induced deep sedation, and re-established in recovery. Inspired by the notion of cognitive binding, a functional magnetic resonance imaging-guided connectivity analysis was utilized to assess the integrity of functional interactions within and between different levels of the task-defined brain regions. Task-related responses persisted in the primary auditory cortex (PAC), but vanished in the inferior frontal gyrus (IFG) and premotor areas in deep sedation. For connectivity analysis, seed regions representing sensory and high-order processing of the memory task were identified in the PAC and IFG. Propofol disrupted connections from the PAC seed to the frontal regions and thalamus, but not the connections from the IFG seed to a set of widely distributed brain regions in the temporal, frontal, and parietal lobes (with the exception of the PAC). These latter regions have been implicated in mediating verbal comprehension and memory. These results suggest that propofol disrupts cognition by blocking the projection of sensory information to high-order processing networks and thus preventing information integration. Such findings contribute to our understanding of anesthetic mechanisms as related to information and integration in the brain. Copyright © 2011 Wiley Periodicals, Inc.

  1. Auditory short-term memory behaves like visual short-term memory.

    Directory of Open Access Journals (Sweden)

    Kristina M Visscher

    2007-03-01

    Full Text Available Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.

  2. Auditory short-term memory behaves like visual short-term memory.

    Science.gov (United States)

    Visscher, Kristina M; Kaplan, Elina; Kahana, Michael J; Sekuler, Robert

    2007-03-01

    Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.
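
    As a sketch of the kind of decision rule described in these two records: a recognition score that combines (1) summed probe-to-item similarity and (2) inter-item homogeneity can be written in a few lines. The exponential similarity kernel, weighting parameter, and feature vectors below are illustrative assumptions rather than the authors' fitted model.

    import numpy as np

    def recognition_strength(probe, study_items, c=1.0, w_hom=0.5):
        """Toy recognition score: summed similarity plus weighted inter-item homogeneity."""
        probe = np.asarray(probe, float)
        items = np.asarray(study_items, float)
        sim = lambda a, b: np.exp(-c * np.linalg.norm(a - b))   # similarity decays with feature distance
        summed_similarity = sum(sim(probe, it) for it in items)
        pairs = [(i, j) for i in range(len(items)) for j in range(i + 1, len(items))]
        homogeneity = np.mean([sim(items[i], items[j]) for i, j in pairs]) if pairs else 0.0
        return summed_similarity + w_hom * homogeneity

    # Items as 2-D feature vectors (e.g., spatial frequency and ripple velocity).
    study_list = [[1.0, 0.2], [1.1, 0.25], [0.4, 0.8]]
    print(recognition_strength([1.05, 0.22], study_list))   # probe close to the list -> high score
    print(recognition_strength([2.5, 1.9], study_list))     # dissimilar probe -> low score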

  3. Effects of noise-induced hearing loss on parvalbumin and perineuronal net expression in the mouse primary auditory cortex.

    Science.gov (United States)

    Nguyen, Anna; Khaleel, Haroun M; Razak, Khaleel A

    2017-07-01

    Noise induced hearing loss is associated with increased excitability in the central auditory system but the cellular correlates of such changes remain to be characterized. Here we tested the hypothesis that noise-induced hearing loss causes deterioration of perineuronal nets (PNNs) in the auditory cortex of mice. PNNs are specialized extracellular matrix components that commonly enwrap cortical parvalbumin (PV) containing GABAergic interneurons. Compared to somatosensory and visual cortex, relatively less is known about PV/PNN expression patterns in the primary auditory cortex (A1). Whether changes to cortical PNNs follow acoustic trauma remains unclear. The first aim of this study was to characterize PV/PNN expression in A1 of adult mice. PNNs increase excitability of PV+ inhibitory neurons and confer protection to these neurons against oxidative stress. Decreased PV/PNN expression may therefore lead to a reduction in cortical inhibition. The second aim of this study was to examine PV/PNN expression in superficial (I-IV) and deep cortical layers (V-VI) following noise trauma. Exposing mice to loud noise caused an increase in hearing threshold that lasted at least 30 days. PV and PNN expression in A1 was analyzed at 1, 10 and 30 days following the exposure. No significant changes were observed in the density of PV+, PNN+, or PV/PNN co-localized cells following hearing loss. However, a significant layer- and cell type-specific decrease in PNN intensity was seen following hearing loss. Some changes were present even at 1 day following noise exposure. Attenuation of PNN may contribute to changes in excitability in cortex following noise trauma. The regulation of PNN may open up a temporal window for altered excitability in the adult brain that is then stabilized at a new and potentially pathological level such as in tinnitus. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Reverse Engineering Tone-Deafness: Disrupting Pitch-Matching by Creating Temporary Dysfunctions in the Auditory-Motor Network

    Directory of Open Access Journals (Sweden)

    Anja Hohmann

    2018-01-01

    Full Text Available Perceiving and producing vocal sounds are important functions of the auditory-motor system and are fundamental to communication. Prior studies have identified a network of brain regions involved in pitch production, specifically pitch matching. Here we reverse engineer the function of the auditory perception-production network by targeting specific cortical regions (e.g., the right and left posterior superior temporal gyri (pSTG) and posterior inferior frontal gyri (pIFG)) with cathodal transcranial direct current stimulation (tDCS), commonly found to decrease excitability in the underlying cortical region, allowing us to causally test the role of particular nodes in this network. Performance on a pitch-matching task was determined before and after 20 min of cathodal stimulation. Acoustic analyses of pitch productions showed impaired accuracy after cathodal stimulation to the left pIFG and the right pSTG in comparison to sham stimulation. Both regions share particular roles in the feedback and feedforward motor control of pitched vocal production with a differential hemispheric dominance.

  5. Cortical networks for encoding near and far space in the non-human primate.

    Science.gov (United States)

    Cléry, Justine; Guipponi, Olivier; Odouard, Soline; Wardak, Claire; Ben Hamed, Suliann

    2018-04-19

    While extra-personal space is often erroneously considered as a unique entity, early neuropsychological studies report a dissociation between near and far space processing both in humans and in monkeys. Here, we use functional MRI in a naturalistic 3D environment to describe the non-human primate near and far space cortical networks. We describe the co-occurrence of two extended functional networks respectively dedicated to near and far space processing. Specifically, far space processing involves occipital, temporal, parietal, posterior cingulate as well as orbitofrontal regions not activated by near space, possibly subserving the processing of the shape and identity of objects. In contrast, near space processing involves temporal, parietal, prefrontal and premotor regions not activated by far space, possibly subserving the preparation of an arm/hand mediated action in this proximal space. Interestingly, this network also involves somatosensory regions, suggesting a cross-modal anticipation of touch by a nearby object. Last, we also describe cortical regions that process both far and near space with a preference for one or the other. This suggests a continuous encoding of relative distance to the body, in the form of a far-to-near gradient. We propose that these cortical gradients in space representation subserve the physically delineable peripersonal spaces described in numerous psychology and psychophysics studies. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Cortical plasticity induced by short-term multimodal musical rhythm training.

    Directory of Open Access Journals (Sweden)

    Claudia Lappe

    Full Text Available Performing music is a multimodal experience involving the visual, auditory, and somatosensory modalities as well as the motor system. Therefore, musical training is an excellent model to study multimodal brain plasticity. Indeed, we have previously shown that short-term piano practice increases the magnetoencephalographic (MEG) response to melodic material in novice players. Here we investigate the impact of piano training using a rhythmic-focused exercise on responses to rhythmic musical material. Musical training with non-musicians was conducted over a period of two weeks. One group (sensorimotor-auditory, SA) learned to play a piano sequence with a distinct musical rhythm; another group (auditory, A) listened to, and evaluated the rhythmic accuracy of, the performances of the SA-group. Training-induced cortical plasticity was evaluated using MEG, comparing the mismatch negativity (MMN) in response to occasional rhythmic deviants in a repeating rhythm pattern before and after training. The SA-group showed a significantly greater enlargement of MMN and P2 to deviants after training compared to the A-group. The training-induced increase of the rhythm MMN was bilaterally expressed, in contrast to our previous finding where the MMN for deviants in the pitch domain showed a larger right than left increase. The results indicate that when auditory experience is strictly controlled during training, involvement of the sensorimotor system and perhaps increased attentional resources that are needed in producing rhythms lead to more robust plastic changes in the auditory cortex compared to when rhythms are simply attended to in the auditory domain in the absence of motor production.

  7. Thalamic and parietal brain morphology predicts auditory category learning.

    Science.gov (United States)

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.
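
    To illustrate the behavioral analysis mentioned above: a listener's cue weighting can be estimated by logistic regression of single-trial category responses on the spectral and duration cue values, with the relative size of the two coefficients indexing a strategy switch toward the nondegraded temporal cue. The simulated data and variable names below are assumptions for illustration only.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n_trials = 400
    spectral = rng.uniform(-1, 1, n_trials)       # degraded spectral cue value per trial
    duration = rng.uniform(-1, 1, n_trials)       # intact duration cue value per trial

    # Simulated listener who relies mainly on duration (an "optimal" post-degradation strategy).
    p_cat_a = 1 / (1 + np.exp(-(0.3 * spectral + 2.5 * duration)))
    response = rng.binomial(1, p_cat_a)

    X = np.column_stack([spectral, duration])
    model = LogisticRegression().fit(X, response)

    # A larger duration weight than spectral weight suggests a switch to the temporal cue.
    print(dict(zip(["spectral_weight", "duration_weight"], np.round(model.coef_[0], 2))))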

  8. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    Science.gov (United States)

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. Copyright © 2014. Published by Elsevier Inc.
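
    As a concrete illustration of the distinction drawn in this record: Shannon entropy indexes the randomness of a symbol sequence, whereas a Lempel-Ziv-style count of novel substrings is one common proxy for its structural complexity. The estimators below are generic textbook versions, not necessarily the measures used in the study.

    import numpy as np
    from collections import Counter

    def shannon_entropy(seq):
        """Shannon entropy (bits per symbol) of a discrete sequence."""
        p = np.array(list(Counter(seq).values()), float) / len(seq)
        return float(-(p * np.log2(p)).sum())

    def lz_complexity(seq):
        """Crude Lempel-Ziv-style complexity: number of new phrases in a left-to-right parse."""
        s = "".join(map(str, seq))
        seen, phrases, i = set(), 0, 0
        while i < len(s):
            j = i + 1
            while j <= len(s) and s[i:j] in seen:
                j += 1
            seen.add(s[i:j])
            phrases += 1
            i = j
        return phrases

    rng = np.random.default_rng(3)
    random_seq = rng.integers(0, 4, 500)                                   # near-uniform symbols: high entropy, many novel phrases
    biased_seq = rng.choice([0, 1, 2, 3], 500, p=[0.91, 0.03, 0.03, 0.03])  # heavily biased symbols: low entropy, fewer novel phrases
    print(shannon_entropy(random_seq), lz_complexity(random_seq))
    print(shannon_entropy(biased_seq), lz_complexity(biased_seq))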

  9. Influence of mesh density, cortical thickness and material properties on human rib fracture prediction.

    Science.gov (United States)

    Li, Zuoping; Kindig, Matthew W; Subit, Damien; Kent, Richard W

    2010-11-01

    The purpose of this paper was to investigate the sensitivity of the structural responses and bone fractures of the ribs to mesh density, cortical thickness, and material properties so as to provide guidelines for the development of finite element (FE) thorax models used in impact biomechanics. Subject-specific FE models of the second, fourth, sixth and tenth ribs were developed to reproduce dynamic failure experiments. Sensitivity studies were then conducted to quantify the effects of variations in mesh density, cortical thickness, and material parameters on the model-predicted reaction force-displacement relationship, cortical strains, and bone fracture locations for all four ribs. Overall, it was demonstrated that rib FE models consisting of 2000-3000 trabecular hexahedral elements (weighted element length 2-3 mm) and associated quadrilateral cortical shell elements with variable thickness more closely predicted the rib structural responses and bone fracture force-failure displacement relationships observed in the experiments (except the fracture locations), compared to models with constant cortical thickness. Further increases in mesh density increased computational cost but did not markedly improve model predictions. A ±30% change in the major material parameters of cortical bone led to a -16.7 to +33.3% change in fracture displacement and a -22.5 to +19.1% change in fracture force. The results in this study suggest that human rib structural responses can be modeled in an accurate and computationally efficient way using (a) a coarse mesh of 2000-3000 solid elements, (b) cortical shell elements with variable thickness distribution and (c) a rate-dependent elastic-plastic material model. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.

  10. Towards Clinical Application of Neurotrophic Factors to the Auditory Nerve; Assessment of Safety and Efficacy by a Systematic Review of Neurotrophic Treatments in Humans

    NARCIS (Netherlands)

    Bezdjian, Aren; Kraaijenga, Véronique J C; Ramekers, Dyan; Versnel, Huib; Thomeer, Hans G X M; Klis, Sjaak F L; Grolman, Wilko

    2016-01-01

    Animal studies have evidenced protection of the auditory nerve by exogenous neurotrophic factors. In order to assess clinical applicability of neurotrophic treatment of the auditory nerve, the safety and efficacy of neurotrophic therapies in various human disorders were systematically reviewed.

  11. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Human-Specific Cortical Synaptic Connections and Their Plasticity: Is That What Makes Us Human?

    Directory of Open Access Journals (Sweden)

    Joana Lourenço

    2017-01-01

    Full Text Available One outstanding difference between Homo sapiens and other mammals is the ability to perform highly complex cognitive tasks and behaviors, such as language, abstract thinking, and cultural diversity. How is this accomplished? According to one prominent theory, cognitive complexity is proportional to the repetition of specific computational modules over a large surface expansion of the cerebral cortex (neocortex). However, the human neocortex was shown to also possess unique features at the cellular and synaptic levels, raising the possibility that expanding the computational module is not the only mechanism underlying complex thinking. In a study published in PLOS Biology, Szegedi and colleagues analyzed a specific cortical circuit from live postoperative human tissue, showing that human-specific, very powerful excitatory connections between principal pyramidal neurons and inhibitory neurons are highly plastic. This suggests that exclusive plasticity of specific microcircuits might be considered among the mechanisms endowing the human neocortex with the ability to perform highly complex cognitive tasks.

  13. Human cerebral cortices: signal variation on diffusion-weighted MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Asao, Chiaki [Kumamoto Regional Medical Center, Department of Radiology, Kumamoto (Japan); National Hospital Organization Kumamoto Medical Center, Department of Radiology, Kumamoto (Japan); Hirai, Toshinori; Yamashita, Yasuyuki [Kumamoto University Graduate School of Medical Sciences, Department of Diagnostic Radiology, Kumamoto (Japan); Yoshimatsu, Shunji [National Hospital Organization Kumamoto Medical Center, Department of Radiology, Kumamoto (Japan); Matsukawa, Tetsuya; Imuta, Masanori [Kumamoto Regional Medical Center, Department of Radiology, Kumamoto (Japan); Sagara, Katsuro [Kumamoto Regional Medical Center, Department of Internal Medicine, Kumamoto (Japan)

    2008-03-15

    We have often encountered high signal intensity (SI) of the cingulate gyrus and insula during diffusion-weighted magnetic resonance imaging (DW-MRI) in neurologically healthy adults. To date, cortical signal heterogeneity on DW images has not been investigated systematically. The purpose of our study was to determine whether there is regional signal variation in the brain cortices of neurologically healthy adults on DW-MR images. The SI of the cerebral cortices on DW-MR images at 1.5 T was evaluated in 50 neurologically healthy subjects (34 men, 16 women; age range 33-84 years; mean age 57.6 years). The cortical SI in the cingulate gyrus, insula, and temporal, occipital, and parietal lobes was graded relative to the SI of the frontal lobe. Contrast-to-noise ratios (CNRs) on DW-MR images were compared for each cortical area. Diffusion changes were analyzed by visual assessment of the differences in appearance among the cortices on apparent diffusion coefficient (ADC) maps. Increased SI was frequently seen in the cingulate gyrus and insula regardless of patient age. There were no significant gender- or laterality-related differences. The CNR was significantly higher in the cingulate gyrus and insula than in the other cortices (p <.01), and significant differences existed among the cortical regions (p <.001). There were no apparent ADC differences among the cortices on ADC maps. Regional signal variation of the brain cortices was observed on DW-MR images of healthy subjects, and the cingulate gyrus and insula frequently manifested high SI. These findings may help in the recognition of cortical signal abnormalities as visualized on DW-MR images. (orig.)

  14. Human cerebral cortices: signal variation on diffusion-weighted MR imaging

    International Nuclear Information System (INIS)

    Asao, Chiaki; Hirai, Toshinori; Yamashita, Yasuyuki; Yoshimatsu, Shunji; Matsukawa, Tetsuya; Imuta, Masanori; Sagara, Katsuro

    2008-01-01

    We have often encountered high signal intensity (SI) of the cingulate gyrus and insula during diffusion-weighted magnetic resonance imaging (DW-MRI) in neurologically healthy adults. To date, cortical signal heterogeneity on DW images has not been investigated systematically. The purpose of our study was to determine whether there is regional signal variation in the brain cortices of neurologically healthy adults on DW-MR images. The SI of the cerebral cortices on DW-MR images at 1.5 T was evaluated in 50 neurologically healthy subjects (34 men, 16 women; age range 33-84 years; mean age 57.6 years). The cortical SI in the cingulate gyrus, insula, and temporal, occipital, and parietal lobes was graded relative to the SI of the frontal lobe. Contrast-to-noise ratios (CNRs) on DW-MR images were compared for each cortical area. Diffusion changes were analyzed by visual assessment of the differences in appearance among the cortices on apparent diffusion coefficient (ADC) maps. Increased SI was frequently seen in the cingulate gyrus and insula regardless of patient age. There were no significant gender- or laterality-related differences. The CNR was significantly higher in the cingulate gyrus and insula than in the other cortices (p <.01), and significant differences existed among the cortical regions (p <.001). There were no apparent ADC differences among the cortices on ADC maps. Regional signal variation of the brain cortices was observed on DW-MR images of healthy subjects, and the cingulate gyrus and insula frequently manifested high SI. These findings may help in the recognition of cortical signal abnormalities as visualized on DW-MR images. (orig.)

  15. Neural correlates of auditory short-term memory in rostral superior temporal cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo

    2014-12-01

    Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or during both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing and in their resistance to sounds intervening between the sample and match. Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Enhanced brainstem and cortical evoked response amplitudes: single-trial covariance analysis.

    Science.gov (United States)

    Galbraith, G C

    2001-06-01

    The purpose of the present study was to develop analytic procedures that improve the definition of sensory evoked response components. Such procedures could benefit all recordings but would especially benefit difficult recordings where many trials are contaminated by muscle and movement artifacts. First, cross-correlation and latency adjustment analyses were applied to the human brainstem frequency-following response and cortical auditory evoked response recorded on the same trials. Lagged cross-correlation functions were computed, for each of 17 subjects, between single-trial data and templates consisting of the sinusoid stimulus waveform for the brainstem response and the subject's own smoothed averaged evoked response P2 component for the cortical response. Trials were considered in the analysis only if the maximum correlation-squared (r2) exceeded .5 (negatively correlated trials were thus included). Identical correlation coefficients may be based on signals with quite different amplitudes, but it is possible to assess amplitude by the nonnormalized covariance function. Next, an algorithm is applied in which each trial with negative covariance is matched to a trial with similar, but positive, covariance and these matched-trial pairs are deleted. When an evoked response signal is present in the data, the majority of trials positively correlate with the template. Thus, a residual of positively correlated trials remains after matched covariance trials are deleted. When these residual trials are averaged, the resulting brainstem and cortical responses show greatly enhanced amplitudes. This result supports the utility of this analysis technique in clarifying and assessing evoked response signals.
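
    The trial-selection logic described above can be condensed into a few steps: correlate each trial with a template, keep trials whose squared correlation exceeds 0.5, compute the non-normalized covariance, delete matched pairs of negative- and positive-covariance trials, and average the residual. The Python sketch below is a simplified, zero-lag illustration of that scheme; the lagged cross-correlation and latency adjustment of the original method are omitted, and trials and template are assumed to be NumPy arrays of matching sample length.

```python
import numpy as np


def residual_average(trials, template, r2_min=0.5):
    """Average trials surviving matched-covariance deletion (simplified sketch)."""
    t = template - template.mean()
    kept = []                                       # (trial, covariance) pairs
    for x in trials:
        xc = x - x.mean()
        r = np.corrcoef(xc, t)[0, 1]                # zero-lag correlation for brevity
        if r ** 2 > r2_min:
            kept.append((x, float(np.dot(xc, t))))  # non-normalized covariance
    pos = [k for k, (_, c) in enumerate(kept) if c >= 0]
    neg = [k for k, (_, c) in enumerate(kept) if c < 0]
    drop = set()
    for i in neg:                                   # pair each negative-covariance trial
        candidates = [j for j in pos if j not in drop]
        if not candidates:
            break
        j = min(candidates, key=lambda j: abs(kept[j][1] + kept[i][1]))
        drop.update((i, j))                         # delete the matched pair
    residual = [kept[k][0] for k in range(len(kept)) if k not in drop]
    return np.mean(residual, axis=0) if residual else None
```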

  17. Music training alters the course of adolescent auditory development

    Science.gov (United States)

    Tierney, Adam T.; Krizman, Jennifer; Kraus, Nina

    2015-01-01

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  18. Biomimetic Sonar for Electrical Activation of the Auditory Pathway

    Directory of Open Access Journals (Sweden)

    D. Menniti

    2017-01-01

    Full Text Available Relying on the mechanism of the bat's echolocation system, a bioinspired electronic device has been developed to investigate the cortical activity of mammals in response to auditory sensory stimuli. By means of implanted electrodes, acoustic information about the external environment, generated by a biomimetic system and converted into electrical signals, was delivered to anatomically selected structures of the auditory pathway. Electrocorticographic recordings showed that the cerebral activity response is highly dependent on the information carried by the ultrasounds and is frequency-locked with the signal repetition rate. Frequency analysis reveals that delta and beta rhythm content increases, suggesting that sensory information is successfully transferred and integrated. In addition, principal component analysis highlights how all the stimuli generate patterns of neural activity which can be clearly classified. The results show that the brain response is modulated by echo signal features, suggesting that spatial information sent by the biomimetic sonar is efficiently interpreted and encoded by the auditory system. Consequently, these results give a new perspective on artificial environmental perception, which could be used for developing new techniques useful in treating pathological conditions or influencing our perception of the surroundings.

  19. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  20. Tuning of Human Modulation Filters Is Carrier-Frequency Dependent

    Science.gov (United States)

    Simpson, Andrew J. R.; Reiss, Joshua D.; McAlpine, David

    2013-01-01

    Recent studies employing speech stimuli to investigate ‘cocktail-party’ listening have focused on entrainment of cortical activity to modulations at syllabic (5 Hz) and phonemic (20 Hz) rates. The data suggest that cortical modulation filters (CMFs) are dependent on the sound-frequency channel in which modulations are conveyed, potentially underpinning a strategy for separating speech from background noise. Here, we characterize modulation filters in human listeners using a novel behavioral method. Within an ‘inverted’ adaptive forced-choice increment detection task, listening level was varied whilst contrast was held constant for ramped increments with effective modulation rates between 0.5 and 33 Hz. Our data suggest that modulation filters are tonotopically organized (i.e., vary along the primary, frequency-organized, dimension). This suggests that the human auditory system is optimized to track rapid (phonemic) modulations at high sound-frequencies and slow (prosodic/syllabic) modulations at low frequencies. PMID:24009759

  1. Language experience enhances early cortical pitch-dependent responses

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T.; Ananthakrishnan, Saradha; Vijayaraghavan, Venkatakrishnan

    2014-01-01

    Pitch processing at cortical and subcortical stages of processing is shaped by language experience. We recently demonstrated that specific components of the cortical pitch response (CPR) index the more rapidly-changing portions of the high rising Tone 2 of Mandarin Chinese, in addition to marking pitch onset and sound offset. In this study, we examine how language experience (Mandarin vs. English) shapes the processing of different temporal attributes of pitch reflected in the CPR components using stimuli representative of within-category variants of Tone 2. Results showed that the magnitude of CPR components (Na-Pb and Pb-Nb) and the correlation between these two components and pitch acceleration were stronger for the Chinese listeners compared to English listeners for stimuli that fell within the range of Tone 2 citation forms. Discriminant function analysis revealed that the Na-Pb component was more than twice as important as Pb-Nb in grouping listeners by language affiliation. In addition, a stronger stimulus-dependent, rightward asymmetry was observed for the Chinese group at the temporal, but not frontal, electrode sites. This finding may reflect selective recruitment of experience-dependent, pitch-specific mechanisms in right auditory cortex to extract more complex, time-varying pitch patterns. Taken together, these findings suggest that long-term language experience shapes early sensory level processing of pitch in the auditory cortex, and that the sensitivity of the CPR may vary depending on the relative linguistic importance of specific temporal attributes of dynamic pitch. PMID:25506127

  2. Predictive timing functions of cortical beta oscillations are impaired in Parkinson's disease and influenced by L-DOPA and deep brain stimulation of the subthalamic nucleus.

    Science.gov (United States)

    Gulberti, A; Moll, C K E; Hamel, W; Buhmann, C; Koeppen, J A; Boelmans, K; Zittel, S; Gerloff, C; Westphal, M; Schneider, T R; Engel, A K

    2015-01-01

    Cortex-basal ganglia circuits participate in motor timing and temporal perception, and are important for the dynamic configuration of sensorimotor networks in response to exogenous demands. In Parkinson's disease (PD) patients, rhythmic auditory stimulation (RAS) induces motor performance benefits. Hitherto, little is known concerning contributions of the basal ganglia to sensory facilitation and cortical responses to RAS in PD. Therefore, we conducted an EEG study in 12 PD patients before and after surgery for subthalamic nucleus deep brain stimulation (STN-DBS) and in 12 age-matched controls. Here we investigated the effects of levodopa and STN-DBS on resting-state EEG and on the cortical-response profile to slow and fast RAS in a passive-listening paradigm focusing on beta-band oscillations, which are important for auditory-motor coupling. The beta-modulation profile to RAS in healthy participants was characterized by local peaks preceding and following auditory stimuli. In PD patients RAS failed to induce pre-stimulus beta increases. The absence of pre-stimulus beta-band modulation may contribute to impaired rhythm perception in PD. Moreover, post-stimulus beta-band responses were highly abnormal during fast RAS in PD patients. Treatment with levodopa and STN-DBS reinstated a post-stimulus beta-modulation profile similar to controls, while STN-DBS reduced beta-band power in the resting-state. The treatment-sensitivity of beta oscillations suggests that STN-DBS may specifically improve timekeeping functions of cortical beta oscillations during fast auditory pacing.

  3. Serial auditory-evoked potentials in the diagnosis and monitoring of a child with Landau-Kleffner syndrome.

    Science.gov (United States)

    Plyler, Erin; Harkrider, Ashley W

    2013-01-01

    A boy, aged 2 1/2 yr, experienced sudden deterioration of speech and language abilities. He saw multiple medical professionals across 2 yr. By almost 5 yr, his vocabulary diminished from 50 words to 4, and he was referred to our speech and hearing center. The purpose of this study was to heighten awareness of Landau-Kleffner syndrome (LKS) and emphasize the importance of an objective test battery that includes serial auditory-evoked potentials (AEPs) to audiologists who often are on the front lines of diagnosis and treatment delivery when faced with a child experiencing unexplained loss of the use of speech and language. Clinical report. Interview revealed a family history of seizure disorder. Normal social behaviors were observed. Acoustic reflexes and otoacoustic emissions were consistent with normal peripheral auditory function. The child could not complete behavioral audiometric testing or auditory processing tests, so serial AEPs were used to examine central nervous system function. Normal auditory brainstem responses, a replicable Na and absent Pa of the middle latency responses, and abnormal slow cortical potentials suggested dysfunction of auditory processing at the cortical level. The child was referred to a neurologist, who confirmed LKS. At age 7 1/2 yr, after 2 1/2 yr of antiepileptic medications, electroencephalographic (EEG) and audiometric measures normalized. Presently, the child communicates manually with limited use of oral information. Audiologists often are one of the first professionals to assess children with loss of speech and language of unknown origin. Objective, noninvasive, serial AEPs are a simple and valuable addition to the central audiometric test battery when evaluating a child with speech and language regression. The inclusion of these tests will markedly increase the chance for early and accurate referral, diagnosis, and monitoring of a child with LKS which is imperative for a positive prognosis. American Academy of Audiology.

  4. Effects of an NMDA antagonist on the auditory mismatch negativity response to transcranial direct current stimulation.

    Science.gov (United States)

    Impey, Danielle; de la Salle, Sara; Baddeley, Ashley; Knott, Verner

    2017-05-01

    Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a weak constant current to alter cortical excitability and activity temporarily. tDCS-induced increases in neuronal excitability and performance improvements have been observed following anodal stimulation of brain regions associated with visual and motor functions, but relatively little research has been conducted with respect to auditory processing. Recently, pilot study results indicate that anodal tDCS can increase auditory deviance detection, whereas cathodal tDCS decreases auditory processing, as measured by a brain-based event-related potential (ERP), mismatch negativity (MMN). As evidence has shown that tDCS lasting effects may be dependent on N-methyl-D-aspartate (NMDA) receptor activity, the current study investigated the use of dextromethorphan (DMO), an NMDA antagonist, to assess possible modulation of tDCS's effects on both MMN and working memory performance. The study, conducted in 12 healthy volunteers, involved four laboratory test sessions within a randomised, placebo and sham-controlled crossover design that compared pre- and post-anodal tDCS over the auditory cortex (2 mA for 20 minutes to excite cortical activity temporarily and locally) and sham stimulation (i.e. device is turned off) during both DMO (50 mL) and placebo administration. Anodal tDCS increased MMN amplitudes with placebo administration. Significant increases were not seen with sham stimulation or with anodal stimulation during DMO administration. With sham stimulation (i.e. no stimulation), DMO decreased MMN amplitudes. Findings from this study contribute to the understanding of underlying neurobiological mechanisms mediating tDCS sensory and memory improvements.

  5. A cortical–hippocampal–cortical loop of information processing during memory consolidation

    Science.gov (United States)

    Rothschild, Gideon; Eban, Elad; Frank, Loren M

    2018-01-01

    Hippocampal replay during sharp-wave ripple events (SWRs) is thought to drive memory consolidation in hippocampal and cortical circuits. Changes in neocortical activity can precede SWR events, but whether and how these changes influence the content of replay remains unknown. Here we show that during sleep there is a rapid cortical–hippocampal–cortical loop of information flow around the times of SWRs. We recorded neural activity in auditory cortex (AC) and hippocampus of rats as they learned a sound-guided task and during sleep. We found that patterned activation in AC precedes and predicts the subsequent content of hippocampal activity during SWRs, while hippocampal patterns during SWRs predict subsequent AC activity. Delivering sounds during sleep biased AC activity patterns, and sound-biased AC patterns predicted subsequent hippocampal activity. These findings suggest that activation of specific cortical representations during sleep influences the identity of the memories that are consolidated into long-term stores. PMID:27941790

  6. An automatic algorithm for blink-artifact suppression based on iterative template matching: application to single channel recording of cortical auditory evoked potentials

    Science.gov (United States)

    Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram

    2018-02-01

    Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large number of denoising techniques available with a multichannel setup, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates the iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with the blink activity from a single channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents an adequate performance in detecting and suppressing blink-artifacts from a single channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses, i.e. in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, both in clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; low-invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the Matlab script implementing the algorithm provided as supporting material.
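
    The three-step approach summarized above can be illustrated with a simplified Python sketch: detect blink events, iteratively re-estimate the subject-specific blink template, build a signal modeling the artifact, and subtract it. This is not the published ITMS implementation; the detection threshold, epoch length, and per-event scaling are illustrative assumptions, and eeg is taken to be a single-channel recording sampled at fs Hz.

```python
import numpy as np


def suppress_blinks(eeg, fs, half_width_s=0.25, threshold=4.0, n_iter=3):
    """Simplified iterative template matching and suppression (illustrative)."""
    eeg = np.asarray(eeg, dtype=float)
    half = int(half_width_s * fs)
    template, events = None, []
    for _ in range(n_iter):
        # 1) detect candidate blink events on the raw or template-matched signal
        score = eeg if template is None else np.correlate(eeg, template, mode="same")
        z = (score - score.mean()) / score.std()
        events = [i for i in range(half, len(eeg) - half)
                  if z[i] > threshold and z[i] == z[i - half:i + half].max()]
        if not events:
            break
        # 2) re-estimate the subject-specific blink waveform from detected epochs
        template = np.mean([eeg[i - half:i + half] for i in events], axis=0)
    if template is None or not events:
        return eeg                                  # nothing detected; EEG left unaltered
    # 3) build a signal modeling the artifact and suppress it from the raw EEG
    model = np.zeros_like(eeg)
    for i in events:
        segment = eeg[i - half:i + half]
        gain = np.dot(segment, template) / np.dot(template, template)
        model[i - half:i + half] += gain * template
    return eeg - model
```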

  7. Dynamics of human subthalamic neuron phase-locking to motor and sensory cortical oscillations during movement.

    Science.gov (United States)

    Lipski, Witold J; Wozny, Thomas A; Alhourani, Ahmad; Kondylis, Efstathios D; Turner, Robert S; Crammond, Donald J; Richardson, Robert Mark

    2017-09-01

    Coupled oscillatory activity recorded between sensorimotor regions of the basal ganglia-thalamocortical loop is thought to reflect information transfer relevant to movement. A neuronal firing-rate model of basal ganglia-thalamocortical circuitry, however, has dominated thinking about basal ganglia function for the past three decades, without knowledge of the relationship between basal ganglia single neuron firing and cortical population activity during movement itself. We recorded activity from 34 subthalamic nucleus (STN) neurons, simultaneously with cortical local field potentials and motor output, in 11 subjects with Parkinson's disease (PD) undergoing awake deep brain stimulator lead placement. STN firing demonstrated phase synchronization to both low- and high-beta-frequency cortical oscillations, and to the amplitude envelope of gamma oscillations, in motor cortex. We found that during movement, the magnitude of this synchronization was dynamically modulated in a phase-frequency-specific manner. Importantly, we found that phase synchronization was not correlated with changes in neuronal firing rate. Furthermore, we found that these relationships were not exclusive to motor cortex, because STN firing also demonstrated phase synchronization to both premotor and sensory cortex. The data indicate that models of basal ganglia function ultimately will need to account for the activity of populations of STN neurons that are bound in distinct functional networks with both motor and sensory cortices and code for movement parameters independent of changes in firing rate. NEW & NOTEWORTHY Current models of basal ganglia-thalamocortical networks do not adequately explain simple motor functions, let alone dysfunction in movement disorders. Our findings provide data that inform models of human basal ganglia function by demonstrating how movement is encoded by networks of subthalamic nucleus (STN) neurons via dynamic phase synchronization with cortex. The data also

  8. Intonational speech prosody encoding in the human auditory cortex.

    Science.gov (United States)

    Tang, C; Hamilton, L S; Chang, E F

    2017-08-25

    Speakers of all human languages regularly use intonational pitch to convey linguistic meaning, such as to emphasize a particular word. Listeners extract pitch movements from speech and evaluate the shape of intonation contours independent of each speaker's pitch range. We used high-density electrocorticography to record neural population activity directly from the brain surface while participants listened to sentences that varied in intonational pitch contour, phonetic content, and speaker. Cortical activity at single electrodes over the human superior temporal gyrus selectively represented intonation contours. These electrodes were intermixed with, yet functionally distinct from, sites that encoded different information about phonetic features or speaker identity. Furthermore, the representation of intonation contours directly reflected the encoding of speaker-normalized relative pitch but not absolute pitch. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
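
    One common convention for speaker-normalized relative pitch, consistent with the distinction drawn above between relative and absolute pitch, is to z-score log-transformed F0 within each speaker. The short sketch below assumes that convention; the study's own normalization may differ in detail.

```python
import numpy as np


def relative_pitch(f0_hz, speaker_ids):
    """Z-score log F0 within each speaker: a simple proxy for relative pitch."""
    f0 = np.log(np.asarray(f0_hz, dtype=float))
    ids = np.asarray(speaker_ids)
    rel = np.empty_like(f0)
    for speaker in np.unique(ids):
        m = ids == speaker
        rel[m] = (f0[m] - f0[m].mean()) / f0[m].std()   # contour shape, not pitch range
    return rel
```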

  9. Sensory information in local field potentials and spikes from visual and auditory cortices: time scales and frequency bands.

    Science.gov (United States)

    Belitski, Andrei; Panzeri, Stefano; Magri, Cesare; Logothetis, Nikos K; Kayser, Christoph

    2010-12-01

    Studies analyzing sensory cortical processing or trying to decode brain activity often rely on a combination of different electrophysiological signals, such as local field potentials (LFPs) and spiking activity. Understanding the relation between these signals and sensory stimuli and between different components of these signals is hence of great interest. We here provide an analysis of LFPs and spiking activity recorded from visual and auditory cortex during stimulation with natural stimuli. In particular, we focus on the time scales on which different components of these signals are informative about the stimulus, and on the dependencies between different components of these signals. Addressing the first question, we find that stimulus information carried by the energy of slow LFP fluctuations is available even on short time scales, whereas information in high-frequency bands (above 50 Hz) is scale dependent and is larger when the energy is averaged over several hundreds of milliseconds. Indeed, combined analysis of signal reliability and information revealed that the energy of slow LFP fluctuations is well related to the stimulus even when considering individual or few cycles, while the energy of fast LFP oscillations carries information only when averaged over many cycles. Addressing the second question, we find that stimulus information in different LFP bands, and in different LFP bands and spiking activity, is largely independent regardless of time scale or sensory system. Taken together, these findings suggest that different LFP bands represent dynamic natural stimuli on distinct time scales and together provide a potentially rich source of information for sensory processing or decoding brain activity.

  10. Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.

    Science.gov (United States)

    Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O

    2008-11-11

    Dejerine and Benson and Geschwind postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporooccipital region. Recently, functional imaging studies have provided evidence for direct participation of basal temporal and occipital cortices in the cognitive process of reading. The exact location and function of these areas remain a matter of debate. The aim of this study was to confirm the participation of the basal temporal region in reading. Extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words, with intact letter-by-letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA), where stimulation resulted in global language dysfunction in visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with the basal temporal language area. A portion of the visual language area was exclusively involved in lexical processing while the other part of this region processed both lexical and nonlexical symbols.

  11. Knockdown of the dyslexia-associated gene Kiaa0319 impairs temporal responses to speech stimuli in rat primary auditory cortex.

    Science.gov (United States)

    Centanni, T M; Booker, A B; Sloan, A M; Chen, F; Maher, B J; Carraway, R S; Khodaparast, N; Rennaker, R; LoTurco, J J; Kilgard, M P

    2014-07-01

    One in 15 school age children have dyslexia, which is characterized by phoneme-processing problems and difficulty learning to read. Dyslexia is associated with mutations in the gene KIAA0319. It is not known whether reduced expression of KIAA0319 can degrade the brain's ability to process phonemes. In the current study, we used RNA interference (RNAi) to reduce expression of Kiaa0319 (the rat homolog of the human gene KIAA0319) and evaluate the effect in a rat model of phoneme discrimination. Speech discrimination thresholds in normal rats are nearly identical to human thresholds. We recorded multiunit neural responses to isolated speech sounds in primary auditory cortex (A1) of rats that received in utero RNAi of Kiaa0319. Reduced expression of Kiaa0319 increased the trial-by-trial variability of speech responses and reduced the neural discrimination ability of speech sounds. Intracellular recordings from affected neurons revealed that reduced expression of Kiaa0319 increased neural excitability and input resistance. These results provide the first evidence that decreased expression of the dyslexia-associated gene Kiaa0319 can alter cortical responses and impair phoneme processing in auditory cortex. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Auditory evoked potentials: predicting speech therapy outcomes in children with phonological disorders

    Directory of Open Access Journals (Sweden)

    Renata Aparecida Leite

    2014-03-01

    Full Text Available OBJECTIVES: This study investigated whether neurophysiologic responses (auditory evoked potentials) differ between typically developed children and children with phonological disorders and whether these responses are modified in children with phonological disorders after speech therapy. METHODS: The participants included 24 typically developing children (Control Group, mean age: eight years and ten months) and 23 children clinically diagnosed with phonological disorders (Study Group, mean age: eight years and eleven months). Additionally, 12 study group children were enrolled in speech therapy (Study Group 1), and 11 were not enrolled in speech therapy (Study Group 2). The subjects were submitted to the following procedures: conventional audiological, auditory brainstem response, auditory middle-latency response, and P300 assessments. All participants presented with normal hearing thresholds. The study group 1 subjects were reassessed after 12 speech therapy sessions, and the study group 2 subjects were reassessed 3 months after the initial assessment. Electrophysiological results were compared between the groups. RESULTS: Latency differences were observed between the groups (the control and study groups) regarding the auditory brainstem response and the P300 tests. Additionally, the P300 responses improved in the study group 1 children after speech therapy. CONCLUSION: The findings suggest that children with phonological disorders have impaired auditory brainstem and cortical region pathways that may benefit from speech therapy.

  13. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997...... discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key...... properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications....

  14. Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.

    Science.gov (United States)

    de Jong, Ritske; Toffanin, Paolo; Harbers, Marten

    2010-01-01

    Frequency tagging has been often used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: An attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
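
    Frequency tagging quantifies attention effects through the amplitude of the steady-state response at the tagging frequency. The Python sketch below shows one minimal way of extracting that amplitude (plus a simple neighboring-bin SNR) from epoched data; the sampling rate, tag frequency, and noise-bin choice are illustrative assumptions rather than the parameters of the study.

```python
import numpy as np


def ssr_amplitude(epochs, fs, tag_hz):
    """epochs: (n_trials, n_samples) array time-locked to stimulus onset."""
    evoked = epochs.mean(axis=0)                    # average out non-phase-locked activity
    spectrum = np.fft.rfft(evoked) / evoked.size
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - tag_hz)))      # bin closest to the tag frequency
    signal = np.abs(spectrum[k])
    noise = np.abs(spectrum[[k - 2, k - 1, k + 1, k + 2]]).mean()  # neighboring bins
    return signal, signal / noise                   # amplitude and a simple SNR


# Usage (illustrative): compare focused vs. divided attention at a 40 Hz tag.
# amp_focused, _ = ssr_amplitude(epochs_focused, fs=512, tag_hz=40.0)
# amp_divided, _ = ssr_amplitude(epochs_divided, fs=512, tag_hz=40.0)
```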

  15. Catecholaminergic consolidation of motor cortical neuroplasticity in humans.

    Science.gov (United States)

    Nitsche, Michael A; Grundey, Jessica; Liebetanz, David; Lang, Nicolas; Tergau, Frithjof; Paulus, Walter

    2004-11-01

    Amphetamine, a catecholaminergic re-uptake-blocker, is able to improve neuroplastic mechanisms in humans. However, so far not much is known about the underlying physiological mechanisms. Here, we study the impact of amphetamine on NMDA receptor-dependent long-lasting excitability modifications in the human motor cortex elicited by weak transcranial direct current stimulation (tDCS). Amphetamine significantly enhanced and prolonged the anodal tDCS-induced long-lasting excitability increases. Under amphetamine premedication, anodal tDCS resulted in an enhancement of excitability which lasted until the morning after tDCS, compared to approximately 1 h in the placebo condition. Prolongation of the excitability enhancement was most pronounced for long-term effects; the duration of short-term excitability enhancement was only slightly increased. Since the additional application of the NMDA receptor antagonist dextromethorphan blocked any enhancement of tDCS-driven excitability under amphetamine, we conclude that amphetamine consolidates the tDCS-induced neuroplastic effects, but does not initiate them. The fact that propranolol, a beta-adrenergic antagonist, diminished the duration of the tDCS-generated after-effects suggests that adrenergic receptors play a certain role in the consolidation of NMDA receptor-dependent motor cortical excitability modifications in humans. This result may enable researchers to optimize neuroplastic processes in the human brain on the rational basis of purpose-designed pharmacological interventions.

  16. Plasticity of the human auditory cortex related to musical training.

    Science.gov (United States)

    Pantev, Christo; Herholz, Sibylle C

    2011-11-01

    During the last decades music neuroscience has become a rapidly growing field within the area of neuroscience. Music is particularly well suited for studying neuronal plasticity in the human brain because musical training is more complex and multimodal than most other daily life activities, and because prospective and professional musicians usually pursue the training with high and long-lasting commitment. Therefore, music has increasingly been used as a tool for the investigation of human cognition and its underlying brain mechanisms. Music relates to many brain functions like perception, action, cognition, emotion, learning and memory and therefore music is an ideal tool to investigate how the human brain is working and how different brain functions interact. Novel findings have been obtained in the field of induced cortical plasticity by musical training. The positive effects, which music in its various forms has in the healthy human brain are not only important in the framework of basic neuroscience, but they also will strongly affect the practices in neuro-rehabilitation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Self vs. other: neural correlates underlying agent identification based on unimodal auditory information as revealed by electrotomography (sLORETA).

    Science.gov (United States)

    Justen, C; Herbert, C; Werner, K; Raab, M

    2014-02-14

    Recent neuroscientific studies have identified activity changes in an extensive cerebral network consisting of medial prefrontal cortex, precuneus, temporo-parietal junction, and temporal pole during the perception and identification of self- and other-generated stimuli. Because this network is supposed to be engaged in tasks which require agent identification, it has been labeled the evaluation network (e-network). The present study used self- versus other-generated movement sounds (long jumps) and electroencephalography (EEG) in order to unravel the neural dynamics of agent identification for complex auditory information. Participants (N=14) performed an auditory self-other identification task with EEG. Data were then subjected to a subsequent standardized low-resolution brain electromagnetic tomography (sLORETA) analysis (source localization analysis). Differences between conditions were assessed using t-statistics (corrected for multiple testing) on the normalized and log-transformed current density values of the sLORETA images. Three-dimensional sLORETA source localization analysis revealed cortical activations in brain regions mostly associated with the e-network, especially in the medial prefrontal cortex (bilaterally in the alpha-1-band and right-lateralized in the gamma-band) and the temporo-parietal junction (right hemisphere in the alpha-1-band). Taken together, the findings are partly consistent with previous functional neuroimaging studies investigating unimodal visual or multimodal agent identification tasks (cf. e-network) and extend them to the auditory domain. Cortical activations in brain regions of the e-network seem to have functional relevance, especially the significantly higher cortical activation in the right medial prefrontal cortex. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
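
    The statistical step mentioned above (t-statistics on normalized, log-transformed current density, corrected for multiple testing) is commonly implemented as voxel-wise paired t-tests with a nonparametric max-statistic correction. The sketch below assumes that sign-flip permutation scheme; the sLORETA software's built-in procedure may differ in detail.

```python
import numpy as np


def voxelwise_t(cond_a, cond_b, n_perm=2000, seed=0):
    """cond_a, cond_b: (n_subjects, n_voxels) current-density arrays (paired)."""
    rng = np.random.default_rng(seed)
    diff = np.log(cond_a) - np.log(cond_b)              # log-transform, paired differences
    n = diff.shape[0]
    t_obs = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n))
    max_null = np.empty(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))    # random sign flips per subject
        d = diff * signs
        max_null[p] = np.abs(d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))).max()
    # corrected p-value per voxel: fraction of permutation maxima exceeding |t|
    p_corr = (max_null[:, None] >= np.abs(t_obs)[None, :]).mean(0)
    return t_obs, p_corr
```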

  18. Present and past: Can writing abilities in school children be associated with their auditory discrimination capacities in infancy?

    Science.gov (United States)

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Friederici, Angela D

    2015-12-01

    Literacy acquisition is highly associated with auditory processing abilities, such as auditory discrimination. The event-related potential Mismatch Response (MMR) is an indicator of cortical auditory discrimination abilities, and it has been found to be reduced in individuals with reading and writing impairments and also in infants at risk for these impairments. The goal of the present study was to analyze the relationship between auditory speech discrimination in infancy and writing abilities at school age within subjects, and to determine when auditory speech discrimination differences, relevant for later writing abilities, start to develop. We analyzed the MMR registered in response to natural syllables in German children with and without writing problems at two points during development, that is, at school age and in infancy, namely at ages 1 month and 5 months. We observed MMR-related auditory discrimination differences between infants with and without later writing problems, starting to develop at age 5 months, an age when infants begin to establish language-specific phoneme representations. At school age, these children with and without writing problems also showed auditory discrimination differences, reflected in the MMR, confirming a relationship between writing and auditory speech processing skills. Thus, writing problems at school age are, at least, partly grounded in auditory discrimination problems developing already during the first months of life. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Individual Differences in Auditory Sentence Comprehension in Children: An Exploratory Event-Related Functional Magnetic Resonance Imaging Investigation

    Science.gov (United States)

    Yeatman, Jason D.; Ben-Shachar, Michal; Glover, Gary H.; Feldman, Heidi M.

    2010-01-01

    The purpose of this study was to explore changes in activation of the cortical network that serves auditory sentence comprehension in children in response to increasing demands of complex sentences. A further goal is to study how individual differences in children's receptive language abilities are associated with such changes in cortical…

  20. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    Science.gov (United States)

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. The aim was to obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants; article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were shortlisted based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory band activity may act as

  1. Human occipital cortices differentially exert saccadic suppression: intracranial recording in children

    Science.gov (United States)

    Uematsu, Mitsugu; Matsuzaki, Naoyuki; Brown, Erik C.; Kojima, Katsuaki; Asano, Eishi

    2013-01-01

    By repeating saccades unconsciously, humans explore the surrounding world every day. Saccades inevitably move external visual images across the retina at high velocity; nonetheless, healthy humans don’t perceive transient blurring of the visual scene during saccades. This perceptual stability is referred to as saccadic suppression. Functional suppression is believed to take place transiently in the visual systems, but it remains unknown how commonly or differentially the human occipital lobe activities are suppressed at the large-scale cortical network level. We determined the spatial-temporal dynamics of intracranially-recorded gamma activity at 80–150 Hz around spontaneous saccades under no-task conditions during wakefulness and those in darkness during REM sleep. Regardless of wakefulness or REM sleep, a small degree of attenuation of gamma activity was noted in the occipital regions during saccades, most extensively in the polar and least in the medial portions. Longer saccades were associated with more intense gamma-attenuation. Gamma-attenuation was subsequently followed by gamma-augmentation most extensively involving the medial and least involving the polar occipital region. Such gamma-augmentation was more intense during wakefulness and temporally locked to the offset of saccades. The polarities of initial peaks of perisaccadic event-related potentials (ERPs) were frequently positive in the medial and negative in the polar occipital regions. The present study, for the first time, provided the electrophysiological evidence that human occipital cortices differentially exert peri-saccadic modulation. Transiently suppressed sensitivity of the primary visual cortex in the polar region may be an important neural basis for saccadic suppression. Presence of occipital gamma-attenuation even during REM sleep suggests that saccadic suppression might be exerted even without external visual inputs. The primary visual cortex in the medial region, compared to the

  2. A genome-wide search for quantitative trait loci affecting the cortical surface area and thickness of Heschl's gyrus

    NARCIS (Netherlands)

    Cai, D.C.; Fonteijn, H.M.; Guadalupe, T.M.; Zwiers, M.P.; Wittfeld, K.; Teumer, A.; Hoogman, M.; Arias Vasquez, A.; Yang, Y; Buitelaar, J.K.; Fernandez, G.S.E.; Brunner, H.G.; Bokhoven, H. van; Franke, B.; Hegenscheid, K.; Homuth, G.; Fisher, S.E.; Grabe, H.J.; Francks, C.; Hagoort, P.

    2014-01-01

    Heschl's gyrus (HG) is a core region of the auditory cortex whose morphology is highly variable across individuals. This variability has been linked to sound perception ability in both speech and music domains. Previous studies show that variations in morphological features of HG, such as cortical

  3. Demodulation Processes in Auditory Perception

    National Research Council Canada - National Science Library

    Feth, Lawrence

    1997-01-01

    The long range goal of this project was the understanding of human auditory processing of information conveyed by complex, time varying signals such as speech, music or important environmental sounds...

  4. On the homogeneity and heterogeneity of cortical thickness profiles in Homo sapiens sapiens.

    Science.gov (United States)

    Koten, Jan Willem; Schüppen, André; Morozova, Maria; Lehofer, Agnes; Koschutnig, Karl; Wood, Guilherme

    2017-12-20

    Cortical thickness has been investigated since the beginning of the 20th century, but we do not know how similar the cortical thickness profiles among humans are. In this study, the local similarity of cortical thickness profiles was investigated using sliding window methods. Here, we show that approximately 5% of the cortical thickness profiles are similarly expressed among humans while 45% of the cortical thickness profiles show a high level of heterogeneity. Therefore, heterogeneity is the rule, not the exception. Cortical thickness profiles of somatosensory homunculi and the anterior insula are consistent among humans, while the cortical thickness profiles of the motor homunculus are more variable. Cortical thickness profiles of homunculi that code for muscle position and skin stimulation are highly similar among humans despite large differences in sex, education, and age. This finding suggests that the structure of these cortices remains well preserved over a lifetime. Our observations possibly relativize opinions on cortical plasticity.

  5. Aging and Fracture of Human Cortical Bone and Tooth Dentin

    Energy Technology Data Exchange (ETDEWEB)

    Ager, Joel; Koester, Kurt J.; Ager III, Joel W.; Ritchie, Robert O.

    2008-05-07

    Mineralized tissues, such as bone and tooth dentin, serve as structural materials in the human body and, as such, have evolved to resist fracture. In assessing their quantitative fracture resistance or toughness, it is important to distinguish between intrinsic toughening mechanisms which function ahead of the crack tip, such as plasticity in metals, and extrinsic mechanisms which function primarily behind the tip, such as crack bridging in ceramics. Bone and dentin derive their resistance to fracture principally from extrinsic toughening mechanisms which have their origins in the hierarchical microstructure of these mineralized tissues. Experimentally, quantification of these toughening mechanisms requires a crack-growth resistance approach, which can be achieved by measuring the crack-driving force, e.g., the stress intensity, as a function of crack extension ("R-curve approach"). Here this methodology is used to study the effect of aging on the fracture properties of human cortical bone and human dentin in order to discern the microstructural origins of toughness in these materials.
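
    For reference, the R-curve bookkeeping assumed above can be written in its textbook form: the crack-driving force is expressed as a stress intensity for a crack of length a under applied stress sigma (geometry factor Y), and toughness is reported as the resistance K_R versus crack extension Delta a; a rising R-curve indicates extrinsic, crack-tip-shielding toughening.

```latex
% Standard R-curve formulation (textbook form, assumed here for illustration)
\[
  K \;=\; Y\,\sigma\sqrt{\pi a}, \qquad
  K_R \;=\; K_R(\Delta a), \qquad \Delta a \;=\; a - a_0 .
\]
```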

  6. Cross-Modal Recruitment of Auditory and Orofacial Areas During Sign Language in a Deaf Subject.

    Science.gov (United States)

    Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa

    2017-09-01

    Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent surgery of a left temporoinsular low-grade glioma. The patient underwent awake surgery with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Cortical activity during cued picture naming predicts individual differences in stuttering frequency.

    Science.gov (United States)

    Mock, Jeffrey R; Foundas, Anne L; Golob, Edward J

    2016-09-01

    Developmental stuttering is characterized by fluent speech punctuated by stuttering events, the frequency of which varies among individuals and contexts. Most stuttering events occur at the beginning of an utterance, suggesting neural dynamics associated with stuttering may be evident during speech preparation. This study used EEG to measure cortical activity during speech preparation in men who stutter, and compared the EEG measures to individual differences in stuttering rate as well as to a fluent control group. Each trial contained a cue followed by an acoustic probe at one of two onset times (early or late), and then a picture. There were two conditions: a speech condition where cues induced speech preparation of the picture's name and a control condition that minimized speech preparation. Across conditions stuttering frequency correlated to cue-related EEG beta power and auditory ERP slow waves from early onset acoustic probes. The findings reveal two new cortical markers of stuttering frequency that were present in both conditions, manifest at different times, are elicited by different stimuli (visual cue, auditory probe), and have different EEG responses (beta power, ERP slow wave). The cue-target paradigm evoked brain responses that correlated to pre-experimental stuttering rate. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
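
    A minimal sketch of the kind of analysis described, relating cue-period beta-band power to per-subject stuttering rate with a simple correlation, is given below; the sampling rate, window, and all data are synthetic placeholders rather than values from the study.

```python
# Hedged sketch: correlate cue-related beta (13-30 Hz) power with stuttering
# rate across subjects. All numbers are synthetic stand-ins.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

fs = 250
rng = np.random.default_rng(2)
cue_epochs = rng.standard_normal((15, fs))       # 15 subjects x 1 s cue window

def beta_power(x, fs, lo=13.0, hi=30.0):
    f, pxx = welch(x, fs=fs, nperseg=len(x))
    return pxx[(f >= lo) & (f <= hi)].mean()

beta = np.array([beta_power(e, fs) for e in cue_epochs])
stutter_rate = rng.uniform(1, 15, size=15)       # percent stuttered syllables (toy)
r, p = pearsonr(beta, stutter_rate)
print(f"r = {r:.2f}, p = {p:.3f}")
```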

  8. Effects of long-term non-traumatic noise exposure on the adult central auditory system. Hearing problems without hearing loss.

    Science.gov (United States)

    Eggermont, Jos J

    2017-09-01

    It is known that hearing loss induces plastic changes in the brain, causing loudness recruitment and hyperacusis, increased spontaneous firing rates and neural synchrony, reorganizations of the cortical tonotopic maps, and tinnitus. Much less is known about the central effects of exposure to sounds that cause a temporary hearing loss, affect the ribbon synapses in the inner hair cells, and cause a loss of high-threshold auditory nerve fibers. In contrast, there is a wealth of information about central effects of long-duration sound exposures at levels ≤80 dB SPL that do not even cause a temporary hearing loss. The central effects for these moderate-level exposures described in this review include changes in central gain, increased spontaneous firing rates and neural synchrony, and reorganization of the cortical tonotopic map. A putative mechanism is outlined, and the effect of the acoustic environment during the recovery process is illustrated. Parallels are drawn with hearing problems in humans with long-duration exposures to occupational noise but with clinically normal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Speech rhythms and multiplexed oscillatory sensory coding in the human brain.

    Directory of Open Access Journals (Sweden)

    Joachim Gross

    2013-12-01

    Full Text Available Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations.

  10. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain

    Science.gov (United States)

    Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon

    2013-01-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472
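
    The cross-frequency coupling reported here (low-frequency phase modulating gamma amplitude) can be illustrated with a mean-vector-length coupling index computed on a toy signal; the filter bands and signal are assumptions for demonstration, not the paper's analysis code.

```python
# Minimal phase-amplitude coupling sketch: theta phase vs. gamma amplitude,
# quantified with an (unnormalized) mean-vector-length index. Toy data only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 5 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 60 * t)    # gamma amplitude follows theta phase
sig = theta + 0.5 * gamma + 0.1 * np.random.default_rng(1).standard_normal(t.size)

phase = np.angle(hilbert(bandpass(sig, 4, 7, fs)))
amp = np.abs(hilbert(bandpass(sig, 50, 70, fs)))
coupling = np.abs(np.mean(amp * np.exp(1j * phase)))
print(round(float(coupling), 3))
```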

  11. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    Science.gov (United States)

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

    Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster by 57 ms as compared to reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.

  12. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise.

    Science.gov (United States)

    Ioannou, Christos I; Pereda, Ernesto; Lindsen, Job P; Bhattacharya, Joydeep

    2015-01-01

    The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG is not conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase synchrony based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had significant impact on cortical network patterns in the alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, and demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a kind of neuronal entrainment of a linear and nonlinear relationship to the beating frequencies.
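
    The two phase-synchrony measures named in this abstract, phase locking value (PLV) and phase lag index (PLI), can be computed from instantaneous phase differences obtained via the Hilbert transform, as in the short sketch below; the toy sinusoids stand in for band-passed EEG channels and the code is illustrative rather than the authors' implementation.

```python
# PLV and PLI between two signals from their Hilbert phases (toy example).
import numpy as np
from scipy.signal import hilbert

def plv_pli(x, y):
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    plv = np.abs(np.mean(np.exp(1j * dphi)))        # phase locking value
    pli = np.abs(np.mean(np.sign(np.sin(dphi))))    # phase lag index
    return plv, pli

fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * rng.standard_normal(t.size)
print(plv_pli(x, y))
```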

  13. Electrical Brain Responses to an Auditory Illusion and the Impact of Musical Expertise.

    Directory of Open Access Journals (Sweden)

    Christos I Ioannou

    Full Text Available The presentation of two sinusoidal tones, one to each ear, with a slight frequency mismatch yields an auditory illusion of a beating frequency equal to the frequency difference between the two tones; this is known as binaural beat (BB). The effect of brief BB stimulation on scalp EEG is not conclusively demonstrated. Further, no studies have examined the impact of musical training associated with BB stimulation, yet musicians' brains are often associated with enhanced auditory processing. In this study, we analysed EEG brain responses from two groups, musicians and non-musicians, when stimulated by short presentation (1 min) of binaural beats with beat frequency varying from 1 Hz to 48 Hz. We focused our analysis on alpha and gamma band EEG signals, and they were analysed in terms of spectral power, and functional connectivity as measured by two phase synchrony based measures, phase locking value and phase lag index. Finally, these measures were used to characterize the degree of centrality, segregation and integration of the functional brain network. We found that beat frequencies belonging to alpha band produced the most significant steady-state responses across groups. Further, processing of low frequency (delta, theta, alpha) binaural beats had significant impact on cortical network patterns in the alpha band oscillations. Altogether these results provide a neurophysiological account of cortical responses to BB stimulation at varying frequencies, and demonstrate a modulation of cortico-cortical connectivity in musicians' brains, and further suggest a kind of neuronal entrainment of a linear and nonlinear relationship to the beating frequencies.

  14. Lifespan anxiety is reflected in human amygdala cortical connectivity

    Science.gov (United States)

    He, Ye; Xu, Ting; Zhang, Wei

    2016-01-01

    Abstract The amygdala plays a pivotal role in processing anxiety and connects to large‐scale brain networks. However, intrinsic functional connectivity (iFC) between amygdala and these networks has rarely been examined in relation to anxiety, especially across the lifespan. We employed resting‐state functional MRI data from 280 healthy adults (18–83.5 yrs) to elucidate the relationship between anxiety and amygdala iFC with common cortical networks including the visual network, somatomotor network, dorsal attention network, ventral attention network, limbic network, frontoparietal network, and default network. Global and network‐specific iFC were separately computed as mean iFC of amygdala with the entire cerebral cortex and each cortical network. We detected negative correlation between global positive amygdala iFC and trait anxiety. Network‐specific associations between amygdala iFC and anxiety were also detectable. Specifically, the higher iFC strength between the left amygdala and the limbic network predicted lower state anxiety. For the trait anxiety, left amygdala anxiety–connectivity correlation was observed in both somatomotor and dorsal attention networks, whereas the right amygdala anxiety–connectivity correlation was primarily distributed in the frontoparietal and ventral attention networks. Ventral attention network exhibited significant anxiety–gender interactions on its iFC with amygdala. Together with findings from additional vertex‐wise analysis, these data clearly indicated that both low‐level sensory networks and high‐level associative networks could contribute to detectable predictions of anxiety behaviors by their iFC profiles with the amygdala. This set of systems neuroscience findings could lead to novel functional network models on neural correlates of human anxiety and provide targets for novel treatment strategies on anxiety disorders. Hum Brain Mapp 37:1178–1193, 2016. © 2015 The Authors Human Brain Mapping

  15. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    Science.gov (United States)

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

    The formation of echoic memory traces has traditionally been inferred from the enhanced responses to their deviations. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250 ms after sound deviation, is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250 ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little interest. This study intends to investigate the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, where trains of different frequencies and different lengths are presented randomly. Source generators of repetition enhanced (RE) and suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95-150 ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230-270 ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, like the medial parietal cortex and frontal areas. The different timing and location of neural generators involved in RS and RE point to the existence of functionally separated mechanisms devoted to acoustic memory-trace formation in different auditory processing stages of the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Discrimination of timbre in early auditory responses of the human brain.

    Directory of Open Access Journals (Sweden)

    Jaeho Seol

    Full Text Available BACKGROUND: The issue of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employ phasing and clipping of tones to produce auditory stimuli that differ in timbre, reflecting its multidimensional nature. We investigated the auditory response and sensory gating using magnetoencephalography (MEG). METHODOLOGY/PRINCIPAL FINDINGS: Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones, either the same or different in timbre, were presented in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result suggests that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. The effect of S1 on the second response of a pair appeared in the M100 of the left hemisphere, whereas in the right hemisphere both M50 and M100 responses to S2 reflected whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  17. Auditory processing in absolute pitch possessors

    Science.gov (United States)

    McKetton, Larissa; Schneider, Keith A.

    2018-05-01

    Absolute pitch (AP) is a rare ability to classify a musical pitch without a reference standard. It has been of great interest to researchers studying auditory processing and music cognition, since it is seldom expressed and sheds light on the influence of neurodevelopmental biological predispositions and the onset of musical training. We investigated the smallest detectable frequency difference, or just noticeable difference (JND), between two pitches. Here, we report significant differences in JND thresholds for AP musicians and non-AP musicians compared to non-musician control groups at both 1000 Hz and 987.76 Hz testing frequencies. Although the AP musicians did better than non-AP musicians, the difference was not significant. In addition, we looked at neuro-anatomical correlates of musicianship and AP using structural MRI. We report increased cortical thickness of the left Heschl's gyrus (HG) and decreased cortical thickness of the inferior frontal opercular gyrus (IFO) and circular insular sulcus volume (CIS) in AP compared to non-AP musicians and controls. These structures may therefore be respectively enhanced and reduced to form an efficient network from which AP can emerge.

  18. Common cortical responses evoked by appearance, disappearance and change of the human face

    Directory of Open Access Journals (Sweden)

    Kida Tetsuo

    2009-04-01

    Full Text Available Abstract Background To segregate luminance-related, face-related and non-specific components involved in spatio-temporal dynamics of cortical activations to a face stimulus, we recorded cortical responses to face appearance (Onset), disappearance (Offset), and change (Change) using magnetoencephalography. Results Activity in and around the primary visual cortex (V1/V2) showed luminance-dependent behavior. Any of the three events evoked activity in the middle occipital gyrus (MOG) at 150 ms and temporo-parietal junction (TPJ) at 250 ms after the onset of each event. Onset and Change activated the fusiform gyrus (FG), while Offset did not. This FG activation showed a triphasic waveform, consistent with results of intracranial recordings in humans. Conclusion Analysis employed in this study successfully segregated four different elements involved in the spatio-temporal dynamics of cortical activations in response to a face stimulus. The results show the responses of MOG and TPJ to be associated with non-specific processes, such as the detection of abrupt changes or exogenous attention. Activity in FG corresponds to a face-specific response recorded by intracranial studies, and that in V1/V2 is related to a change in luminance.

  19. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
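
    The multivoxel pattern analysis step described here (decoding which tone is maintained from patterns of activity) can be sketched with a cross-validated linear classifier on synthetic data, as below; the voxel counts, labels, and signal strength are assumptions, not values from the study.

```python
# Toy MVPA sketch: cross-validated linear SVM decoding of two maintained
# tones from voxel patterns. Synthetic data only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)          # two maintained tones
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5                  # weak tone-specific signal

scores = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=5)
print("mean decoding accuracy:", scores.mean().round(2))
```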

  20. Post training REMs coincident auditory stimulation enhances memory in humans.

    Science.gov (United States)

    Smith, C; Weeden, K

    1990-06-01

    Sleep activity was monitored in 20 freshman college students for two consecutive nights. Subjects were assigned to 4 equal groups and all were asked to learn a complex logic task before bed on the second night. Two groups of subjects learned the task with a constant clicking noise in the background (cued groups), while two groups simply learned the task (non cued). During the night, one cued and one non cued group were presented with auditory clicks during REM sleep such as to coincide with all REMs of at least 100 microvolts. The second cued group was given auditory clicks during REM sleep, but only during the REMs "quiet" times. The second non-cued control group was never given any nighttime auditory stimulations. The cued REMs coincident group showed a significant 23% improvement in task performance when tested one week later. The non cued REMs coincident group showed only an 8.8% improvement which was not significant. The cued REMs quiet and non-stimulated control groups showed no change in task performance when retested. The results were interpreted as support for the idea that the cued auditory stimulation induced a "recall" of the learned material during the REM sleep state in order for further memory processing to take place.

  1. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks.

    Science.gov (United States)

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention.

  2. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks

    Science.gov (United States)

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention. PMID:25745395

  3. Auditory conflict resolution correlates with medial-lateral frontal theta/alpha phase synchrony.

    Science.gov (United States)

    Huang, Samantha; Rossi, Stephanie; Hämäläinen, Matti; Ahveninen, Jyrki

    2014-01-01

    When multiple persons speak simultaneously, it may be difficult for the listener to direct attention to correct sound objects among conflicting ones. This could occur, for example, in an emergency situation in which one hears conflicting instructions and the loudest, instead of the wisest, voice prevails. Here, we used cortically-constrained oscillatory MEG/EEG estimates to examine how different brain regions, including caudal anterior cingulate (cACC) and dorsolateral prefrontal cortices (DLPFC), work together to resolve these kinds of auditory conflicts. During an auditory flanker interference task, subjects were presented with sound patterns consisting of three different voices, from three different directions (45° left, straight ahead, 45° right), sounding out either the letters "A" or "O". They were asked to discriminate which sound was presented centrally and ignore the flanking distracters that were phonetically either congruent (50%) or incongruent (50%) with the target. Our cortical MEG/EEG oscillatory estimates demonstrated a direct relationship between performance and brain activity, showing that efficient conflict resolution, as measured with reduced conflict-induced RT lags, is predicted by theta/alpha phase coupling between cACC and right lateral frontal cortex regions intersecting the right frontal eye fields (FEF) and DLPFC, as well as by increased pre-stimulus gamma (60-110 Hz) power in the left inferior frontal cortex. Notably, cACC connectivity patterns that correlated with behavioral conflict-resolution measures were found during both the pre-stimulus and the pre-response periods. Our data provide evidence that, instead of being only transiently activated upon conflict detection, cACC is involved in sustained engagement of attentional resources required for effective sound object selection performance.

  4. Auditory Conflict Resolution Correlates with Medial–Lateral Frontal Theta/Alpha Phase Synchrony

    Science.gov (United States)

    Huang, Samantha; Rossi, Stephanie; Hämäläinen, Matti; Ahveninen, Jyrki

    2014-01-01

    When multiple persons speak simultaneously, it may be difficult for the listener to direct attention to correct sound objects among conflicting ones. This could occur, for example, in an emergency situation in which one hears conflicting instructions and the loudest, instead of the wisest, voice prevails. Here, we used cortically-constrained oscillatory MEG/EEG estimates to examine how different brain regions, including caudal anterior cingulate (cACC) and dorsolateral prefrontal cortices (DLPFC), work together to resolve these kinds of auditory conflicts. During an auditory flanker interference task, subjects were presented with sound patterns consisting of three different voices, from three different directions (45° left, straight ahead, 45° right), sounding out either the letters “A” or “O”. They were asked to discriminate which sound was presented centrally and ignore the flanking distracters that were phonetically either congruent (50%) or incongruent (50%) with the target. Our cortical MEG/EEG oscillatory estimates demonstrated a direct relationship between performance and brain activity, showing that efficient conflict resolution, as measured with reduced conflict-induced RT lags, is predicted by theta/alpha phase coupling between cACC and right lateral frontal cortex regions intersecting the right frontal eye fields (FEF) and DLPFC, as well as by increased pre-stimulus gamma (60–110 Hz) power in the left inferior frontal cortex. Notably, cACC connectivity patterns that correlated with behavioral conflict-resolution measures were found during both the pre-stimulus and the pre-response periods. Our data provide evidence that, instead of being only transiently activated upon conflict detection, cACC is involved in sustained engagement of attentional resources required for effective sound object selection performance. PMID:25343503

  5. Auditory conflict resolution correlates with medial-lateral frontal theta/alpha phase synchrony.

    Directory of Open Access Journals (Sweden)

    Samantha Huang

    Full Text Available When multiple persons speak simultaneously, it may be difficult for the listener to direct attention to correct sound objects among conflicting ones. This could occur, for example, in an emergency situation in which one hears conflicting instructions and the loudest, instead of the wisest, voice prevails. Here, we used cortically-constrained oscillatory MEG/EEG estimates to examine how different brain regions, including caudal anterior cingulate (cACC) and dorsolateral prefrontal cortices (DLPFC), work together to resolve these kinds of auditory conflicts. During an auditory flanker interference task, subjects were presented with sound patterns consisting of three different voices, from three different directions (45° left, straight ahead, 45° right), sounding out either the letters "A" or "O". They were asked to discriminate which sound was presented centrally and ignore the flanking distracters that were phonetically either congruent (50%) or incongruent (50%) with the target. Our cortical MEG/EEG oscillatory estimates demonstrated a direct relationship between performance and brain activity, showing that efficient conflict resolution, as measured with reduced conflict-induced RT lags, is predicted by theta/alpha phase coupling between cACC and right lateral frontal cortex regions intersecting the right frontal eye fields (FEF) and DLPFC, as well as by increased pre-stimulus gamma (60-110 Hz) power in the left inferior frontal cortex. Notably, cACC connectivity patterns that correlated with behavioral conflict-resolution measures were found during both the pre-stimulus and the pre-response periods. Our data provide evidence that, instead of being only transiently activated upon conflict detection, cACC is involved in sustained engagement of attentional resources required for effective sound object selection performance.

  6. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  7. Diffusion tensor imaging and MR morphometry of the central auditory pathway and auditory cortex in aging.

    Science.gov (United States)

    Profant, O; Škoch, A; Balogová, Z; Tintěra, J; Hlinka, J; Syka, J

    2014-02-28

    Age-related hearing loss (presbycusis) is caused mainly by the hypofunction of the inner ear, but recent findings point also toward a central component of presbycusis. We used MR morphometry and diffusion tensor imaging (DTI) with a 3T MR system with the aim to study the state of the central auditory system in a group of elderly subjects (>65years) with mild presbycusis, in a group of elderly subjects with expressed presbycusis and in young controls. Cortical reconstruction, volumetric segmentation and auditory pathway tractography were performed. Three parameters were evaluated by morphometry: the volume of the gray matter, the surface area of the gyrus and the thickness of the cortex. In all experimental groups the surface area and gray matter volume were larger on the left side in Heschl's gyrus and planum temporale and slightly larger in the gyrus frontalis superior, whereas they were larger on the right side in the primary visual cortex. Almost all of the measured parameters were significantly smaller in the elderly subjects in Heschl's gyrus, planum temporale and gyrus frontalis superior. Aging did not change the side asymmetry (laterality) of the gyri. In the central part of the auditory pathway above the inferior colliculus, a trend toward an effect of aging was present in the axial vector of the diffusion (L1) variable of DTI, with increased values observed in elderly subjects. A trend toward a decrease of L1 on the left side, which was more pronounced in the elderly groups, was observed. The effect of hearing loss was present in subjects with expressed presbycusis as a trend toward an increase of the radial vectors (L2L3) in the white matter under Heschl's gyrus. These results suggest that in addition to peripheral changes, changes in the central part of the auditory system in elderly subjects are also present; however, the extent of hearing loss does not play a significant role in the central changes. Copyright © 2013 IBRO. Published by Elsevier Ltd
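
    For reference, the DTI quantities discussed here, the axial vector of diffusion (L1) and the radial component (L2L3), together with mean diffusivity and fractional anisotropy, follow directly from the tensor eigenvalues; the sketch below uses illustrative white-matter values, not data from the study.

```python
# Standard DTI scalars from tensor eigenvalues (illustrative values only).
import numpy as np

def dti_metrics(l1, l2, l3):
    ad = l1                                   # axial diffusivity (L1)
    rd = (l2 + l3) / 2.0                      # radial diffusivity (L2L3)
    md = (l1 + l2 + l3) / 3.0                 # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return ad, rd, md, fa

print(dti_metrics(1.7e-3, 0.4e-3, 0.3e-3))    # typical white matter, mm^2/s
```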

  8. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    Science.gov (United States)

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectible until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks places a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency were obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative for estimating frequency selectivity in mice that is well suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
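
    The logic of the notched-noise method can be sketched with the standard power-spectrum model: a rounded-exponential (roex) auditory filter is integrated over the flanking noise bands, and the masked threshold tracks the noise power the filter passes, so thresholds fall as the notch widens. The filter slope and center frequency below are assumptions for illustration, not fitted values from the study.

```python
# Power-spectrum model sketch behind the notched-noise method (illustrative).
# roex filter: W(g) = (1 + p*g) * exp(-p*g), g = normalized frequency offset.
import numpy as np

def roex(g, p):
    return (1 + p * g) * np.exp(-p * g)

def noise_passed(notch_g, p, upper_g=0.8, n=2000):
    """Relative flanking-noise power passing the filter (both sides of the notch)."""
    g = np.linspace(notch_g, upper_g, n)
    return 2 * np.trapz(roex(g, p), g)

p, fc = 25.0, 8000.0                      # assumed slope and center frequency
print("ERB estimate:", 4 * fc / p, "Hz")  # ERB of a symmetric roex(p) filter
for notch in (0.0, 0.1, 0.2, 0.3):
    shift = 10 * np.log10(noise_passed(notch, p) / noise_passed(0.0, p))
    print(f"notch g = {notch:.1f}: predicted threshold change {shift:+.1f} dB")
```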

  9. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    in a diagnostic rhyme test. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated. The front-end was fitted to individual listeners with cochlear hearing loss according to non-speech data, and speech data were obtained in the same listeners ... A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal-processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing...

  10. Connections underlying the synthesis of cognition, memory, and emotion in primate prefrontal cortices.

    Science.gov (United States)

    Barbas, H

    2000-07-15

    Distinct domains of the prefrontal cortex in primates have a set of connections suggesting that they have different roles in cognition, memory, and emotion. Caudal lateral prefrontal areas (areas 8 and 46) receive projections from cortices representing early stages in visual or auditory processing, and from intraparietal and posterior cingulate areas associated with oculomotor guidance and attentional processes. Cortical input to areas 46 and 8 is complemented by projections from the thalamic multiform and parvicellular sectors of the mediodorsal nucleus associated with oculomotor functions and working memory. In contrast, caudal orbitofrontal areas receive diverse input from cortices representing late stages of processing within every unimodal sensory cortical system. In addition, orbitofrontal and caudal medial (limbic) prefrontal cortices receive robust projections from the amygdala, associated with emotional memory, and from medial temporal and thalamic structures associated with long-term memory. Prefrontal cortices are linked with motor control structures related to their specific roles in central executive functions. Caudal lateral prefrontal areas project to brainstem oculomotor structures, and are connected with premotor cortices effecting head, limb and body movements. In contrast, medial prefrontal and orbitofrontal limbic cortices project to hypothalamic visceromotor centers for the expression of emotions. Lateral, orbitofrontal, and medial prefrontal cortices are robustly interconnected, suggesting that they participate in concert in central executive functions. Prefrontal limbic cortices issue widespread projections through their deep layers and terminate in the upper layers of lateral (eulaminate) cortices, suggesting a predominant role in feedback communication. In contrast, when lateral prefrontal cortices communicate with limbic areas they issue projections from their upper layers and their axons terminate in the deep layers, suggesting a role in feedforward communication.

  11. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    Directory of Open Access Journals (Sweden)

    Buddhamas eKriengwatana

    2015-08-01

    Full Text Available Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique to humans, is still not fully understood. In this study we compare the ability of humans and zebra finches to categorize vowels despite speaker variation in speech, in order to test the hypothesis that accommodating speaker and gender differences in isolated vowels can be achieved without prior experience with speaker-related variability. Using a behavioural Go/No-go task and identical stimuli, we compared Australian English adults' (naïve to Dutch) and zebra finches' (naïve to human speech) ability to categorize /ɪ/ and /ɛ/ vowels of a novel Dutch speaker after learning to discriminate those vowels from only one other speaker. Experiments 1 and 2 presented vowels of two speakers interspersed or blocked, respectively. Results demonstrate that categorization of vowels is possible without prior exposure to speaker-related variability in speech for zebra finches, and in non-native vowel categories for humans. Therefore, this study is the first to provide evidence for what might be a species-shared auditory bias that may supersede speaker-related information during vowel categorization. It additionally provides behavioural evidence contradicting a prior hypothesis that accommodation of speaker differences is achieved via the use of formant ratios. Therefore, investigations of alternative accounts of vowel normalization that incorporate the possibility of an auditory bias for disregarding inter-speaker variability are warranted.

  12. Two-Photon Functional Imaging of the Auditory Cortex in Behaving Mice: From Neural Networks to Single Spines

    Directory of Open Access Journals (Sweden)

    Ruijie Li

    2018-04-01

    Full Text Available In vivo two-photon Ca2+ imaging is a powerful tool for recording neuronal activities during perceptual tasks and has been increasingly applied to behaving animals for acute or chronic experiments. However, the auditory cortex is not easily accessible to imaging because of the abundant temporal muscles, the arteries around the ears, and its lateral location. Here, we report a protocol for two-photon Ca2+ imaging in the auditory cortex of head-fixed behaving mice. By using a custom-made head fixation apparatus and a head-rotated fixation procedure, we achieved two-photon imaging, in combination with targeted cell-attached recordings, of auditory cortical neurons in behaving mice. Using synthetic Ca2+ indicators, we recorded Ca2+ transients at multiple scales, including neuronal populations, single neurons, dendrites and single spines, in the auditory cortex during behavior. Furthermore, using genetically encoded Ca2+ indicators (GECIs), we monitored neuronal dynamics over days throughout the process of associative learning. Therefore, we achieved two-photon functional imaging at multiple scales in the auditory cortex of behaving mice, which extends the tool box for investigating the neural basis of audition-related behaviors.
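
    Although the protocol itself is surgical and optical, the basic quantity typically extracted from such recordings is the relative fluorescence change dF/F0; a minimal sketch with an invented trace and a percentile baseline is shown below, purely as an assumption-laden illustration rather than part of the published protocol.

```python
# Toy dF/F0 computation for a Ca2+ fluorescence trace (illustration only).
import numpy as np

def dff(trace, baseline_pct=20):
    f0 = np.percentile(trace, baseline_pct)   # baseline fluorescence estimate
    return (trace - f0) / f0

rng = np.random.default_rng(5)
f = 100 + rng.normal(0, 2, 1000)
f[300:320] += 40                              # a transient riding on baseline
print(round(float(dff(f).max()), 2))
```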

  13. A Model of Representational Spaces in Human Cortex.

    Science.gov (United States)

    Guntupalli, J Swaroop; Hanke, Michael; Halchenko, Yaroslav O; Connolly, Andrew C; Ramadge, Peter J; Haxby, James V

    2016-06-01

    Current models of the functional architecture of human cortex emphasize areas that capture coarse-scale features of cortical topography but provide no account for population responses that encode information in fine-scale patterns of activity. Here, we present a linear model of shared representational spaces in human cortex that captures fine-scale distinctions among population responses with response-tuning basis functions that are common across brains and models cortical patterns of neural responses with individual-specific topographic basis functions. We derive a common model space for the whole cortex using a new algorithm, searchlight hyperalignment, and complex, dynamic stimuli that provide a broad sampling of visual, auditory, and social percepts. The model aligns representations across brains in occipital, temporal, parietal, and prefrontal cortices, as shown by between-subject multivariate pattern classification and intersubject correlation of representational geometry, indicating that structural principles for shared neural representations apply across widely divergent domains of information. The model provides a rigorous account for individual variability of well-known coarse-scale topographies, such as retinotopy and category selectivity, and goes further to account for fine-scale patterns that are multiplexed with coarse-scale topographies and carry finer distinctions. © The Author 2016. Published by Oxford University Press.
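
    The core alignment step in hyperalignment can be illustrated with an orthogonal Procrustes rotation mapping one subject's response matrix into another's space; the searchlight and iterative common-space machinery of the published algorithm are omitted, and all data below are synthetic.

```python
# Toy Procrustes alignment of two subjects' (time x feature) response matrices.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(6)
shared = rng.standard_normal((200, 30))              # latent shared responses
R_true, _ = np.linalg.qr(rng.standard_normal((30, 30)))
subj_a = shared
subj_b = shared @ R_true + 0.05 * rng.standard_normal((200, 30))

R, _ = orthogonal_procrustes(subj_a, subj_b)         # rotation aligning a to b
aligned = subj_a @ R
print("correlation after alignment:",
      round(float(np.corrcoef(aligned.ravel(), subj_b.ravel())[0, 1]), 3))
```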

  14. Auditory memory can be object based.

    Science.gov (United States)

    Dyson, Benjamin J; Ishfaq, Feraz

    2008-04-01

    Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.

  15. Model cortical association fields account for the time course and dependence on target complexity of human contour perception.

    Directory of Open Access Journals (Sweden)

    Vadas Gintautas

    2011-10-01

    Full Text Available Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least [Formula: see text] ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
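
    The time-course fits mentioned here (accuracy rising with presentation time, with time constants of 30-91 ms) can be sketched by fitting a saturating-exponential psychometric function to accuracy data; the data points and starting values below are invented for illustration and are not the study's measurements.

```python
# Fit a saturating-exponential psychometric function to toy 2AFC accuracies.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(t, asymptote, tau):
    """Accuracy rising from chance (0.5) with time constant tau (ms)."""
    return 0.5 + (asymptote - 0.5) * (1 - np.exp(-t / tau))

t_ms = np.array([20, 40, 60, 80, 120, 160, 200], dtype=float)
acc = np.array([0.55, 0.66, 0.74, 0.80, 0.86, 0.88, 0.89])   # toy data
(asym, tau), _ = curve_fit(psychometric, t_ms, acc, p0=(0.9, 50.0))
print(f"asymptote = {asym:.2f}, time constant = {tau:.1f} ms")
```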

  16. Vision first? The development of primary visual cortical networks is more rapid than the development of primary motor networks in humans.

    Directory of Open Access Journals (Sweden)

    Patricia Gervan

    Full Text Available The development of cortical functions and the capacity of the mature brain to learn are largely determined by the establishment and maintenance of neocortical networks. Here we address the human development of long-range connectivity in primary visual and motor cortices, using well-established behavioral measures--a Contour Integration test and a Finger-tapping task--that have been shown to be related to these specific primary areas, and the long-range neural connectivity within those. Possible confounding factors, such as different task requirements (complexity, cognitive load), are eliminated by using these tasks in a learning paradigm. We find that there is a temporal lag between the developmental timing of primary sensory vs. motor areas with an advantage of visual development; we also confirm that human development is very slow in both cases, and that there is a retained capacity for practice induced plastic changes in adults. This pattern of results seems to point to human-specific development of the "canonical circuits" of primary sensory and motor cortices, probably reflecting the ecological requirements of human life.

  17. The effect of binaural beats on verbal working memory and cortical connectivity.

    Science.gov (United States)

    Beauchene, Christine; Abaid, Nicole; Moran, Rosalyn; Diana, Rachel A; Leonessa, Alexander

    2017-04-01

    Synchronization in activated regions of cortical networks affect the brain's frequency response, which has been associated with a wide range of states and abilities, including memory. A non-invasive method for manipulating cortical synchronization is binaural beats. Binaural beats take advantage of the brain's response to two pure tones, delivered independently to each ear, when those tones have a small frequency mismatch. The mismatch between the tones is interpreted as a beat frequency, which may act to synchronize cortical oscillations. Neural synchrony is particularly important for working memory processes, the system controlling online organization and retention of information for successful goal-directed behavior. Therefore, manipulation of synchrony via binaural beats provides a unique window into working memory and associated connectivity of cortical networks. In this study, we examined the effects of different acoustic stimulation conditions during an N-back working memory task, and we measured participant response accuracy and cortical network topology via EEG recordings. Six acoustic stimulation conditions were used: None, Pure Tone, Classical Music, 5 Hz binaural beats, 10 Hz binaural beats, and 15 Hz binaural beats. We determined that listening to 15 Hz binaural beats during an N-Back working memory task increased the individual participant's accuracy, modulated the cortical frequency response, and changed the cortical network connection strengths during the task. Only the 15 Hz binaural beats produced significant change in relative accuracy compared to the None condition. Listening to 15 Hz binaural beats during the N-back task activated salient frequency bands and produced networks characterized by higher information transfer as compared to other auditory stimulation conditions.
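
    The stimulus construction described here, two pure tones with a small frequency mismatch delivered one per ear, is simple to sketch; the 400 Hz carrier below is an assumption for illustration, with a 15 Hz offset matching the condition that improved accuracy in this study.

```python
# Generate a stereo binaural-beat stimulus: left/right tones differing by the
# beat frequency. Carrier frequency is assumed; write to disk if desired.
import numpy as np

fs, dur = 44100, 5.0
t = np.arange(int(fs * dur)) / fs
carrier, beat = 400.0, 15.0
left = np.sin(2 * np.pi * carrier * t)
right = np.sin(2 * np.pi * (carrier + beat) * t)     # mismatch = beat frequency
stereo = np.stack([left, right], axis=1).astype(np.float32)
print(stereo.shape)    # (samples, 2)
```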

  18. Auditory cortex involvement in emotional learning and memory.

    Science.gov (United States)

    Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B

    2015-07-23

    Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. Inactivation of Primate Prefrontal Cortex Impairs Auditory and Audiovisual Working Memory.

    Science.gov (United States)

    Plakke, Bethany; Hwang, Jaewon; Romanski, Lizabeth M

    2015-07-01

    The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that dorsolateral prefrontal cortex is essential in spatial working memory while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates have shown that ventral prefrontal (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential in remembering face and voice information is unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC, with reversible cortical cooling, and examined performance when faces, vocalizations or both faces and vocalizations had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or just the vocalization alone. Our work highlights the importance of the primate VLPFC in the processing of faces and vocalizations in a manner that

  20. From Hearing Sounds to Recognizing Phonemes: Primary Auditory Cortex is A Truly Perceptual Language Area

    Directory of Open Access Journals (Sweden)

    Byron Bernal

    2016-11-01

    Full Text Available The aim of this article is to present a systematic review of the anatomy, function, connectivity, and functional activation of the primary auditory cortex (PAC; Brodmann areas 41/42) when involved in language paradigms. The PAC activates with a plethora of diverse basic stimuli including, but not limited to, tones, chords, natural sounds, consonants, and speech. Nonetheless, the PAC shows specific sensitivity to speech. Damage to the PAC is associated with so-called “pure word-deafness” (“auditory verbal agnosia”). BA41, and to a lesser extent BA42, are involved in early stages of phonological processing (phoneme recognition). Phonological processing may take place in either the right or left side, but customarily the left exerts an inhibitory tone over the right, gaining dominance in function. BA41/42 are primary auditory cortices harboring complex phoneme perception functions with asymmetrical expression, making it possible to include them as core language processing areas (Wernicke’s area).

  1. Developmental changes in human dopamine neurotransmission: cortical receptors and terminators

    Directory of Open Access Journals (Sweden)

    Rothmond Debora A

    2012-02-01

    Full Text Available Background: Dopamine is integral to cognition, learning and memory, and dysfunctions of the frontal cortical dopamine system have been implicated in several developmental neuropsychiatric disorders. The dorsolateral prefrontal cortex (DLPFC) is critical for working memory, which does not fully mature until the third decade of life. Few studies have reported on the normal development of the dopamine system in human DLPFC during postnatal life. We assessed pre- and postsynaptic components of the dopamine system including tyrosine hydroxylase, the dopamine receptors (D1, D2 short and D2 long isoforms, D4, D5), catechol-O-methyltransferase, and monoamine oxidase (A and B) in the developing human DLPFC (6 weeks to 50 years). Results: Gene expression was first analysed by microarray and then by quantitative real-time PCR. Protein expression was analysed by western blot. Protein levels for tyrosine hydroxylase peaked during the first year of life. mRNA levels of catechol-O-methyltransferase (p = 0.024) were significantly higher in neonates and infants, as was catechol-O-methyltransferase protein (32 kDa, p = 0.027). In contrast, dopamine D1 receptor mRNA correlated positively with age (p = 0.002) and dopamine D1 receptor protein expression increased throughout development. Conclusions: We find distinct developmental changes in key components of the dopamine system in DLPFC over postnatal life. Those genes that are highly expressed during the first year of postnatal life may influence and orchestrate the early development of cortical neural circuitry, while genes portraying a pattern of increasing expression with age may indicate a role in DLPFC maturation and attainment of adult levels of cognitive function.
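
    The positive association between dopamine D1 receptor mRNA and age reported above is the kind of relationship a simple correlation test captures. The sketch below is illustrative only: the values, the variable name drd1_mrna, and the sample layout are assumptions, not the study's data or analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical expression values across postnatal ages (6 weeks to 50 years);
# these numbers are synthetic and only mimic an increasing trend with age.
age_years = np.array([0.12, 0.5, 1, 2, 5, 10, 15, 20, 25, 30, 40, 50], dtype=float)
drd1_mrna = 1.0 + 0.02 * age_years + rng.normal(0, 0.1, age_years.size)

# Pearson correlation of expression with age, as in an age-association test.
r, p = stats.pearsonr(age_years, drd1_mrna)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```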

  2. Oscillatory decoupling differentiates auditory encoding deficits in children with listening problems.

    Science.gov (United States)

    Gilley, Phillip M; Sharma, Mridula; Purdy, Suzanne C

    2016-02-01

    We sought to examine whether oscillatory EEG responses to a speech stimulus in both quiet and noise were different in children with listening problems than in children with normal hearing. We employed a high-resolution spectral-temporal analysis of the cortical auditory evoked potential in response to a 150 ms speech sound /da/ in quiet and at 3 dB SNR in 21 typically developing children (mean age = 10.7 years, standard deviation = 1.7) and 44 children with reported listening problems (LP) in the absence of hearing loss (mean age = 10.3 years, standard deviation = 1.6). Children with LP were assessed for auditory processing disorder (APD): 24 children had APD and 20 did not. Peak latencies, magnitudes, and frequencies were compared between these groups. Children with LP showed frequency shifts in the theta and alpha bands. These oscillatory differences may help to differentiate the auditory encoding deficits that underlie listening problems in this population of children. Published by Elsevier Ireland Ltd.
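
    The "high-resolution spectral-temporal analysis" of an evoked potential can be illustrated with a Morlet wavelet decomposition of a single epoch. The sketch below is only a schematic under stated assumptions: the sampling rate, epoch window, wavelet parameters, and the synthetic "evoked response" are invented for illustration and do not reproduce the study's recordings or pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

fs = 1000.0                        # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.8, 1 / fs)   # epoch around the /da/ onset (assumed window)
# Toy evoked response: a damped theta-band (6 Hz) oscillation after onset, plus noise.
x = np.where(t > 0, np.exp(-t / 0.25) * np.sin(2 * np.pi * 6 * t), 0.0)
x += 0.05 * rng.standard_normal(t.size)

def morlet_power(sig, fs, freqs, n_cycles=5.0):
    """Time-frequency power from convolution with complex Morlet wavelets."""
    power = np.empty((len(freqs), sig.size))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)              # Gaussian width in seconds
        tw = np.arange(-4 * sigma, 4 * sigma, 1 / fs)   # wavelet support
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw ** 2 / (2 * sigma ** 2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy normalization
        power[i] = np.abs(fftconvolve(sig, wavelet, mode="same")) ** 2
    return power

freqs = np.arange(4.0, 31.0)        # 4-30 Hz spans the theta and alpha bands
tfr = morlet_power(x, fs, freqs)

# Peak frequency and latency within the theta band (4-8 Hz), the kind of measure
# that would be compared between LP and typically developing groups.
theta = (freqs >= 4) & (freqs <= 8)
i_f, i_t = np.unravel_index(np.argmax(tfr[theta]), tfr[theta].shape)
print(f"theta peak: {freqs[theta][i_f]:.1f} Hz at {t[i_t] * 1e3:.0f} ms")
```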

  3. Hierarchical differences in population coding within auditory cortex.

    Science.gov (United States)

    Downer, Joshua D; Niwa, Mamiko; Sutter, Mitchell L

    2017-08-01

    Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (r_noise) between simultaneously recorded neurons and found that whereas engagement decreased average r_noise in A1, engagement increased average r_noise in ML. This finding surprised us, because attentive states are commonly reported to decrease average r_noise. We analyzed the effect of r_noise on AM coding in both A1 and ML and found that whereas engagement-related shifts in r_noise in A1 enhance AM coding, r_noise shifts in ML have little effect. These results imply that the effect of r_noise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing r_noise. Therefore, the hierarchical emergence of r_noise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures. An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their
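
    The noise correlation (r_noise) measured above is, in essence, the Pearson correlation of trial-to-trial spike-count fluctuations of two simultaneously recorded neurons around their mean responses to the same stimulus. The sketch below uses simulated spike counts with an injected common noise source; the counts, rates, and noise levels are assumptions for illustration, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spike counts for two neurons over repeated presentations of the same
# AM stimulus: a shared trial-to-trial fluctuation plus private noise per neuron.
n_trials = 200
common = rng.normal(0.0, 1.0, n_trials)
counts_a = np.clip(np.round(20 + 3 * common + rng.normal(0, 2, n_trials)), 0, None)
counts_b = np.clip(np.round(15 + 3 * common + rng.normal(0, 2, n_trials)), 0, None)

def noise_correlation(a, b):
    """Pearson correlation of trial-to-trial fluctuations around the mean response."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

print(f"r_noise = {noise_correlation(counts_a, counts_b):.2f}")
```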

  4. Auditory Peripheral Processing of Degraded Speech

    National Research Council Canada - National Science Library

    Ghitza, Oded

    2003-01-01

    ...". The underlying thesis is that the auditory periphery contributes to the robust performance of humans in speech reception in noise through a concerted contribution of the efferent feedback system...

  5. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Impaired pitch perception and memory in congenital amusia: the deficit starts in the auditory cortex.

    Science.gov (United States)

    Albouy, Philippe; Mattout, Jérémie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaëtan; Aguera, Pierre-Emmanuel; Daligault, Sébastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara

    2013-05-01

    Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a melodic contour task and an easier transposition task); they had to indicate whether sequences of six tones (presented in pairs) were the same or different. Behavioural data indicated that in comparison with control participants, amusics' short-term memory was impaired for the melodic contour task, but not for the transposition task. The major finding was that pitch processing and short-term memory deficits can be traced down to amusics' early brain responses during encoding of the melodic information. Temporal and frontal generators of the N100m evoked by each note of the melody were abnormally recruited in the amusic brain. Dynamic causal modelling of the N100m further revealed decreased intrinsic connectivity in both auditory cortices, increased lateral connectivity between auditory cortices as well as a decreased right fronto-temporal backward connectivity in amusics relative to control subjects. Abnormal functioning of this fronto-temporal network was also shown during the retention interval and the retrieval of melodic information. In particular, induced gamma oscillations in right frontal areas were decreased in amusics during the retention interval. Using voxel-based morphometry, we confirmed morphological brain anomalies in terms of white and grey matter concentration in the right inferior frontal gyrus and the right superior temporal gyrus in the amusic brain. The convergence between functional and structural brain differences strengthens the hypothesis of abnormalities in the fronto-temporal pathway of the amusic brain. Our data provide first evidence of altered functioning of the auditory cortices during pitch

  7. Early continuous white noise exposure alters auditory spatial sensitivity and expression of GAD65 and GABAA receptor subunits in rat auditory cortex.

    Science.gov (United States)

    Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde

    2010-04-01

    Sensory experiences have important roles in the functional development of the mammalian auditory cortex. Here, we show how early continuous noise rearing influences spatial sensitivity in the rat primary auditory cortex (A1) and its underlying mechanisms. By rearing infant rat pups under conditions of continuous, moderate-level white noise, we found that noise rearing markedly attenuated the spatial sensitivity of A1 neurons. Compared with rats reared under normal conditions, spike counts of A1 neurons were more poorly modulated by changes in stimulus location, and their preferred locations were distributed over a larger area. We further show that early continuous noise rearing induced significant decreases in glutamic acid decarboxylase 65 and gamma-aminobutyric acid (GABA)(A) receptor alpha1 subunit expression, and an increase in GABA(A) receptor alpha3 expression, which indicates a return to the juvenile form of the GABA(A) receptor, with no effect on the expression of N-methyl-D-aspartate receptors. These observations indicate that noise rearing has powerful adverse effects on the maturation of cortical GABAergic inhibition, which might be responsible for the reduced spatial sensitivity.

  8. Assessment of cortical dysfunction in human strabismic amblyopia using magnetoencephalography (MEG)

    International Nuclear Information System (INIS)

    Anderson, S.J.; Holliday, I.E.; Harding, G.F.A.

    1999-01-01

    The aim of this study was to use the technique of magnetoencephalography (MEG) to determine the effects of strabismic amblyopia on the processing of spatial information within the occipital cortex of humans. We recorded evoked magnetic responses to the onset of a chromatic (red/green) sinusoidal grating of periodicity 0.5-4.0 c/deg using a 19-channel SQUID-based neuromagnetometer. Evoked responses were recorded monocularly in six amblyopes and six normally-sighted controls, the stimuli being positioned near the fovea in the lower right visual field of each observer. For comparison, the spatial contrast sensitivity function (CSF) for the detection of chromatic gratings was measured for one amblyope and one control using a two-alternative forced-choice psychophysical procedure. We chose red/green sinusoids as our stimuli because they evoke strong magnetic responses from the occipital cortex in adult humans (Fylan, Holliday, Singh, Anderson and Harding, 1997, Neuroimage, 6, 47-57). Magnetic field strength was plotted as a function of stimulus spatial frequency for each eye of each subject. Interocular differences were only evident within the amblyopic group: for stimuli of 1-2 c/deg, the evoked responses had significantly longer latencies and reduced amplitudes through the amblyopic eye (P<0.05). Importantly, the extent of the deficit was uncorrelated with either Snellen acuity or contrast sensitivity. Localization of the evoked responses was performed using a single equivalent current dipole model. Source localizations, for both normal and amblyopic subjects, were consistent with neural activity at the occipital pole near the V1/V2 border. We conclude that MEG is sensitive to the deficit in cortical processing associated with human amblyopia, and can be used to make quantitative neurophysiological measurements. The nature of the cortical deficit is discussed. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  9. Exploratory study of once-daily transcranial direct current stimulation (tDCS) as a treatment for auditory hallucinations in schizophrenia.

    Science.gov (United States)

    Fröhlich, F; Burrello, T N; Mellin, J M; Cordle, A L; Lustenberger, C M; Gilmore, J H; Jarskog, L F

    2016-03-01

    Auditory hallucinations are resistant to pharmacotherapy in about 25% of adults with schizophrenia. Treatment with noninvasive brain stimulation would provide a welcomed additional tool for the clinical management of auditory hallucinations. A recent study found a significant reduction in auditory hallucinations in people with schizophrenia after five days of twice-daily transcranial direct current stimulation (tDCS) that simultaneously targeted left dorsolateral prefrontal cortex and left temporo-parietal cortex. We hypothesized that once-daily tDCS with stimulation electrodes over left frontal and temporo-parietal areas reduces auditory hallucinations in patients with schizophrenia. We performed a randomized, double-blind, sham-controlled study that evaluated five days of daily tDCS of the same cortical targets in 26 outpatients with schizophrenia and schizoaffective disorder with auditory hallucinations. We found a significant reduction in auditory hallucinations measured by the Auditory Hallucination Rating Scale (F(2,50) = 12.22), but this improvement was not specific to active tDCS; the pronounced response in the sham-treated group in this study contrasts with the previous finding and demonstrates the need for further optimization and evaluation of noninvasive brain stimulation strategies. In particular, higher cumulative doses and higher treatment frequencies of tDCS together with strategies to reduce placebo responses should be investigated. Additionally, consideration of more targeted stimulation to engage specific deficits in temporal organization of brain activity in patients with auditory hallucinations may be warranted. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  10. Histone Deacetylase Inhibition via RGFP966 Releases the Brakes on Sensory Cortical Plasticity and the Specificity of Memory Formation.

    Science.gov (United States)

    Bieszczad, Kasia M; Bechay, Kiro; Rusche, James R; Jacques, Vincent; Kudugunti, Shashi; Miao, Wenyan; Weinberger, Norman M; McGaugh, James L; Wood, Marcelo A

    2015-09-23

    Research over the past decade indicates a novel role for epigenetic mechanisms in memory formation. Of particular interest is chromatin modification by histone deacetylases (HDACs), which, in general, negatively regulate transcription. HDAC deletion or inhibition facilitates transcription during memory consolidation and enhances long-lasting forms of synaptic plasticity and long-term memory. A key open question remains: How does blocking HDAC activity lead to memory enhancements? To address this question, we tested whether a normal function of HDACs is to gate information processing during memory formation. We used a class I HDAC inhibitor, RGFP966 (C21H19FN4O), to test the role of HDAC inhibition for information processing in an auditory memory model of learning-induced cortical plasticity. HDAC inhibition may act beyond memory enhancement per se to instead regulate information in ways that lead to encoding more vivid sensory details into memory. Indeed, we found that RGFP966 controls memory induction for acoustic details of sound-to-reward learning. Rats treated with RGFP966 while learning to associate sound with reward had stronger memory and additional information encoded into memory for highly specific features of sounds associated with reward. Moreover, behavioral effects occurred with unusually specific plasticity in primary auditory cortex (A1). Class I HDAC inhibition appears to engage A1 plasticity that enables additional acoustic features to become encoded in memory. Thus, epigenetic mechanisms act to regulate sensory cortical plasticity, which offers an information processing mechanism for gating what and how much is encoded to produce exceptionally persistent and vivid memories. Significance statement: Here we provide evidence of an epigenetic mechanism for information processing. The study reveals that a class I HDAC inhibitor (Malvaez et al., 2013; Rumbaugh et al., 2015; RGFP966, chemical formula C21H19FN4O) alters the formation of auditory memory by

  11. Histone Deacetylase Inhibition via RGFP966 Releases the Brakes on Sensory Cortical Plasticity and the Specificity of Memory Formation

    Science.gov (United States)

    Bechay, Kiro; Rusche, James R.; Jacques, Vincent; Kudugunti, Shashi; Miao, Wenyan; Weinberger, Norman M.; McGaugh, James L.

    2015-01-01

    Research over the past decade indicates a novel role for epigenetic mechanisms in memory formation. Of particular interest is chromatin modification by histone deacetylases (HDACs), which, in general, negatively regulate transcription. HDAC deletion or inhibition facilitates transcription during memory consolidation and enhances long-lasting forms of synaptic plasticity and long-term memory. A key open question remains: How does blocking HDAC activity lead to memory enhancements? To address this question, we tested whether a normal function of HDACs is to gate information processing during memory formation. We used a class I HDAC inhibitor, RGFP966 (C21H19FN4O), to test the role of HDAC inhibition for information processing in an auditory memory model of learning-induced cortical plasticity. HDAC inhibition may act beyond memory enhancement per se to instead regulate information in ways that lead to encoding more vivid sensory details into memory. Indeed, we found that RGFP966 controls memory induction for acoustic details of sound-to-reward learning. Rats treated with RGFP966 while learning to associate sound with reward had stronger memory and additional information encoded into memory for highly specific features of sounds associated with reward. Moreover, behavioral effects occurred with unusually specific plasticity in primary auditory cortex (A1). Class I HDAC inhibition appears to engage A1 plasticity that enables additional acoustic features to become encoded in memory. Thus, epigenetic mechanisms act to regulate sensory cortical plasticity, which offers an information processing mechanism for gating what and how much is encoded to produce exceptionally persistent and vivid memories. SIGNIFICANCE STATEMENT Here we provide evidence of an epigenetic mechanism for information processing. The study reveals that a class I HDAC inhibitor (Malvaez et al., 2013; Rumbaugh et al., 2015; RGFP966, chemical formula C21H19FN4O) alters the formation of auditory memory by

  12. Multiple time scales of adaptation in auditory cortex neurons.

    Science.gov (United States)

    Ulanovsky, Nachum; Las, Liora; Farkas, Dina; Nelken, Israel

    2004-11-17

    Neurons in primary auditory cortex (A1) of cats show strong stimulus-specific adaptation (SSA). In probabilistic settings, in which one stimulus is common and another is rare, responses to common sounds adapt more strongly than responses to rare sounds. This SSA could be a correlate of auditory sensory memory at the level of single A1 neurons. Here we studied adaptation in A1 neurons, using three different probabilistic designs. We showed that SSA has several time scales concurrently, spanning many orders of magnitude, from hundreds of milliseconds to tens of seconds. Similar time scales are known for the auditory memory span of humans, as measured both psychophysically and using evoked potentials. A simple model, with linear dependence on both short-term and long-term stimulus history, provided a good fit to A1 responses. Auditory thalamus neurons did not show SSA, and their responses were poorly fitted by the same model. In addition, SSA increased the proportion of failures in the responses of A1 neurons to the adapting stimulus. Finally, SSA caused a bias in the neuronal responses to unbiased stimuli, enhancing the responses to eccentric stimuli. Therefore, we propose that a major function of SSA in A1 neurons is to encode auditory sensory memory on multiple time scales. This SSA might play a role in stream segregation and in binding of auditory objects over many time scales, a property that is crucial for processing of natural auditory scenes in cats and of speech and music in humans.
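
    The "simple model, with linear dependence on both short-term and long-term stimulus history" can be sketched as a response that is reduced in proportion to leaky (exponentially weighted) averages of each tone's own recent presentations on a fast and a slow timescale. The oddball probabilities, time constants, and weights below are hypothetical illustration values, not the fitted parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Oddball sequence: tone A is common (p = 0.9), tone B is rare (p = 0.1).
n = 2000
is_common = rng.random(n) < 0.9

tau_fast, tau_slow = 2.0, 40.0        # adaptation timescales, in trials (assumed)
w_fast, w_slow, r0 = 0.4, 0.4, 1.0    # history weights and unadapted response (assumed)

hist = {"fast": {True: 0.0, False: 0.0}, "slow": {True: 0.0, False: 0.0}}
resp = np.empty(n)
for i, s in enumerate(is_common):
    s = bool(s)
    # Response depends linearly on the stimulus-specific history at both timescales.
    resp[i] = r0 - w_fast * hist["fast"][s] - w_slow * hist["slow"][s]
    for name, tau in (("fast", tau_fast), ("slow", tau_slow)):
        for tone in (True, False):    # decay both stimulus histories ...
            hist[name][tone] *= 1.0 - 1.0 / tau
        hist[name][s] += 1.0 / tau    # ... then register the current tone

print("mean response to common tone:", round(resp[is_common].mean(), 3))
print("mean response to rare tone:  ", round(resp[~is_common].mean(), 3))  # larger: SSA
```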

  13. Combined small-molecule inhibition accelerates the derivation of functional, early-born, cortical neurons from human pluripotent stem cells

    Science.gov (United States)

    Qi, Yuchen; Zhang, Xin-Jun; Renier, Nicolas; Wu, Zhuhao; Atkin, Talia; Sun, Ziyi; Ozair, M. Zeeshan; Tchieu, Jason; Zimmer, Bastian; Fattahi, Faranak; Ganat, Yosif; Azevedo, Ricardo; Zeltner, Nadja; Brivanlou, Ali H.; Karayiorgou, Maria; Gogos, Joseph; Tomishima, Mark; Tessier-Lavigne, Marc; Shi, Song-Hai; Studer, Lorenz

    2017-01-01

    Considerable progress has been made in converting human pluripotent stem cells (hPSCs) into functional neurons. However, the protracted timing of human neuron specification and functional maturation remains a key challenge that hampers the routine application of hPSC-derived lineages in disease modeling and regenerative medicine. Using a combinatorial small-molecule screen, we previously identified conditions for the rapid differentiation of hPSCs into peripheral sensory neurons. Here we generalize the approach to central nervous system (CNS) fates by developing a small-molecule approach for accelerated induction of early-born cortical neurons. Combinatorial application of 6 pathway inhibitors induces post-mitotic cortical neurons with functional electrophysiological properties by day 16 of differentiation, in the absence of glial cell co-culture. The resulting neurons, transplanted at 8 days of differentiation into the postnatal mouse cortex, are functional and establish long-distance projections, as shown using iDISCO whole brain imaging. Accelerated differentiation into cortical neuron fates should facilitate hPSC-based strategies for disease modeling and cell therapy in CNS disorders. PMID:28112759

  14. Magnetic Source Imaging of the Human Brain Reveals a Hierarchy of Memories and Their Lifetimes

    Science.gov (United States)

    Williamson, Samuel

    1998-03-01

    The advent of large arrays of superconducting sensors makes it possible to properly characterize the evolution of the magnetic field pattern near the human scalp produced by the spatio-temporal evolution of electric currents flowing within the cerebral cortex. With this capability a variety of dynamic phenomena can be elucidated, including the relaxation phenomena following a sensory stimulus. For both visual and auditory stimuli, magnetic source imaging (MSI) provides evidence that the cortical activation traces decay exponentially and thereby establish well-defined lifetimes. These lifetimes range from about 200 ms in the primary visual cortex to about 2 s in the primary auditory cortex. Moreover, higher processing stages, as in the parietal and temporal areas, exhibit lifetimes as long as 20 s or more.
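
    Estimating such a lifetime amounts to fitting an exponential decay R(t) = A * exp(-t / tau) to the post-stimulus activation trace and reading off tau. The sketch below uses a synthetic trace whose 2 s "true" lifetime merely echoes the auditory-cortex figure quoted above; it is not MSI data or the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic activation trace decaying after stimulus offset (illustrative values).
t = np.linspace(0.0, 10.0, 200)                 # seconds after stimulus
true_tau = 2.0                                  # assumed "true" lifetime in seconds
trace = np.exp(-t / true_tau) + 0.02 * rng.standard_normal(t.size)

def decay(t, amplitude, tau):
    return amplitude * np.exp(-t / tau)

# Least-squares fit of the exponential; tau is the estimated lifetime.
(amplitude, tau), _ = curve_fit(decay, t, trace, p0=(1.0, 1.0))
print(f"estimated lifetime tau = {tau:.2f} s")
```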

  15. Representation of auditory-filter phase characteristics in the cortex of human listeners

    DEFF Research Database (Denmark)

    Rupp, A.; Sieroka, N.; Gutschalk, A.

    2008-01-01

    ... consistent with the perceptual data obtained with the same stimuli and with results from simulations of neural activity at the output of cochlear preprocessing. These findings demonstrate that phase effects in peripheral auditory processing are accurately reflected up to the level of the auditory cortex. ...

  16. Cortical basis of communication: local computation, coordination, attention.

    Science.gov (United States)

    Alexandre, Frederic

    2009-03-01

    Human communication emerges from cortical processing, which is known to be implemented on a regular, repetitive neuronal substratum. The supposed genericity of cortical processing has elicited a series of modeling studies in computational neuroscience that highlight the information flows driven by the cortical circuitry. In the minimalist framework underlying current theories of embodied cognition, such generic cortical processing is exploited for the coordination of poles of representation, as reported in this paper for the case of visual attention. Interestingly, this case emphasizes how abstract internal referents are built to conform to memory requirements. This paper proposes that these referents are the basis for communication in humans, which is first of all a procedure of coordination and attention directed toward one's congeners.

  17. Auditory Reserve and the Legacy of Auditory Experience

    Directory of Open Access Journals (Sweden)

    Erika Skoe

    2014-11-01

    Full Text Available Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how listeners cope with auditory impoverishment or loss of sensorineural function.

  18. Functional Imaging of Audio–Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences?

    Science.gov (United States)

    Muers, Ross S.; Salo, Emma; Slater, Heather; Petkov, Christopher I.

    2017-01-01

    The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio–visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than in humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio–visual selective attention modulates the primate brain, identify sources for “lost” attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. PMID:28419201

  19. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  20. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…