Plakke, Bethany; Romanski, Lizabeth M.
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931
Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…
Maier, Joost X; Ghazanfar, Asif A
Looming signals (signals that indicate the rapid approach of objects) are behaviorally relevant signals for all animals. Accordingly, studies in primates (including humans) reveal attentional biases for detecting and responding to looming versus receding signals in both the auditory and visual domains. We investigated the neural representation of these dynamic signals in the lateral belt auditory cortex of rhesus monkeys. By recording local field potential and multiunit spiking activity while the subjects were presented with auditory looming and receding signals, we show here that auditory cortical activity was biased in magnitude toward looming versus receding stimuli. This directional preference was attributable neither to the absolute intensity of the sounds nor to simple adaptation, because white noise stimuli with identical amplitude envelopes did not elicit the same pattern of responses. This asymmetrical representation of looming versus receding sounds in the lateral belt auditory cortex suggests that it is an important node in the neural network correlate of looming perception.
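The looming/receding manipulation described above can be illustrated with a short sketch. This is a hypothetical stimulus-generation example, not the study's actual stimuli: the carrier frequency, sample rate, and 30 dB ramp depth are all assumed values. A looming sound is approximated as a tone whose level rises over time, and the receding sound is its time-reversal, so the two stimuli differ in envelope direction but not in total energy.

```python
import numpy as np

# Illustrative sketch (not the study's exact stimuli): looming and receding
# sounds approximated as a tone whose intensity rises or falls over time,
# mimicking an approaching or retreating sound source.

fs = 44100                       # sample rate in Hz (assumed)
dur = 1.0                        # stimulus duration in seconds (assumed)
t = np.arange(int(fs * dur)) / fs

carrier = np.sin(2 * np.pi * 1000 * t)      # 1 kHz tone, arbitrary choice

# Intensity ramp spanning 30 dB: rising for looming, falling for receding.
db_ramp = np.linspace(-30.0, 0.0, t.size)
looming = carrier * 10 ** (db_ramp / 20)
receding = looming[::-1]                    # time-reversed copy

# The two stimuli mirror each other in time but share total energy,
# which is the control that rules out absolute-intensity explanations.
rms = lambda x: np.sqrt(np.mean(x ** 2))
print(np.isclose(rms(looming), rms(receding)))  # → True
```

Matching the overall energy while reversing the envelope is the key design point: any response asymmetry must then reflect envelope direction rather than loudness.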
Brian N Pasley
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
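The linear spectrogram-reconstruction approach mentioned above can be sketched in a few lines. This is a simplified toy version on synthetic data, assuming a ridge-regularized linear decoder; the electrode count, spectrogram dimensions, and regularization strength are invented for illustration and do not reflect the study's actual model or data.

```python
import numpy as np

# Toy sketch of linear stimulus reconstruction: map simulated population
# neural activity back to an auditory spectrogram with ridge regression.
# All dimensions and the ridge penalty are illustrative assumptions.

rng = np.random.default_rng(0)

n_time, n_freq, n_elec = 500, 16, 32
spec = rng.random((n_time, n_freq))            # "true" auditory spectrogram
W_true = rng.normal(size=(n_freq, n_elec))     # unknown forward encoding map
neural = spec @ W_true + 0.1 * rng.normal(size=(n_time, n_elec))

# Fit a linear decoder G so that neural @ G approximates the spectrogram.
lam = 1.0                                      # ridge regularization strength
G = np.linalg.solve(neural.T @ neural + lam * np.eye(n_elec),
                    neural.T @ spec)
recon = neural @ G

# Reconstruction accuracy: mean correlation across frequency channels.
corr = np.mean([np.corrcoef(spec[:, f], recon[:, f])[0, 1]
                for f in range(n_freq)])
print(corr > 0.9)  # → True on this synthetic data
```

In practice such decoders are fit with time-lagged neural features and cross-validated; the sketch keeps only the core linear-regression step.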
A fundamental structure of sounds encountered in the natural environment is the harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including vocal apparatuses of humans and animal species as well as music instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of the harmonicity in many aspects of the hearing environment, it is natural to expect that it be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on the harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds. PMID:24381544
Scott, Brian H; Mishkin, Mortimer
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
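The delayed-match-to-sample task structure described above can be summarized in a minimal sketch. The stimulus labels, match probability, and delay are invented for illustration; this shows only the trial logic (sample, delay, test, match/nonmatch decision), not any specific study's protocol.

```python
import random

# Minimal sketch of a delayed-match-to-sample (DMS) trial of the kind
# described above. Stimulus names and parameters are hypothetical.

def dms_trial(rng, stimuli, p_match=0.5):
    """Return (sample, test, is_match) for one auditory DMS trial."""
    sample = rng.choice(stimuli)
    if rng.random() < p_match:
        test = sample                       # 'match': the sample repeats
    else:                                   # 'nonmatch': a different sound
        test = rng.choice([s for s in stimuli if s != sample])
    return sample, test, test == sample

rng = random.Random(42)
stimuli = ["tone_A", "tone_B", "vocalization", "noise_burst"]
trials = [dms_trial(rng, stimuli) for _ in range(1000)]

# With p_match=0.5, roughly half the trials should be match trials.
match_rate = sum(t[2] for t in trials) / len(trials)
print(abs(match_rate - 0.5) < 0.1)  # → True
```

In the neurophysiology experiments reviewed here, the interesting measurements happen in the delay between sample and test (persistent or suppressed firing) and at the test sound (match suppression or enhancement), which this trial skeleton locates but does not model.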
Gutschalk, Alexander; Brandt, Tobias; Bartsch, Andreas; Jansen, Claudia
In contrast to lesions of the visual and somatosensory cortex, lesions of the auditory cortex are not associated with self-evident contralesional deficits. Contralesional extinction has been observed after unilateral lesions of the auditory cortex only when two or more stimuli are presented simultaneously to the left and right. Because auditory extinction is also considered a sign of neglect, clinical separation of auditory neglect from deficits caused by lesions of the auditory cortex is challenging. Here, we directly compared a number of tests previously used for either auditory-cortex lesions or neglect in 29 controls and 27 patients suffering from unilateral auditory-cortex lesions, neglect, or both. The results showed that a dichotic-speech test revealed similar amounts of extinction for both auditory cortex lesions and neglect. Similar results were obtained for words lateralized by inter-aural time differences. Consistent extinction after auditory cortex lesions was also observed in a dichotic detection task. Neglect patients showed more general problems with target detection but no consistent extinction in the dichotic detection task. In contrast, auditory lateralization perception was biased toward the right in neglect but showed considerably less disruption by auditory cortex lesions. Lateralization of auditory-evoked magnetic fields in auditory cortex was highly correlated with extinction in the dichotic target-detection task. Moreover, activity in the right primary auditory cortex was somewhat reduced in neglect patients. The results confirm that auditory extinction is observed with lesions of the auditory cortex and auditory neglect. A distinction can nevertheless be made with dichotic target-detection tasks, auditory-lateralization perception, and magnetoencephalography. Copyright © 2012 Elsevier Ltd. All rights reserved.
Tervaniemi, Mari; Hugdahl, Kenneth
In the present review, we summarize the most recent findings and current views about the structural and functional basis of human brain lateralization in the auditory modality. Main emphasis is given to hemodynamic and electromagnetic data of healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (the planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is not bound to informational sound content but to rapid temporal information, which is more common in speech than in music sounds. Finally, we present evidence for the vulnerability of the functional specialization of sound processing. These altered forms of lateralization may be caused by top-down and bottom-up effects both inter- and intraindividually. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding.
Vibhakar C Kotak
The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation and to characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from PRh neurons. Blockade of GABA-A receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected in ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory-driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the auditory cortex.
Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.
Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.
Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B
Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Barbour, Dennis L; Wang, Xiaoqin
Natural sounds often contain energy over a broad spectral range and consequently overlap in frequency when they occur simultaneously; however, such sounds under normal circumstances can be distinguished perceptually (e.g., the cocktail party effect). Sound components arising from different sources have distinct (i.e., incoherent) modulations, and incoherence appears to be one important cue used by the auditory system to segregate sounds into separately perceived acoustic objects. Here we show that, in the primary auditory cortex of awake marmoset monkeys, many neurons responsive to amplitude- or frequency-modulated tones at a particular carrier frequency [the characteristic frequency (CF)] also demonstrate sensitivity to the relative modulation phase between two otherwise identically modulated tones: one at CF and one at a different carrier frequency. Changes in relative modulation phase reflect alterations in temporal coherence between the two tones, and the most common neuronal response was found to be a maximum of suppression for the coherent condition. Coherence sensitivity was generally found in a narrow frequency range in the inhibitory portions of the frequency response areas (FRA), indicating that only some off-CF neuronal inputs into these cortical neurons interact with on-CF inputs on the same time scales. Over the population of neurons studied, carrier frequencies showing coherence sensitivity were found to coincide with the carrier frequencies of inhibition, implying that inhibitory inputs create the effect. The lack of strong coherence-induced facilitation also supports this interpretation. Coherence sensitivity was found to be greatest for modulation frequencies of 16-128 Hz, which is higher than the phase-locking capability of most cortical neurons, implying that subcortical neurons could play a role in the phenomenon. Collectively, these results reveal that auditory cortical neurons receive some off-CF inputs temporally matched and some temporally
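The coherent-versus-incoherent two-tone manipulation described above can be sketched concretely. This is a hypothetical stimulus example, not the study's code: carrier frequencies, modulation rate, and duration are assumed values. Two amplitude-modulated tones share a modulation waveform, and shifting the relative modulation phase turns a temporally coherent pair into an incoherent one.

```python
import numpy as np

# Hedged sketch of the two-tone coherence stimulus: two sinusoidally
# amplitude-modulated carriers whose relative modulation phase sets their
# temporal coherence. All parameter values are illustrative assumptions.

fs = 44100
t = np.arange(int(fs * 0.5)) / fs            # 0.5 s stimulus
fm = 32.0                                    # modulation frequency (Hz)

def am_tone(fc, mod_phase):
    """Carrier at fc Hz with 100% sinusoidal AM at the given phase."""
    env = 0.5 * (1 + np.sin(2 * np.pi * fm * t + mod_phase))
    return env * np.sin(2 * np.pi * fc * t), env

tone_cf, env_cf = am_tone(1000.0, 0.0)        # tone at the neuron's CF
tone_coh, env_coh = am_tone(1500.0, 0.0)      # coherent: same mod phase
tone_inc, env_inc = am_tone(1500.0, np.pi)    # incoherent: anti-phase

# Envelope correlation indexes temporal coherence between the two tones.
coh = np.corrcoef(env_cf, env_coh)[0, 1]
inc = np.corrcoef(env_cf, env_inc)[0, 1]
print(round(coh, 2), round(inc, 2))  # → 1.0 -1.0
```

Both conditions contain identical carriers and identical modulation depth; only the relative modulation phase differs, which is why any response difference isolates coherence sensitivity.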
Simon, Jonathan Z
Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects. Copyright © 2014 Elsevier B.V. All rights reserved.
Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako
The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys (nine with and nine without auditory hypersensitivity) with autistic spectrum disorder and 12 age-matched controls. Autistic disorder with hypersensitivity showed significantly more delayed M50/M100 peak latencies than autistic disorder without hypersensitivity or the control. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates auditory hypersensitivity in autistic spectrum disorder as a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in this region. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.
Ainsworth, Matthew; Lee, Shane; Cunningham, Mark O; Roopun, Anita K; Traub, Roger D; Kopell, Nancy J; Whittington, Miles A
.... Here we show that, for inhibition-based gamma rhythms in vitro in rat neocortical slices, mechanistically distinct local circuit generators exist in different laminae of rat primary auditory cortex...
Christopher I Petkov
Anatomical studies propose that the primate auditory cortex contains more fields than have actually been functionally confirmed or described. Spatially resolved functional magnetic resonance imaging (fMRI) with carefully designed acoustical stimulation could be ideally suited to extend our understanding of the processing within these fields. However, after numerous experiments in humans, many auditory fields remain poorly characterized. Imaging the macaque monkey is of particular interest as this species has a richer set of anatomical and neurophysiological data to clarify the source of the imaged activity. We functionally mapped the auditory cortex of behaving and of anesthetized macaque monkeys with high resolution fMRI. By optimizing our imaging and stimulation procedures, we obtained robust activity throughout auditory cortex using tonal and band-passed noise sounds. Then, by varying the frequency content of the sounds, spatially specific activity patterns were observed over this region. As a result, the activity patterns could be assigned to many auditory cortical fields, including those whose functional properties were previously undescribed. The results provide an extensive functional tessellation of the macaque auditory cortex and suggest that 11 fields contain neurons tuned for the frequency of sounds. This study provides functional support for a model where three fields in primary auditory cortex are surrounded by eight neighboring "belt" fields in non-primary auditory cortex. The findings can now guide neurophysiological recordings in the monkey to expand our understanding of the processing within these fields. Additionally, this work will improve fMRI investigations of the human auditory cortex.
Paetau, R; Kajola, M; Korkman, M; Hämäläinen, M; Granström, M L; Hari, R
The Landau-Kleffner syndrome (LKS) is characterized by electroencephalographic spike discharges and verbal auditory agnosia in previously healthy children. We recorded magnetoencephalographic (MEG) spikes in a patient with LKS, and compared their sources with anatomical information from magnetic resonance imaging. All spikes originated close to the left auditory cortex. The evoked responses were contaminated by spikes in the left auditory area and suppressed in the right; the latter responses recovered when the spikes disappeared. We suggest that unilateral discharges at or near the auditory cortex disrupt auditory discrimination in the affected hemisphere, and lead to suppression of auditory information from the opposite hemisphere, thereby accounting for the two main criteria of LKS.
Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflected inhibition via interneurons using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak (5 dB increase for 1 ms) prepulse. The time course of the inhibition, evaluated by prepulses presented at 10-800 ms before the test stimulus, showed at least two temporally distinct inhibitions peaking at approximately 20-60 and 600 ms that presumably reflected IPSPs generated by fast-spiking, parvalbumin-positive cells and somatostatin-positive Martinotti cells, respectively. In another experiment, we confirmed that the degree of the inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and prepulse-evoked inhibition reflected activation in two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.
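The stimulus timeline described above can be sketched directly from the numbers in the abstract (10 dB test increment, 5 dB prepulse lasting 1 ms); the sample rate, noise carrier, and the particular 60 ms prepulse-to-test interval are assumptions added for illustration.

```python
import numpy as np

# Sketch of the PPI stimulus described above: a continuous sound whose
# level steps up 10 dB as the test stimulus, preceded by a brief 5 dB,
# 1 ms prepulse. Sample rate, carrier, and the single 60 ms interval
# shown here are illustrative assumptions.

fs = 48000
soa_ms = 60                       # prepulse-to-test interval (one of 10-800 ms)
base_ms, post_ms = 200, 100       # assumed baseline and post-test durations

def db_to_amp(db):
    return 10 ** (db / 20)

n = lambda ms: int(fs * ms / 1000)          # milliseconds → samples
level = np.zeros(n(base_ms + soa_ms + post_ms))  # level in dB re baseline
pre_on = n(base_ms)
level[pre_on:pre_on + n(1)] = 5.0           # 5 dB prepulse lasting 1 ms
level[pre_on + n(soa_ms):] = 10.0           # 10 dB test increment

carrier = np.random.default_rng(1).normal(size=level.size)  # noise carrier
stim = carrier * db_to_amp(level)

print(level.max(), np.count_nonzero(level == 5.0))  # → 10.0 48
```

Sweeping `soa_ms` over the 10-800 ms range reproduces the timing manipulation used to trace the two inhibition time courses.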
Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin
Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the fact that there were more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A (p < 0.05). These findings suggest that the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)
We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies.
Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions.
Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde
Previous studies have shown that the functional development of the auditory system is substantially influenced by the structure of environmental acoustic inputs in early life. In the present study, we investigated the effects of early auditory enrichment with music on rat auditory discrimination learning. We found that early auditory enrichment with music from postnatal day (PND) 14 enhanced learning ability in an auditory signal-detection task and in a sound duration-discrimination task. In parallel, a significant increase was noted in NMDA receptor subunit NR2B protein expression in the auditory cortex. Furthermore, we found that auditory enrichment with music starting from PND 28 or 56 did not influence NR2B expression in the auditory cortex. No difference was found in NR2B expression in the inferior colliculus (IC) between music-exposed and normal rats, regardless of when the auditory enrichment with music was initiated. Our findings suggest that early auditory enrichment with music influences NMDA-mediated neural plasticity, which results in enhanced auditory discrimination learning.
In auditory cortex, neural responses decrease with stimulus repetition, a phenomenon known as adaptation. Adaptation is thought to facilitate detection of novel sounds and improve perception in noisy environments. Although it is well established that adaptation occurs in primary auditory cortex, it is not known whether adaptation also occurs in higher auditory areas involved in processing complex sounds, such as speech. Resolving this issue is important for understanding the neural bases of adaptation and for avoiding potential post-operative deficits after temporal lobe surgery for treatment of focal epilepsy. Intracranial electrocorticographic recordings were acquired simultaneously from electrodes implanted in primary and association auditory areas of the right (non-dominant) temporal lobe in a patient with complex partial seizures originating from the inferior parietal lobe. Simple and complex sounds were presented in a passive oddball paradigm. We measured changes in single-trial high-gamma power (70–150 Hz) and in regional and inter-regional network-level activity indexed by cross-frequency coupling. Repetitive tones elicited the greatest adaptation and corresponding increases in cross-frequency coupling in primary auditory cortex. Conversely, auditory association cortex showed stronger adaptation for complex sounds, including speech. This first report of multi-regional adaptation in human auditory cortex highlights the role of the non-dominant temporal lobe in suppressing neural responses to repetitive background sounds (noise). These results underscore the clinical utility of functional mapping to avoid potential post-operative deficits, including increased listening difficulties in noisy, real-world environments.
Zatorre, Robert J; Halpern, Andrea R
Most people intuitively understand what it means to "hear a tune in your head." Converging evidence now indicates that auditory cortical areas can be recruited even in the absence of sound and that this corresponds to the phenomenological experience of imagining music. We discuss these findings as well as some methodological challenges. We also consider the role of core versus belt areas in musical imagery, the relation between auditory and motor systems during imagery of music performance, and practical implications of this research.
Song, Yu; Liu, Junxiu; Ma, Furong; Mao, Lanqun
Diazepam can reduce the excitability of the lateral amygdala and eventually suppress the excitability of the auditory cortex in rats following salicylate treatment, indicating a regulating effect of the lateral amygdala on the auditory cortex in the tinnitus process. The aim was to study the spontaneous firing rates (SFR) of the auditory cortex and lateral amygdala as regulated by diazepam in a tinnitus rat model induced by sodium salicylate. This study first created a tinnitus rat model induced by sodium salicylate and recorded the SFR of both the auditory cortex and the lateral amygdala. Then diazepam was injected intraperitoneally and the SFR changes of the lateral amygdala were recorded. Finally, diazepam was microinjected into the lateral amygdala and the SFR changes of the auditory cortex were recorded. The SFRs of both the auditory cortex and the lateral amygdala increased after salicylate treatment. The SFR of the lateral amygdala decreased after intraperitoneal injection of diazepam. Microinjecting diazepam into the lateral amygdala decreased the SFR of the auditory cortex both ipsilaterally and contralaterally.
Allman, Brian L; Keniston, Leslie P; Meredith, M Alex
In response to early or developmental lesions, responsiveness of sensory cortex can be converted from the deprived modality to that of the remaining sensory systems. However, little is known about capacity of the adult cortex for cross-modal reorganization. The present study examined the auditory cortices of animals deafened as adults, and observed an extensive somatosensory conversion within as little as 16 days after deafening. These results demonstrate that cortical cross-modal reorganization can occur after the period of sensory system maturation.
How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency-modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
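The sparseness statistic reported above (high firing rates in under 5% of neurons at any instant, with a lognormal overall rate distribution) is easy to illustrate numerically. The sketch below is a toy simulation, not the paper's data; the lognormal parameters are assumptions chosen only so the median rate is on the order of 1 spike/s.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: draw per-neuron firing rates from a lognormal
# distribution, as the abstract reports for the awake population response,
# and measure what fraction of neurons exceed the 20 spikes/s criterion.
n_neurons = 10_000
# mean=0, sigma=1.5 are assumed parameters (median rate = 1 spike/s).
rates = rng.lognormal(mean=0.0, sigma=1.5, size=n_neurons)

active_fraction = np.mean(rates > 20.0)
print(f"fraction of neurons above 20 spikes/s: {active_fraction:.3%}")
```

With these assumed parameters the heavy lognormal tail leaves only a few percent of neurons above 20 spikes/s at once, in line with the sparse-coding account.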
Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.
The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, and found systematic depth dependencies in responses to second-and-later noise bursts in slow (1–10 bursts/s) trains of noise bursts. At all depths, responses to noise bursts within a train usually decreased with increasing train rate; however, the rolloff with increasing train rate occurred at faster rates in more superficial layers. Moreover, in some recordings from mid-to-superficial layers, responses to noise bursts within a 3–4 bursts/s train were stronger than responses to noise bursts in slower trains. This non-monotonicity with train rate was especially pronounced in more superficial layers of the anterior auditory field, where responses to noise bursts within the context of a slow train were sometimes even stronger than responses to the noise burst at train onset. These findings may reflect depth dependence in suppression and recovery of cortical activity following a stimulus, which we suggest could arise from laminar differences in synaptic depression at feedforward and recurrent synapses. PMID:21900562
Fenoy, Albert J; Severson, Meryl A; Volkov, Igor O; Brugge, John F; Howard, Matthew A
In the course of performing electrical stimulation functional mapping (ESFM) in neurosurgery patients, we identified three subjects who experienced hearing suppression during stimulation of sites within the superior temporal gyrus (STG). One of these patients had long-standing tinnitus that affected both ears. In all subjects, auditory event related potentials (ERPs) were recorded from chronically implanted intracranial electrodes and the results were used to localize auditory cortical fields within the STG. Hearing suppression sites were identified within anterior lateral Heschl's gyrus (HG) and posterior lateral STG, in what may be auditory belt and parabelt fields. Cortical stimulation suppressed hearing in both ears, which persisted beyond the period of electrical stimulation. Subjects experienced other stimulation-evoked perceptions at some of these same sites, including symptoms of vestibular activation and alteration of audio-visual speech processing. In contrast, stimulation of presumed core auditory cortex within posterior medial HG evoked sound perceptions, or in one case an increase in tinnitus intensity, that affected the contralateral ear and did not persist beyond the period of stimulation. The current results confirm a rarely reported experimental observation, and correlate the cortical sites associated with hearing suppression with physiologically identified auditory cortical fields.
Profant, Oliver; Tintěra, Jaroslav; Balogová, Zuzana; Ibrahim, Ibrahim; Jilek, Milan; Syka, Josef
Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim of exploring the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The elderly group with expressed presbycusis (EP) differed from the elderly group with mild presbycusis (MP) in hearing thresholds measured by pure tone audiometry, in the presence and amplitudes of transient otoacoustic emissions (TEOAE) and distortion-product otoacoustic emissions (DPOAE), as well as in speech understanding under noisy conditions. Acoustically evoked activity (pink noise centered around 350 Hz, 700 Hz, 1.5 kHz, 3 kHz, and 8 kHz), recorded by BOLD fMRI from an area centered on Heschl's gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustic stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more pronounced than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing. PMID:25734519
Kostopoulos, Penelope; Petrides, Michael
There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top–down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience. PMID:26831102
Fetoni, Anna R.
Growing evidence suggests that cochlear stressors such as noise exposure and aging can induce homeostatic/maladaptive changes in the central auditory system from the brainstem to the cortex. Studies centered on such changes have revealed several mechanisms that operate in the context of sensory disruption after insult (noise trauma, drug- or age-related injury). Oxidative stress is central to current theories of noise-induced sensorineural hearing loss and aging, and interventions to attenuate the hearing loss are based on antioxidant agents. The present review addresses the recent literature on the alterations in hair cells and spiral ganglion neurons due to noise-induced oxidative stress in the cochlea, as well as on the impact of cochlear damage on auditory cortex neurons. The emerging picture emphasizes that noise-induced deafferentation and upward spread of cochlear damage are associated with altered dendritic architecture of auditory pyramidal neurons. The cortical modifications may be reversed by treatment with antioxidants counteracting the cochlear redox imbalance. These findings open new therapeutic approaches to treat the functional consequences of cortical reorganization following cochlear damage.
Weis, Tina; Brechmann, André; Puschmann, Sebastian; Thiel, Christiane M
Associative learning studies have shown that the anticipation of reward and punishment shapes the representation of sensory stimuli, which is further modulated by dopamine. Less is known about whether and how reward delivery activates sensory cortices and the role of dopamine at that time point of learning. We used an appetitive instrumental learning task in which participants had to learn that a specific class of frequency-modulated tones predicted a monetary reward following fast and correct responses in a succeeding reaction time task. These fMRI data were previously analyzed regarding the effect of reward anticipation, but here we focused on neural activity to the reward outcome relative to the reward expectation and tested whether such activation in the reward reception phase is modulated by L-DOPA. We analyzed neural responses at the time point of reward outcome under three different conditions: 1) when a reward was expected and received, 2) when a reward was expected but not received, and 3) when a reward was not expected and not received. Neural activity in auditory cortex was enhanced during feedback delivery either when an expected reward was received or when the expectation of obtaining no reward was correct. This differential neural activity in auditory cortex was only seen in subjects who learned the reward association and not under dopaminergic modulation. Our data provide evidence that auditory cortices are active at the time point of reward outcome. However, responses are not dependent on the reward itself but on whether the outcome confirmed the subject's expectations.
Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell
Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.
Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20–80 ms duration information) and the theta band (~150–300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization, while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
Basura, Gregory J; Koehler, Seth D; Shore, Susan E
Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and firing rates in response to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single- and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), with greater suppression at 20 ms pairing intervals for single-unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models.
Rubin, Jonathan; Ulanovsky, Nachum; Tishby, Naftali
To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only few details of that sequence. PMID:27490251
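The idea of a trial-by-trial prediction error computed from a limited-complexity summary of the recent stimulus history can be sketched with a toy surprise measure. This is our simplification, not the authors' information-theoretic model: probabilities are estimated by Laplace-smoothed counts over a sliding window of the last ten stimuli, and surprise is the negative log probability of each incoming tone.

```python
import numpy as np

# Toy prediction-error signal for an oddball tone sequence: estimate the
# probability of each tone from the last N stimuli and score each new tone
# by its surprise, -log2 P(tone | recent history). Rare deviants should
# carry larger prediction error than common standards.
def surprises(sequence, history_len=10, alpha=1.0, n_symbols=2):
    out = []
    for i, s in enumerate(sequence):
        hist = sequence[max(0, i - history_len):i]
        # Laplace-smoothed frequency estimate over the recent window
        p = (hist.count(s) + alpha) / (len(hist) + alpha * n_symbols)
        out.append(-np.log2(p))
    return out

# 90/10 oddball sequence: tone 0 is the standard, tone 1 the deviant.
rng = np.random.default_rng(3)
seq = list((rng.random(500) < 0.1).astype(int))

surp = surprises(seq)
dev = np.mean([x for x, s in zip(surp, seq) if s == 1])
std = np.mean([x for x, s in zip(surp, seq) if s == 0])
print(f"mean surprise, deviant vs standard: {dev:.2f} vs {std:.2f}")
```

Shortening `history_len` or coarsening the representation lowers the complexity of the stored past, mirroring the trade-off between predictive power and representation complexity discussed in the abstract.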
Weinberger, Norman M
Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. The sensitivity and selectivity for signal tones increase near threshold, tuning above threshold shifts toward the frequency of acoustic signals, and their area of representation can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing a tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex.
Li, Jingcheng; Liao, Xiang; Zhang, Jianxiong; Wang, Meng; Yang, Nian; Zhang, Jun; Lv, Guanghui; Li, Haohong; Lu, Jian; Ding, Ran; Li, Xingyi; Guang, Yu; Yang, Zhiqi; Qin, Han; Jin, Wenjun; Zhang, Kuan; He, Chao; Jia, Hongbo; Zeng, Shaoqun; Hu, Zhian; Nelken, Israel; Chen, Xiaowei
The ability of the brain to predict future events based on the pattern of recent sensory experience is critical for guiding an animal's behavior. Neocortical circuits for ongoing processing of sensory stimuli are extensively studied, but their contributions to the anticipation of upcoming sensory stimuli remain less understood. We therefore used in vivo cellular imaging and fiber photometry to record from mouse primary auditory cortex to elucidate its role in processing anticipated stimulation. We found neuronal ensembles in layers 2/3, 4, and 5 that were activated in relationship to anticipated sound events following rhythmic stimulation. These neuronal activities correlated with the occurrence of anticipatory motor responses in an auditory learning task. Optogenetic manipulation experiments revealed an essential role of such neuronal activities in producing the anticipatory behavior. These results strongly suggest that the neural circuits of primary sensory cortex are critical for coding predictive information and transforming it into anticipatory motor behavior.
Ressel, Volker; Pallier, Christophe; Ventura-Campos, Noelia; Díaz, Begoña; Roessler, Abeba; Ávila, César; Sebastián-Gallés, Núria
Two studies (Golestani et al., 2007; Wong et al., 2008) have reported a positive correlation between the ability to perceive foreign speech sounds and the volume of Heschl's gyrus (HG), the structure that houses the auditory cortex. More precisely, participants with larger left Heschl's gyri learned consonantal or tonal contrasts faster than those with smaller HG. These studies leave open the question of the impact of experience on HG volumes. In the current research, we investigated the effect of early language exposure on Heschl's gyrus by comparing Spanish-Catalan bilinguals who have been exposed to two languages since childhood, to a group of Spanish monolinguals matched in education, socio-economic status, and musical experience. Manual volumetric measurements of HG revealed that bilinguals have, on average, larger Heschl's gyri than monolinguals. This was corroborated, for the left Heschl's gyrus, by a voxel-based morphometry analysis showing larger gray matter volumes in bilinguals than in monolinguals. Since the bilinguals in this study were not a self-selected group, this observation provides a clear demonstration that learning a second language is a causal factor in the increased size of the auditory cortex.
Shenton, Martha E
Background: Oscillatory electroencephalogram (EEG) abnormalities may reflect neural circuit dysfunction in neuropsychiatric disorders. Previously we have found positive correlations between the phase synchronization of beta and gamma oscillations and hallucination symptoms in schizophrenia patients. These findings suggest that the propensity for hallucinations is associated with an increased tendency for neural circuits in sensory cortex to enter states of oscillatory synchrony. Here we tested this hypothesis by examining whether the 40 Hz auditory steady-state response (ASSR) generated in the left primary auditory cortex is positively correlated with auditory hallucination symptoms in schizophrenia. We also examined whether the 40 Hz ASSR deficit in schizophrenia was associated with cross-frequency interactions. Sixteen healthy control subjects (HC) and 18 chronic schizophrenia patients (SZ) listened to 40 Hz binaural click trains. The EEG was recorded from 60 electrodes and average-referenced offline. A 5-dipole model was fit from the HC grand average ASSR, with 2 pairs of superior temporal dipoles and a deep midline dipole. Time-frequency decomposition was performed on the scalp EEG and source data. Results: Phase locking factor (PLF) and evoked power were reduced in SZ at fronto-central electrodes, replicating prior findings. PLF was reduced in SZ for non-homologous right and left hemisphere sources. Left hemisphere source PLF in SZ was positively correlated with auditory hallucination symptoms, and was modulated by delta phase. Furthermore, the correlations between source evoked power and PLF found in HC were reduced in SZ for the left hemisphere sources. Conclusion: These findings suggest that differential neural circuit abnormalities may be present in the left and right auditory cortices in schizophrenia. In addition, they provide further support for the hypothesis that hallucinations are related to cortical hyperexcitability.
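The phase locking factor (PLF, also called inter-trial coherence) used in this study is simply the magnitude of the mean unit phasor across trials at a given time-frequency point. The sketch below shows the computation on simulated phases; the numbers are illustrative, not EEG data.

```python
import numpy as np

rng = np.random.default_rng(1)

# PLF = |mean over trials of exp(i * phase)|. Values near 1 mean phases
# are tightly locked across trials; values near 0 mean random phases.
def phase_locking_factor(phases):
    """phases: array of shape (n_trials,), in radians, at one time-frequency point."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

n_trials = 100
locked = rng.normal(loc=0.5, scale=0.3, size=n_trials)    # concentrated phases
random_ph = rng.uniform(-np.pi, np.pi, size=n_trials)     # uniform phases

plf_locked = phase_locking_factor(locked)
plf_random = phase_locking_factor(random_ph)
print(f"PLF (locked):  {plf_locked:.2f}")
print(f"PLF (random):  {plf_random:.2f}")
```

In practice the trial phases come from a time-frequency decomposition (e.g., wavelets) of the evoked EEG, and the PLF is computed per electrode or source at the 40 Hz stimulation frequency.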
Reser, D H; Fishman, Y I; Arezzo, J C; Steinschneider, M
The functional organization of primary auditory cortex in non-primates is generally modeled as a tonotopic gradient with an orthogonal representation of independently mapped binaural interaction columns along the isofrequency contours. Little information is available regarding the validity of this model in the primate brain, despite the importance of binaural cues for sound localization and auditory scene analysis. Binaural and monaural responses of A1 to pure tone stimulation were studied using auditory evoked potentials, current source density and multiunit activity. Key findings include: (i) differential distribution of binaural responses with respect to best frequency, such that 74% of the sites exhibiting binaural summation had best frequencies below 2000 Hz; (ii) the pattern of binaural responses was variable with respect to cortical depth, with binaural summation often observed in the supragranular laminae of sites showing binaural suppression in thalamorecipient laminae; and (iii) dissociation of binaural responses between the initial and sustained action potential firing of neuronal ensembles in A1. These data support earlier findings regarding the temporal and spatial complexity of responses in A1 in the awake state, and are inconsistent with a simple orthogonal arrangement of binaural interaction columns and best frequency in A1 of the awake primate.
Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L
Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than an alternative to, monotonic neurons. NEW & NOTEWORTHY: Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.
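The population-decoding logic described here can be illustrated with a toy simulation. The tuning shapes below (sigmoids for monotonic neurons, Gaussians for nonmonotonic ones) and the template-matching decoder are assumptions for demonstration, not the authors' A1 data or method:

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.arange(0, 80, 5)                        # dB grid (assumed)

# assumed rate-level functions: sigmoids for monotonic neurons,
# Gaussians for nonmonotonic (level-tuned) neurons; peak ~50 spikes/s
mono = np.array([1 / (1 + np.exp(-(levels - th) / 5.0))
                 for th in (10, 25, 40, 55)])
nonm = np.array([np.exp(-0.5 * ((levels - b) / 8.0) ** 2)
                 for b in (15, 35, 55, 70)])
pop = np.vstack([mono, nonm]) * 50.0                # (n_neurons, n_levels)

def decode(resp):
    """Template matching: report the level whose mean population
    response pattern is closest to the observed spike counts."""
    return levels[np.argmin(((pop.T - resp) ** 2).sum(axis=1))]

errors = []
for i, true_level in enumerate(levels):
    for _ in range(20):                             # Poisson spiking noise
        errors.append(abs(decode(rng.poisson(pop[:, i])) - true_level))
print(np.mean(errors))                              # mean error in dB
```

In this toy setup the mixed population decodes level accurately across the grid; dropping either neuron class degrades coverage at some levels, which is the intuition behind the complementary-roles conclusion.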
Hironori Kuga, M.D.
We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute-state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.
Engineer, C T; Centanni, T M; Im, K W; Borland, M S; Moreno, N A; Carraway, R S; Wilson, L G; Kilgard, M P
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. © 2014 Wiley Periodicals, Inc.
Mesgarani, Nima; David, Stephen V; Fritz, Jonathan B; Shamma, Shihab A
A controversial issue in neurolinguistics is whether basic neural auditory representations found in many animals can account for human perception of speech. This question was addressed by examining how a population of neurons in the primary auditory cortex (A1) of the naive awake ferret encodes phonemes and whether this representation could account for the human ability to discriminate them. When neural responses were characterized and ordered by spectral tuning and dynamics, perceptually significant features, including formant patterns in vowels and place and manner of articulation in consonants, were readily visualized by activity in distinct neural subpopulations. Furthermore, these responses faithfully encoded the similarity between the acoustic features of these phonemes. A simple classifier trained on the neural representation was able to simulate human phoneme confusion when tested with novel exemplars. These results suggest that A1 responses are sufficiently rich to encode and discriminate phoneme classes and that humans and animals may build upon the same general acoustic representations to learn boundaries for categorical and robust sound classification.
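A "simple classifier" of the kind the abstract describes can be sketched as a nearest-centroid readout over population response patterns. Everything below (the 100-neuron templates, the phoneme labels, the noise model) is a hypothetical stand-in for the real ferret A1 data:

```python
import numpy as np

rng = np.random.default_rng(2)
classes = ["b", "d", "g", "p"]

# hypothetical population "templates": one mean response pattern over
# 100 model neurons per phoneme class (a stand-in for real A1 data)
means = {c: rng.normal(0, 1, 100) for c in classes}

def classify(resp):
    """Nearest-centroid readout over the population response pattern."""
    return min(classes, key=lambda c: np.linalg.norm(resp - means[c]))

# confusion counts from noisy novel exemplars of each class
confusion = {c: {k: 0 for k in classes} for c in classes}
for c in classes:
    for _ in range(50):
        exemplar = means[c] + rng.normal(0, 1, 100)  # novel noisy token
        confusion[c][classify(exemplar)] += 1
```

With realistic templates, the off-diagonal counts would concentrate on acoustically similar phonemes, which is how such a classifier can mirror human confusion patterns.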
Prior studies suggest that reward modulates neural activity in sensory cortices, but less is known about punishment. We used functional magnetic resonance imaging and an auditory discrimination task in which participants had to judge the duration of frequency-modulated tones. In one session correct performance resulted in financial gains at the end of the trial; in a second session incorrect performance resulted in financial loss. Incorrect performance in the rewarded condition, as well as correct performance in the punishment condition, resulted in a neutral outcome. The size of gains and losses was either low or high (10 or 50 Euro cents), depending on the direction of frequency modulation. We analyzed neural activity at the end of the trial, during reinforcement, and found increased neural activity in auditory cortex when gaining a financial reward as compared to gaining no reward, and when avoiding financial loss as compared to receiving a financial loss. This was independent of the size of gains and losses. A similar pattern of neural activity for both gaining a reward and avoiding a loss was also seen in right middle temporal gyrus, bilateral insula and pre-supplementary motor area; here, however, neural activity was lower after correct responses compared to incorrect responses. To summarize, this study shows that the activation of sensory cortices, as previously shown for gaining a reward, is also seen when avoiding a loss.
Da Costa, Sandra; van der Zwaag, Wietske; Marques, Jose P; Frackowiak, Richard S J; Clarke, Stephanie; Saenz, Melissa
The primary auditory cortex (PAC) is central to human auditory abilities, yet its location in the brain remains unclear. We measured the two largest tonotopic subfields of PAC (hA1 and hR) using high-resolution functional MRI at 7 T relative to the underlying anatomy of Heschl's gyrus (HG) in 10 individual human subjects. The data reveals a clear anatomical-functional relationship that, for the first time, indicates the location of PAC across the range of common morphological variants of HG (single gyri, partial duplications, and complete duplications). In 20/20 individual hemispheres, two primary mirror-symmetric tonotopic maps were clearly observed with gradients perpendicular to HG. PAC spanned both divisions of HG in cases of partial and complete duplications (11/20 hemispheres), not only the anterior division as commonly assumed. Specifically, the central union of the two primary maps (the hA1-R border) was consistently centered on the full Heschl's structure: on the gyral crown of single HGs and within the sulcal divide of duplicated HGs. The anatomical-functional variants of PAC appear to be part of a continuum, rather than distinct subtypes. These findings significantly revise HG as a marker for human PAC and suggest that tonotopic maps may have shaped HG during human evolution. Tonotopic mappings were based on only 16 min of fMRI data acquisition, so these methods can be used as an initial mapping step in future experiments designed to probe the function of specific auditory fields.
Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization, whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.
Meredith, M. Alex; Allman, Brian L.
The recent findings in several species that primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (the anterior auditory field, AAF, and the primary auditory field, A1) for tactile responsivity. Multiple single-unit recordings from anesthetized ferret cortex yielded histologically verified neurons (n=311) tested with electronically controlled auditory, visual and tactile stimuli and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone, and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect observed, which occurred in all neuron types, was suppression of the response to a concurrent auditory cue. The presence of tactile effects in core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in auditory cortex are not exclusively visual, that somatosensation plays a significant role in the modulation of acoustic processing, and that crossmodal plasticity following deafness may unmask these existing non-auditory functions.
Okada, Kayoko; Venezia, Jonathan H; Matchin, William; Saberi, Kourosh; Hickok, Gregory
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.
Profant, O; Škoch, A; Balogová, Z; Tintěra, J; Hlinka, J; Syka, J
Age-related hearing loss (presbycusis) is caused mainly by the hypofunction of the inner ear, but recent findings point also toward a central component of presbycusis. We used MR morphometry and diffusion tensor imaging (DTI) with a 3T MR system with the aim to study the state of the central auditory system in a group of elderly subjects (>65 years) with mild presbycusis, in a group of elderly subjects with expressed presbycusis and in young controls. Cortical reconstruction, volumetric segmentation and auditory pathway tractography were performed. Three parameters were evaluated by morphometry: the volume of the gray matter, the surface area of the gyrus and the thickness of the cortex. In all experimental groups the surface area and gray matter volume were larger on the left side in Heschl's gyrus and planum temporale and slightly larger in the gyrus frontalis superior, whereas they were larger on the right side in the primary visual cortex. Almost all of the measured parameters were significantly smaller in the elderly subjects in Heschl's gyrus, planum temporale and gyrus frontalis superior. Aging did not change the side asymmetry (laterality) of the gyri. In the central part of the auditory pathway above the inferior colliculus, a trend toward an effect of aging was present in the axial vector of the diffusion (L1) variable of DTI, with increased values observed in elderly subjects. A trend toward a decrease of L1 on the left side, which was more pronounced in the elderly groups, was observed. The effect of hearing loss was present in subjects with expressed presbycusis as a trend toward an increase of the radial vectors (L2L3) in the white matter under Heschl's gyrus. These results suggest that in addition to peripheral changes, changes in the central part of the auditory system in elderly subjects are also present; however, the extent of hearing loss does not play a significant role in the central changes. Copyright © 2013 IBRO. Published by Elsevier Ltd.
Wong, Carmen; Chabot, Nicole; Kok, Melanie A; Lomber, Stephen G
Cross-modal plasticity following peripheral sensory loss enables deprived cortex to provide enhanced abilities in remaining sensory systems. These functional adaptations have been demonstrated in cat auditory cortex following early-onset deafness in electrophysiological and psychophysical studies. However, little information is available concerning any accompanying structural compensations. To examine the influence of sound experience on areal cartography, auditory cytoarchitecture was examined in hearing cats, early-deaf cats, and cats with late-onset deafness. Cats were deafened shortly after hearing onset or in adulthood. Cerebral cytoarchitecture was revealed immunohistochemically using SMI-32, a monoclonal antibody used to distinguish auditory areas in many species. Auditory areas were delineated in coronal sections and their volumes measured. Staining profiles observed in hearing cats were conserved in early- and late-deaf cats. In all deaf cats, dorsal auditory areas were the most mutable. Early-deaf cats showed further modifications, with significant expansions in second auditory cortex and ventral auditory field. Borders between dorsal auditory areas and adjacent visual and somatosensory areas were shifted ventrally, suggesting expanded visual and somatosensory cortical representation. Overall, this study shows the influence of acoustic experience in cortical development, and suggests that the age of auditory deprivation may significantly affect auditory areal cartography. © The Author 2013. Published by Oxford University Press. All rights reserved.
de Lavernhe-Lemaire, M C; Robier, A
A possible modulation of the afferent auditory message by the cortex is the subject of this study. To test this hypothesis, white-noise clicks (10 Hz, 100 microseconds) at 40 and 70 dB HL were delivered alternately to the ears of normally hearing volunteers while brainstem evoked potentials were recorded. The subjects were asked to focus their attention on one ear or the other, or to relax it. Thirty subjects under 25 years of age (15 men and 15 women) with normal hearing were split into two groups. The first group was asked to focus first on the more strongly stimulated ear (70 dB), the second group on the more weakly stimulated one (40 dB). Each subject received (1) without any instruction about attention: 40 dB on the left ear (L), 70 dB on the right ear (R); 40 dB then 70 dB bilateral; (2) two runs with 40 dB on the L and 70 dB on the R, focusing on the more or less strongly stimulated ear; (3) a run without instruction with 70 dB on the L and 40 dB on the R; and (4) two runs with 70 dB on the L and 40 dB on the R, focusing on the more or less strongly stimulated ear. On the simultaneously recorded evoked potentials, amplitudes and latencies of the peaks were measured and compared. From these experiments, the following elements were obtained. (1) The measured potentials were always caused by ipsilateral stimuli. (2) Focusing on the left or right ear was not equivalent. (3) A gender difference appeared in the brainstem auditory responses. (4) Preferential attention paid to the left ear was more efficient than to the right one. (5) Attention can alter the whole nervous pathway, with considerable lengthening of O-I, O-III, O-V, III-V and I-V latencies, but not I-III. The wave III amplitude generally decreased on the side where attention was focused, while wave V seemed not to vary. These first results indicate that a cortico-efferent pathway engaged by attention plays a role in the auditory responses, modifying the afferent message. These effects were…
Fitzpatrick, Douglas C.; Roberts, Jason M.; Kuwada, Shigeyuki; Kim, Duck O.; Filipovic, Blagoje
Processing dynamic changes in the stimulus stream is a major task for sensory systems. In the auditory system, an increase in the temporal integration window between the inferior colliculus (IC) and auditory cortex is well known for monaural signals such as amplitude modulation, but a similar increase with binaural signals has not been demonstrated. To examine the limits of binaural temporal processing at these brain levels, we used the binaural beat stimulus, which causes a fluctuating inter...
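A binaural beat stimulus of the kind described here is simple to construct: present slightly mismatched pure tones to the two ears, so that the interaural phase difference cycles at the difference frequency. A minimal sketch with illustrative parameters (the study's actual carrier and beat frequencies are not given in this abstract):

```python
import numpy as np

def binaural_beat(f_left=400.0, beat=4.0, dur=1.0, fs=44100):
    """Dichotic binaural-beat stimulus: slightly mismatched pure tones in
    the two ears make the interaural phase difference cycle at the beat
    rate. Frequencies here are illustrative, not those of the study."""
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2 * np.pi * f_left * t)               # e.g. 400 Hz left
    right = np.sin(2 * np.pi * (f_left + beat) * t)     # 404 Hz right
    return np.stack([left, right], axis=1)              # (samples, 2)

stim = binaural_beat()   # 4 Hz interaural phase modulation
```

Because the fluctuation exists only in the interaural comparison, not in either monaural waveform, raising the beat rate until responses no longer follow it probes specifically binaural temporal limits.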
Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activity was recorded using optical imaging at high spatiotemporal resolution from multiple areas of the guinea pig visual cortex in response to visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulations resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction.
Moerel, Michelle; De Martino, Federico; Santoro, Roberta; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
We examine the mechanisms by which the human auditory cortex processes the frequency content of natural sounds. Through mathematical modeling of ultra-high field (7 T) functional magnetic resonance imaging responses to natural sounds, we derive frequency-tuning curves of cortical neuronal populations. With a data-driven analysis, we divide the auditory cortex into five spatially distributed clusters, each characterized by a spectral tuning profile. Beyond neuronal populations with simple sing...
Razak, Khaleel A.; Fuzessery, Zoltan M.
A consistent organizational feature of auditory cortex is a clustered representation of binaural properties. Here we address two questions: what is the intrinsic organization of binaural clusters, and to what extent does intracortical processing contribute to binaural representation? We address these issues in the auditory cortex of the pallid bat. The pallid bat listens to prey-generated noise transients to localize and hunt terrestrial prey. As in other species studied, binaural clusters are...
Zhang, Jiping; Nakamoto, Kyle T; Kitzes, Leonard M
The binaural interactions of neurons were studied in the primary auditory cortex (AI) of barbiturate-anesthetized cats with a matrix of binaural tonal stimuli varying in both interaural level differences (ILD) and average binaural level (ABL). The purpose of this study was to determine: 1) the distribution of preferred binaural combinations (PBCs) of a large population of neurons and its relationships with binaural interactions and binaural monotonicity; 2) whether monaural responses are predictive of binaural responses; and 3) whether there is a restricted set of representative binaural stimulus configurations that could effectively classify the binaural interactions. Binaural interactions were often diverse in the matrix and dependent on both ABL and ILD. Compared with previous studies, a higher proportion of mixed binaural interaction type and a lower proportion of EO/I type were found. No monaural neurons were found. Binaural responses often differed from monaural responses in the number of spikes and/or the form of the response functions. The PBCs of the majority of EO and PB neurons were in the contralateral field and midline, respectively. However, the PBCs of EE units were evenly distributed across the contralateral and ipsilateral fields. The majority of the nonmonotonic neurons responded most strongly to lower ABLs, whereas the majority of monotonic neurons responded most strongly to higher ABLs. This study demonstrated that in AI a restricted set of binaural stimulus configurations is not sufficient to reveal the binaural response properties. Also, monaural responses are not predictive of binaural responses.
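An ILD x ABL stimulus matrix of the kind used here maps directly onto per-ear levels. The sketch below assumes the common conventions that ILD is the right-minus-left level difference and ABL is the mean of the two ear levels (the study's actual sign conventions and level grids are not stated in the abstract):

```python
import itertools

def binaural_matrix(abls=(20, 40, 60), ilds=(-20, -10, 0, 10, 20)):
    """Each stimulus is a (left_dB, right_dB) pair; here ILD is taken as
    right minus left and ABL as their mean (sign conventions assumed)."""
    stims = {}
    for abl, ild in itertools.product(abls, ilds):
        left = abl - ild / 2.0
        right = abl + ild / 2.0
        stims[(abl, ild)] = (left, right)
    return stims

m = binaural_matrix()
print(m[(40, 10)])   # (35.0, 45.0)
```

Sweeping the full matrix, rather than a few fixed configurations, is what lets a study detect interactions that depend jointly on ABL and ILD.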
Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R
Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.
Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier
In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.
Cardon, Garrett; Campbell, Julia; Sharma, Anu
The developing auditory cortex is highly plastic. As such, the cortex is both primed to mature normally and at risk for reorganizing abnormally, depending upon numerous factors that determine central maturation. From a clinical perspective, at least two major components of development can be manipulated: (1) input to the cortex and (2) the timing of cortical input. Children with sensorineural hearing loss (SNHL) and auditory neuropathy spectrum disorder (ANSD) have provided a model of early deprivation of sensory input to the cortex and demonstrated the resulting plasticity and development that can occur upon introduction of stimulation. In this article, we review several fundamental principles of cortical development and plasticity and discuss the clinical applications in children with SNHL and ANSD who receive intervention with hearing aids and/or cochlear implants. American Academy of Audiology.
Schepers, Inga M.; Hipp, Joerg F.; Schneider, Till R.; Roder, Brigitte; Engel, Andreas K.
Many studies have shown that the visual cortex of blind humans is activated in non-visual tasks. However, the electrophysiological signals underlying this cross-modal plasticity are largely unknown. Here, we characterize the neuronal population activity in the visual and auditory cortex of congenitally blind humans and sighted controls in a…
Kajikawa, Yoshinao; Smiley, John F; Schroeder, Charles E
Prior studies have reported "local" field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode they often reflect synaptic activity occurring at distant sites as well. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be "contaminated" by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of the visual FP with cortical depth, suggesting that its generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such "far-field" activity, in addition to, or in the absence of, local synaptic responses. SIGNIFICANCE STATEMENT: Field potentials (FPs) can index neuronal population activity that is not evident in action potentials. However, due to volume conduction, FPs may reflect activity in distant neurons superimposed upon that of neurons close to the recording electrode. This is problematic as the…
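The CSD analysis central to this study is, in its simplest one-dimensional form, the negative second spatial derivative of the laminar potential profile. A minimal sketch (the contact spacing and conductivity below are assumed values, and real pipelines often add spatial smoothing, e.g. Vaknin padding or iCSD methods):

```python
import numpy as np

def csd_1d(lfp, spacing_um=100.0, sigma=0.3):
    """One-dimensional CSD estimate: negative second spatial derivative
    of the laminar field-potential profile, scaled by an assumed tissue
    conductivity sigma (S/m). lfp: array of shape (n_channels, n_times),
    channels ordered by depth at equal spacing."""
    h = spacing_um * 1e-6                            # contact spacing in m
    d2 = lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]          # finite difference
    return -sigma * d2 / h ** 2                      # (n_channels-2, n_times)

# a depth-uniform (volume-conducted, far-field) potential has zero CSD,
# which is exactly why CSD analysis removes far-field contamination
far_field = np.ones((8, 100)) * 5.0
print(np.abs(csd_1d(far_field)).max())   # 0.0
```

This property is the crux of the abstract's argument: an FP component with no laminar curvature contributes nothing to the CSD, so a face response visible in the FP but absent from the CSD is volume-conducted, not locally generated.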
Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael
Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys.
McLaughlin, Susan A; Higgins, Nathan C; Stecker, G Christopher
Interaural level and time differences (ILD and ITD), the primary binaural cues for sound localization in azimuth, are known to modulate the tuned responses of neurons in mammalian auditory cortex (AC). The majority of these neurons respond best to cue values that favor the contralateral ear, such that contralateral bias is evident in the overall population response and thereby expected in population-level functional imaging data. Human neuroimaging studies, however, have not consistently found contralaterally biased binaural response patterns. Here, we used functional magnetic resonance imaging (fMRI) to parametrically measure ILD and ITD tuning in human AC. For ILD, contralateral tuning was observed, using both univariate and multivoxel analyses, in posterior superior temporal gyrus (pSTG) in both hemispheres. Response-ILD functions were U-shaped, revealing responsiveness to both contralateral and—to a lesser degree—ipsilateral ILD values, consistent with rate coding by unequal populations of contralaterally and ipsilaterally tuned neurons. In contrast, for ITD, univariate analyses showed modest contralateral tuning only in left pSTG, characterized by a monotonic response-ITD function. A multivoxel classifier, however, revealed ITD coding in both hemispheres. Although sensitivity to ILD and ITD was distributed in similar AC regions, the differently shaped response functions and different response patterns across hemispheres suggest that basic ILD and ITD processes are not fully integrated in human AC. The results support opponent-channel theories of ILD but not necessarily ITD coding, the latter of which may involve multiple types of representation that differ across hemispheres.
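The U-shaped response-ILD functions described in this abstract are consistent with summing a large contralaterally tuned channel and a smaller ipsilaterally tuned one. A toy sketch of such an opponent-channel rate code (the weights, slope, and threshold values are illustrative assumptions, not fitted parameters from the study):

```python
import numpy as np

def opponent_channel_response(ild_db, w_contra=1.0, w_ipsi=0.4,
                              slope=0.2, theta=10.0):
    """Toy opponent-channel rate code for ILD.

    Two sigmoidal populations, each preferring one extreme of the ILD
    axis, are summed with unequal weights. The result is a U-shaped
    response-ILD function: strongest for contralateral ILDs, weaker but
    clearly non-zero for ipsilateral ILDs, smallest near the midline.
    Positive ild_db = contralateral-favoring. All parameters are
    illustrative.
    """
    ild = np.asarray(ild_db, dtype=float)
    contra = 1.0 / (1.0 + np.exp(-slope * (ild - theta)))  # prefers ILD >> 0
    ipsi = 1.0 / (1.0 + np.exp(slope * (ild + theta)))     # prefers ILD << 0
    return w_contra * contra + w_ipsi * ipsi
```

With unequal weights the population response remains contralaterally biased overall, matching the univariate fMRI finding, while the ipsilateral limb of the U reflects the smaller ipsilaterally tuned population.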
Malone, Brian J; Beitel, Ralph E; Vollmer, Maike; Heiser, Marc A; Schreiner, Christoph E
Amplitude modulations are fundamental features of natural signals, including human speech and nonhuman primate vocalizations. Because natural signals frequently occur in the context of other competing signals, we used a forward-masking paradigm to investigate how the modulation context of a prior signal affects cortical responses to subsequent modulated sounds. Psychophysical "modulation masking," in which the presentation of a modulated "masker" signal elevates the threshold for detecting the modulation of a subsequent stimulus, has been interpreted as evidence of a central modulation filterbank and modeled accordingly. Whether cortical modulation tuning is compatible with such models remains unknown. By recording responses to pairs of sinusoidally amplitude modulated (SAM) tones in the auditory cortex of awake squirrel monkeys, we show that the prior presentation of the SAM masker elicited persistent and tuned suppression of the firing rate to subsequent SAM signals. Population averages of these effects are compatible with adaptation in broadly tuned modulation channels. In contrast, modulation context had little effect on the synchrony of the cortical representation of the second SAM stimuli, and the tuning of such effects did not match that observed for firing rate. Our results suggest that, although the temporal representation of modulated signals is more robust to changes in stimulus context than representations based on average firing rate, this temporal representation is not fully exploited: psychophysical modulation masking more closely mirrors physiological rate suppression, and rate tuning for a given stimulus feature in a given neuron's signal pathway appears sufficient to engender context-sensitive cortical adaptation. Copyright © 2015 the authors.
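A SAM tone of the kind used in such paradigms is simply a carrier multiplied by a raised sinusoidal envelope. A minimal sketch (the carrier frequency, modulation frequency, and duration are placeholder values, not those of the study):

```python
import numpy as np

def sam_tone(fc=1000.0, fm=16.0, depth=1.0, dur=0.5, fs=44100, phase=0.0):
    """Sinusoidally amplitude-modulated (SAM) tone.

    s(t) = [1 + depth * sin(2*pi*fm*t + phase)] * sin(2*pi*fc*t)

    fc: carrier frequency (Hz), fm: modulation frequency (Hz),
    depth: modulation depth in [0, 1]. Peak-normalised to avoid clipping.
    All default values are illustrative.
    """
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t + phase)
    s = envelope * np.sin(2 * np.pi * fc * t)
    return s / np.max(np.abs(s))
```

In a forward-masking design of the sort described, a masker SAM tone (one fm) would be followed after a brief gap by a probe SAM tone whose fm is varied to map the tuning of the suppression.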
Dragicevic, Constantino D; Aedo, Cristian; León, Alex; Bowen, Macarena; Jara, Natalia; Terreros, Gonzalo; Robles, Luis; Delano, Paul H
In mammals, efferent projections to the cochlear receptor are constituted by olivocochlear (OC) fibers that originate in the superior olivary complex. Medial and lateral OC neurons make synapses with outer hair cells and with auditory nerve fibers, respectively. In addition to the OC system, there are also descending projections from the auditory cortex that are directed towards the thalamus, inferior colliculus, cochlear nucleus, and superior olivary complex. Olivocochlear function can be assessed by measuring a brainstem reflex mediated by auditory nerve fibers, cochlear nucleus neurons, and OC fibers. Although it is known that the OC reflex is activated by contralateral acoustic stimulation and produces a suppression of cochlear responses, the influence of cortical descending pathways in the OC reflex is largely unknown. Here, we used auditory cortex electrical microstimulation in chinchillas to study a possible cortical modulation of cochlear and auditory nerve responses to tones in the absence and presence of contralateral noise. We found that cortical microstimulation produces two different peripheral modulations: (i) changes in cochlear sensitivity evidenced by amplitude modulation of cochlear microphonics and auditory nerve compound action potentials and (ii) enhancement or suppression of the OC reflex strength as measured by auditory nerve responses, which depended on the intersubject variability of the OC reflex. Moreover, both corticofugal effects were not correlated, suggesting the presence of two functionally different efferent pathways. These results demonstrate that auditory cortex electrical microstimulation independently modulates the OC reflex strength and cochlear sensitivity.
Background: The mammalian auditory cortex can be subdivided into various fields characterized by neurophysiological and neuroarchitectural properties and by connections with different nuclei of the thalamus. Besides the primary auditory cortex, echolocating bats have cortical fields for the processing of temporal and spectral features of the echolocation pulses. This paper reports on the location, neuroarchitecture and basic functional organization of the auditory cortex of the microchiropteran bat Phyllostomus discolor (family: Phyllostomidae). Results: The auditory cortical area of P. discolor is located at parieto-temporal portions of the neocortex. It covers a rostro-caudal range of about 4800 μm and a medio-lateral distance of about 7000 μm on the flattened cortical surface. The auditory cortices of ten adult P. discolor were electrophysiologically mapped in detail. Responses of 849 units (single neurons and neuronal clusters of up to three neurons) to pure tone stimulation were recorded extracellularly. Cortical units were characterized and classified depending on their response properties such as best frequency, auditory threshold, first spike latency, response duration, width and shape of the frequency response area and binaural interactions. Based on neurophysiological and neuroanatomical criteria, the auditory cortex of P. discolor could be subdivided into anterior and posterior ventral fields and anterior and posterior dorsal fields. The representation of response properties within the different auditory cortical fields was analyzed in detail. The two ventral fields were distinguished by their tonotopic organization with opposing frequency gradients. The dorsal cortical fields were not tonotopically organized but contained neurons that were responsive to high frequencies only. Conclusion: The auditory cortex of P. discolor resembles the auditory cortex of other phyllostomid bats in size and basic functional organization. The
David L Woods
BACKGROUND: While human auditory cortex is known to contain tonotopically organized auditory cortical fields (ACFs), little is known about how processing in these fields is modulated by other acoustic features or by attention. METHODOLOGY/PRINCIPAL FINDINGS: We used functional magnetic resonance imaging (fMRI) and population-based cortical surface analysis to characterize the tonotopic organization of human auditory cortex and analyze the influence of tone intensity, ear of delivery, scanner background noise, and intermodal selective attention on auditory cortex activations. Medial auditory cortex surrounding Heschl's gyrus showed large sensory (unattended) activations with two mirror-symmetric tonotopic fields similar to those observed in non-human primates. Sensory responses in medial regions had symmetrical distributions with respect to the left and right hemispheres, were enlarged for tones of increased intensity, and were enhanced when sparse image acquisition reduced scanner acoustic noise. Spatial distribution analysis suggested that changes in tone intensity shifted activation within isofrequency bands. Activations to monaural tones were enhanced over the hemisphere contralateral to stimulation, where they produced activations similar to those produced by binaural sounds. Lateral regions of auditory cortex showed small sensory responses that were larger in the right than left hemisphere, lacked tonotopic organization, and were uninfluenced by acoustic parameters. Sensory responses in both medial and lateral auditory cortex decreased in magnitude throughout stimulus blocks. Attention-related modulations (ARMs) were larger in lateral than medial regions of auditory cortex and appeared to arise primarily in belt and parabelt auditory fields. ARMs lacked tonotopic organization, were unaffected by acoustic parameters, and had distributions that were distinct from those of sensory responses. Unlike the gradual adaptation seen for sensory responses
M. Alex Meredith
Numerous investigations of cortical crossmodal plasticity, most often in congenital or early-deaf subjects, have indicated that secondary auditory cortical areas reorganize to exhibit visual responsiveness while the core auditory regions are largely spared. However, a recent study of adult-deafened ferrets demonstrated that core auditory cortex was reorganized by the somatosensory modality. Because adult animals have matured beyond their critical period of sensory development and plasticity, it was not known if adult-deafening and early-deafening would generate the same crossmodal results. The present study used young, ototoxically-lesioned ferrets (n=3) that, after maturation (avg. = 173 days old), showed significant hearing deficits (avg. threshold = 72 dB SPL). Recordings from single-units (n=132) in core auditory cortex showed that 72% were activated by somatosensory stimulation (compared to 1% in hearing controls). In addition, tracer injection into early hearing-impaired core auditory cortex labeled essentially the same auditory cortical and thalamic projection sources as seen for injections in the hearing controls, indicating that the functional reorganization was not the result of new or latent projections to the cortex. These data, along with similar observations from adult-deafened and adult hearing-impaired animals, support the recently proposed brainstem theory for crossmodal plasticity induced by hearing loss.
de Villers-Sidani, Etienne; Merzenich, Michael M
The rodent auditory cortex has provided a particularly useful model for studying cortical plasticity phenomenology and mechanisms, both in infant and in adult animal models. Much of our initial understanding of the neurological processes underlying learning-induced changes in the cortex stems from the early exploitation of this model. More recent studies have provided a rich and elaborate demonstration of the "rules" governing representational plasticity induced during the critical period (CP) and in the longer post-CP "adult" plasticity epoch. These studies have also contributed importantly to the application of these "rules" to the development of practical training tools designed to improve the auditory, language, and reading capacities of both children with developmental impairments and adults with acquired impairments in auditory/aural speech and related cognitive domains. Using age as a connecting thread, we review recent studies performed in the rat primary auditory cortex (A1) that have provided further insight into the role of sensory experience in shaping auditory signal representations, and into their possible role in shaping the machinery that regulates "adult" plasticity in A1. With this background, the role of auditory training in the remediation of auditory processing impairments is briefly discussed.
BACKGROUND: Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. METHODOLOGY/PRINCIPAL FINDINGS: Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. CONCLUSIONS/SIGNIFICANCE: There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
In nonhuman primates a scheme for the organisation of the auditory cortex is frequently used to localise auditory processes. The scheme allows a common basis for comparison of functional organisation across nonhuman primate species. However, although a body of functional and structural data in nonhuman primates supports an accepted scheme of nearly a dozen neighbouring functional areas, it remains unclear whether this scheme can be directly applied to humans. Attempts to expand the scheme of auditory cortical fields to humans have been severely hampered by a recent controversy about the organisation of tonotopic maps in humans, centred on two models with radically different organisation. We point out observations that reconcile the previous models and suggest a distinct model in which the human cortical organisation is much more like that of other primates. This unified framework allows a more robust and detailed comparison of auditory cortex organisation across primate species, including humans.
Mizrahi, Adi; Shalev, Amos; Nelken, Israel
The auditory system drives behavior using information extracted from sounds. Early in the auditory hierarchy, circuits are highly specialized for detecting basic sound features. However, already at the level of the auditory cortex the functional organization of the circuits and the underlying coding principles become different. Here, we review some recent progress in our understanding of single neuron and population coding in primary auditory cortex, focusing on natural sounds. We discuss possible mechanisms explaining why single neuron responses to simple sounds cannot predict responses to natural stimuli. We describe recent work suggesting that structural features like local subnetworks rather than smoothly mapped tonotopy are essential components of population coding. Finally, we suggest a synthesis of how single neurons and subnetworks may be involved in coding natural sounds.
Proverbio, Alice Mado; D'Aniello, Guido Edoardo; Adorni, Roberta; Zani, Alberto
As the makers of silent movies knew well, it is not necessary to provide an actual auditory stimulus to activate the sensation of sounds typically associated with what we are viewing. Thus, you could almost hear the neigh of Rodolfo Valentino's horse, even though the film was mute. Evidence is provided that the mere sight of a photograph associated with a sound can activate the associative auditory cortex. High-density ERPs were recorded in 15 participants while they viewed hundreds of perceptually matched images that were associated (or not) with a given sound. Sound stimuli were discriminated from non-sound stimuli as early as 110 ms. SwLORETA reconstructions showed common activation of ventral stream areas for both types of stimuli and of the associative temporal cortex, at the earliest stage, only for sound stimuli. The primary auditory cortex (BA41) was also activated by sound images after approximately 200 ms.
Xu, Xinxiu; Yu, Xiongjie; He, Jufang; Nelken, Israel
The ability to detect unexpected or deviant events in natural scenes is critical for survival. In the auditory system, neurons from the midbrain to cortex adapt quickly to repeated stimuli but this adaptation does not fully generalize to other rare stimuli, a phenomenon called stimulus-specific adaptation (SSA). Most studies of SSA were conducted with pure tones of different frequencies, and it is by now well-established that SSA to tone frequency is strong and robust in auditory cortex. Here we tested SSA in the auditory cortex to the ear of stimulation using broadband noise. We show that cortical neurons adapt specifically to the ear of stimulation, and that the contrast between the responses to stimulation of the same ear when rare and when common depends on the binaural interaction class of the neurons.
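SSA of this kind is typically quantified by contrasting responses to the same stimulus when it is rare (deviant) versus common (standard) in an oddball sequence, e.g. with a CSI-style index over the two stimuli (here, the two ears of stimulation). A minimal sketch; the function name and the use of mean spike counts are illustrative conventions, not necessarily the paper's exact metric:

```python
def csi(dev_a, dev_b, std_a, std_b):
    """Common-contrast SSA index over two stimuli A and B.

    dev_a/dev_b: mean spike counts to A/B when presented as the rare
    (deviant) stimulus; std_a/std_b: the same stimuli when common
    (standard). Index is +1 if the neuron responds only to rare
    presentations, 0 if adaptation is not stimulus-specific, and
    negative if common presentations evoke stronger responses.
    """
    dev = dev_a + dev_b
    std = std_a + std_b
    return (dev - std) / (dev + std)
```

Computing this index separately for neurons of different binaural interaction classes would expose the class dependence reported in the abstract.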
Jill B Firszt
Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single-event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type
Roberts Larry E
Background: Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under auditory focused attention by means of magnetoencephalography (MEG). Results: In total, we used identical auditory stimuli between conditions, but presented them in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing compared to the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion: The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.
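Band-eliminated (notched) noise of the kind described can be constructed in the frequency domain: take the FFT of white noise, zero the bins inside the stop-band, and invert. A minimal sketch (the centre frequency and stop-band width are illustrative assumptions, not the study's values):

```python
import numpy as np

def band_eliminated_noise(n=44100, fs=44100.0, f_center=1000.0,
                          stop_bw=200.0, seed=0):
    """Gaussian white noise with a spectral notch around f_center.

    n: number of samples; fs: sample rate (Hz); stop_bw: total width of
    the eliminated band (Hz). Built by zeroing FFT bins in the stop-band
    and inverse-transforming. All parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    notch = np.abs(freqs - f_center) < stop_bw / 2.0
    spectrum[notch] = 0.0  # eliminate the stop-band
    return np.fft.irfft(spectrum, n=n)
```

A pure tone at f_center presented in this noise falls entirely inside the silent notch, so narrowing the stop-band probes how sharply the listener's (or population's) frequency tuning resolves the tone from the flanking noise.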
Paul Fredrick Sowman
Acoustic stimuli can cause a transient increase in the excitability of the motor cortex. The current study leverages this phenomenon to develop a method for testing the integrity of auditorimotor integration and the capacity for auditorimotor plasticity. We demonstrate that appropriately timed transcranial magnetic stimulation (TMS) of the hand area, paired with auditorily mediated excitation of the motor cortex, induces an enhancement of motor cortex excitability that lasts beyond the time of stimulation. This result demonstrates for the first time that paired associative stimulation (PAS)-induced plasticity within the motor cortex is applicable with auditory stimuli. We propose that the method developed here might provide a useful tool for future studies that measure auditory-motor connectivity in communication disorders.
Fitzpatrick, Douglas C; Roberts, Jason M; Kuwada, Shigeyuki; Kim, Duck O; Filipovic, Blagoje
Processing dynamic changes in the stimulus stream is a major task for sensory systems. In the auditory system, an increase in the temporal integration window between the inferior colliculus (IC) and auditory cortex is well known for monaural signals such as amplitude modulation, but a similar increase with binaural signals has not been demonstrated. To examine the limits of binaural temporal processing at these brain levels, we used the binaural beat stimulus, which causes a fluctuating interaural phase difference, while recording from neurons in the unanesthetized rabbit. We found that the cutoff frequency for neural synchronization to the binaural beat frequency (BBF) decreased between the IC and auditory cortex, and that this decrease was associated with an increase in the group delay. These features indicate that there is an increased temporal integration window in the cortex compared to the IC, complementing that seen with monaural signals. Comparable measurements of responses to amplitude modulation showed that the monaural and binaural temporal integration windows at the cortical level were quantitatively as well as qualitatively similar, suggesting that intrinsic membrane properties and afferent synapses to the cortical neurons govern the dynamic processing. The upper limits of synchronization to the BBF and the band-pass tuning characteristics of cortical neurons are a close match to human psychophysics.
Full Text Available Evaluating series of complex sounds like those in speech and music requires sequential comparisons to extract task-relevant relations between subsequent sounds. With the present functional magnetic resonance imaging (fMRI study, we investigated whether sequential comparison of a specific acoustic feature within pairs of tones leads to a change in lateralized processing in the auditory cortex of humans. For this we used the active categorization of the direction (up versus down of slow frequency modulated (FM tones. Several studies suggest that this task is mainly processed in the right auditory cortex. These studies, however, tested only the categorization of the FM direction of each individual tone. In the present study we ask the question whether the right lateralized processing changes when, in addition, the FM direction is compared within pairs of successive tones. For this we use an experimental approach involving contralateral noise presentation in order to explore the contributions made by the left and right auditory cortex in the completion of the auditory task. This method has already been applied to confirm the right-lateralized processing of the FM direction of individual tones. In the present study, the subjects were required to perform, in addition, a sequential comparison of the FM-direction in pairs of tones. The results suggest a division of labor between the two hemispheres such that the FM direction of each individual tone is mainly processed in the right auditory cortex whereas the sequential comparison of this feature between tones in a pair is probably performed in the left auditory cortex.
The auditory efferent system is a complex network of descending pathways, which mainly originate in the primary auditory cortex and are directed to several auditory subcortical nuclei. These descending pathways are connected to olivocochlear neurons, which in turn make synapses with auditory nerve neurons and outer hair cells (OHC) of the cochlea. The olivocochlear function can be studied using contralateral acoustic stimulation, which suppresses auditory nerve and cochlear responses. In the present work, we tested the proposal that the corticofugal effects that modulate the strength of the olivocochlear reflex on auditory nerve responses are produced through cholinergic synapses between medial olivocochlear (MOC) neurons and OHCs via alpha-9/10 nicotinic receptors. We used wild type (WT) and alpha-9 nicotinic receptor knock-out (KO) mice, which lack cholinergic transmission between MOC neurons and OHC, to record auditory cortex evoked potentials and to evaluate the consequences of auditory cortex electrical microstimulation in the effects produced by contralateral acoustic stimulation on auditory brainstem responses (ABR). Auditory cortex evoked potentials at 15 kHz were similar in WT and KO mice. We found that auditory cortex microstimulation produces an enhancement of contralateral noise suppression of ABR waves I and III in WT mice but not in KO mice. On the other hand, corticofugal modulations of wave V amplitudes were significant in both genotypes. These findings show that the corticofugal modulation of contralateral acoustic suppressions of auditory nerve (ABR wave I) and superior olivary complex (ABR wave III) responses are mediated through MOC synapses.
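Olivocochlear suppression of this kind is commonly expressed as the drop, in dB, of an ABR wave (or compound action potential) amplitude when contralateral noise is added. A minimal sketch of such a metric (this is a generic convention, not necessarily the paper's exact measure):

```python
import numpy as np

def oc_suppression_db(amp_quiet, amp_contra_noise):
    """Olivocochlear suppression strength in dB.

    amp_quiet: ABR wave (or CAP) amplitude without contralateral noise;
    amp_contra_noise: the same measure with contralateral noise on.
    Positive values indicate suppression; corticofugal enhancement of
    the OC reflex would show up as a larger positive value.
    """
    amp_quiet = np.asarray(amp_quiet, dtype=float)
    amp_contra_noise = np.asarray(amp_contra_noise, dtype=float)
    return 20.0 * np.log10(amp_quiet / amp_contra_noise)
```

Computing this per wave (I, III, V) before and during cortical microstimulation, in WT versus KO animals, would reproduce the comparison structure of the experiment.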
Kyweriga, Michael; Stewart, Whitney; Cahill, Carolyn
The interaural level difference (ILD) is a sound localization cue that is extensively processed in the auditory brain stem and midbrain and is also represented in the auditory cortex. Here, we asked whether neurons in the auditory cortex passively inherit their ILD tuning from subcortical sources or whether their spiking preferences were actively shaped by local inhibition. If inherited, the ILD selectivity of spiking output should match that of excitatory synaptic input. If shaped by local inhibition, by contrast, excitation should be more broadly tuned than spiking output with inhibition suppressing spiking for nonpreferred stimuli. To distinguish between these two processing strategies, we compared spiking responses with excitation and inhibition in the same neurons across a range of ILDs and average binaural sound levels. We found that cells preferring contralateral ILDs (often called EI cells) followed the inheritance strategy. In contrast, cells that were unresponsive to monaural sounds but responded predominantly to near-zero ILDs (PB cells) instead showed evidence of the local processing strategy. These PB cells received excitatory inputs that were similar to those received by the EI cells. However, contralateral monaural sounds and ILDs >0 dB elicited strong inhibition, quenching the spiking output. These results suggest that in the rat auditory cortex, EI cells do not utilize inhibition to shape ILD sensitivity, whereas PB cells do. We conclude that an auditory cortical circuit computes sensitivity for near-zero ILDs.
Curio, G; Neuloh, G; Numminen, J; Jousmäki, V; Hari, R
The voice we most often hear is our own, and proper interaction between speaking and hearing is essential for both acquisition and performance of spoken language. Disturbed audiovocal interactions have been implicated in aphasia, stuttering, and schizophrenic voice hallucinations, but paradigms for a noninvasive assessment of auditory self-monitoring of speaking and its possible dysfunctions are rare. Using magnetoencephalography we show here that self-uttered syllables transiently activate the speaker's auditory cortex around 100 ms after voice onset. These phasic responses were delayed by 11 ms in the speech-dominant left hemisphere relative to the right, whereas during listening to a replay of the same utterances the response latencies were symmetric. Moreover, the auditory cortices did not react to rare vowel changes interspersed randomly within a series of repetitively spoken vowels, in contrast to regular change-related responses evoked 100-200 ms after replayed rare vowels. Thus, speaking primes the human auditory cortex at a millisecond time scale, dampening and delaying reactions to self-produced "expected" sounds, more prominently in the speech-dominant hemisphere. Such motor-to-sensory priming of early auditory cortex responses during voicing constitutes one element of speech self-monitoring that could be compromised in central speech disorders.
Skipper, Jeremy I
What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded...
Profant, Oliver; Burianová, Jana; Syka, Josef
Vol. 296, February (2013), pp. 51-59. ISSN 0378-5955. R&D Projects: GA ČR (CZ) GAP303/12/1347; GA ČR (CZ) GBP304/12/G069. Institutional support: RVO:68378041. Keywords: auditory cortex * frequency representation * axon terminals. Subject RIV: FH - Neurology. Impact factor: 2.848, year: 2013
A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two sets of 480 contours were used to search for electrophysiological evidence of loudness processing in whole-brain recordings of electro- and magneto-encephalographic (EMEG) activity, recorded while subjects listened to the words. The technique identified a bilateral sequence of loudness processes, predicted by the more realistic loudness-sones model, that begin in auditory cortex at ~80 ms and subsequently reappear, tracking progressively down the superior temporal sulcus (STS) at lags from 230 to 330 ms. The technique was then extended to search for regions sensitive to the fundamental frequency (F0) of the voiced parts of the speech. It identified a bilateral F0 process in auditory cortex at a lag of ~90 ms, which was not followed by activity in STS. The results suggest that loudness information is being used to guide the analysis of the speech stream as it proceeds beyond auditory cortex down STS towards the temporal pole.
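The core of the search technique above is finding the neural lag at which a model-predicted loudness contour best matches the recorded trace. A minimal sketch of that lag search, using toy contours and a plain dot-product match score rather than the study's actual statistics:

```python
def best_lag(predictor, signal, max_lag):
    """Return the lag (in samples) at which a model-predicted contour best
    matches a recorded trace, by maximizing the overlap dot product."""
    best, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        n = len(predictor) - lag
        score = sum(p * s for p, s in zip(predictor[:n], signal[lag:lag + n]))
        if score > best_score:
            best, best_score = lag, score
    return best

loudness = [0, 1, 3, 6, 4, 2, 1, 0, 0, 0]   # toy instantaneous-loudness contour
neural   = [0, 0, 0, 0, 0, 1, 3, 6, 4, 2]   # same shape delayed by 4 samples
print(best_lag(loudness, neural, 6))        # -> 4
```

In the study the analogous lags (~80 ms in auditory cortex, 230-330 ms down STS) are expressed in time rather than samples; here the sampling rate is left abstract.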
Wolak, Tomasz; Cieśla, Katarzyna; Lorens, Artur; Kochanek, Krzysztof; Lewandowska, Monika; Rusiniak, Mateusz; Pluta, Agnieszka; Wójcik, Joanna; Skarżyński, Henryk
Although the tonotopic organisation of the human primary auditory cortex (PAC) has already been studied, the question of how its responses are affected in sensorineural hearing loss remains open. Twenty-six patients (aged 38.1 ± 9.1 years; 12 men) with symmetrical sloping sensorineural hearing loss (SNHL) and 32 age- and gender-matched controls (NH) participated in an fMRI study using a sparse protocol. The stimuli were binaural 8-s complex tones with centre frequencies (CF) of 400, 800, 1600, 3200, or 6400 Hz, presented at 80 dB(C). In NH, responses to all frequency ranges were found in bilateral auditory cortices. The outcomes of a winner-map approach, showing the relative arrangement of active frequency-specific areas, were in line with the existing literature and revealed a V-shaped high-frequency gradient surrounding areas that responded to low frequencies in the auditory cortex. In SNHL, frequency-specific auditory cortex responses were observed only for sounds with CFs from 400 Hz to 1600 Hz, due to the severe or profound hearing loss in higher frequency ranges. Using a stringent statistical threshold (p < 0.05, FWE), significant differences between NH and SNHL were only revealed for mid- and high-frequency sounds. At a more lenient statistical threshold (p < 0.001, FDRc), however, the size of activation induced by the 400 Hz CF stimulus in PAC was found to be statistically larger in patients with a prelingual, as compared to a postlingual, onset of hearing loss. In addition, this low-frequency range was more extensively represented in the auditory cortex when outcomes obtained in all patients were contrasted with those revealed in normal-hearing individuals (although statistically significant only for the secondary auditory cortex). The outcomes of the study suggest preserved patterns of large-scale tonotopic organisation in SNHL which can be further refined following auditory experience, especially when the hearing loss occurs prelingually. SNHL can induce both
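A winner-map analysis of the kind used above assigns each voxel the stimulus frequency that evoked its strongest response, yielding the relative arrangement of frequency-specific areas. A minimal sketch with hypothetical activation values (real pipelines would first threshold statistically):

```python
def winner_map(activation_maps):
    """activation_maps: dict mapping stimulus frequency (Hz) -> flat list
    of voxel activation values. Returns, per voxel, the frequency that
    evoked the strongest response (None where nothing was active)."""
    freqs = sorted(activation_maps)
    n_vox = len(activation_maps[freqs[0]])
    winners = []
    for v in range(n_vox):
        best_f = max(freqs, key=lambda f: activation_maps[f][v])
        winners.append(best_f if activation_maps[best_f][v] > 0 else None)
    return winners

# Three toy voxels: low-frequency-preferring, mid-preferring, silent.
maps = {
    400:  [2.0, 0.1, 0.0],
    1600: [0.5, 3.0, 0.0],
    6400: [0.2, 0.4, 0.0],
}
print(winner_map(maps))   # -> [400, 1600, None]
```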
Crystal T Engineer
Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA-exposed rats and document the effect of extensive speech training on auditory cortex responses. VPA-exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field responses compared to untrained VPA-exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes.
Razak, Khaleel A; Fuzessery, Zoltan M
A consistent organizational feature of auditory cortex is a clustered representation of binaural properties. Here we address two questions. What is the intrinsic organization of binaural clusters and to what extent does intracortical processing contribute to binaural representation. We address these issues in the auditory cortex of the pallid bat. The pallid bat listens to prey-generated noise transients to localize and hunt terrestrial prey. As in other species studied, binaural clusters are present in the auditory cortex of the pallid bat. One cluster contains neurons that require binaural stimulation to be maximally excited, and are commonly termed predominantly binaural (PB) neurons. These neurons do not respond to monaural stimulation of either ear but show a peaked sensitivity to interaural intensity differences (IID) centered near 0 dB IID. We show that the peak IID varies systematically within this cluster. The peak IID is also correlated with the best frequency (BF) of neurons within this cluster. In addition, the IID selectivity of PB neurons is shaped by intracortical GABAergic input. Iontophoresis of GABA(A) receptor antagonists on PB neurons converts a majority of them to binaurally inhibited (EI) neurons that respond best to sounds favoring the contralateral ear. These data indicate that the cortex does not simply inherit binaural properties from lower levels but instead sharpens them locally through intracortical inhibition. The IID selectivity of the PB cluster indicates that the pallid bat cortex contains an increased representation of the frontal space that may underlie increased localization accuracy in this region.
Zatorre, Robert J; Delhommeau, Karine; Zarate, Jean Mary
We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured covariation in blood oxygenation signal to increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex to the trained stimulus feature. Reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, is associated with learning, in accord with models of auditory cortex function, and with data from other modalities.
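The adaptive staircase used for training adjusts difficulty from the listener's own responses; a common variant is the two-down/one-up rule, which converges near 70.7% correct. The sketch below simulates it with a deterministic toy observer (the threshold, step size, and units in cents are hypothetical, not the study's parameters):

```python
def staircase(threshold, start, step, n_trials):
    """Two-down/one-up adaptive staircase. The toy observer answers
    correctly iff the pitch interval is at or above its true
    discrimination threshold (all values in cents, hypothetical)."""
    level, correct_streak, track = start, 0, []
    for _ in range(n_trials):
        track.append(level)
        if level >= threshold:           # observer gets the trial right
            correct_streak += 1
            if correct_streak == 2:      # two in a row -> make it harder
                level = max(level - step, step)
                correct_streak = 0
        else:                            # wrong -> make it easier
            level += step
            correct_streak = 0
    return track

track = staircase(threshold=25, start=100, step=10, n_trials=12)
print(track)   # -> [100, 100, 90, 90, 80, 80, 70, 70, 60, 60, 50, 50]
```

With a real (noisy) observer the track would eventually oscillate around threshold; reversal points are then averaged to estimate the discrimination limen.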
Patrik Alexander Wikman
The neuroanatomical pathways interconnecting auditory and motor cortices play a key role in current models of human auditory cortex (AC). Evidently, auditory-motor interaction is important in speech and music production, but the significance of these cortical pathways in other auditory processing is not well known. We investigated the general effects of motor responding on AC activations to sounds during auditory and visual tasks. During all task blocks, subjects detected targets in the designated modality, reported the relative number of targets at the end of the block, and ignored the stimuli presented in the opposite modality. In each block, they were also instructed to respond to targets either using a precision grip, a power grip, or to give no overt target responses. We found that motor responding strongly modulated AC activations. First, during both visual and auditory tasks, activations in widespread regions of AC decreased when subjects made precision and power grip responses to targets. Second, activations in AC were modulated by grip type during the auditory but not during the visual task. Further, the motor effects were distinct from the strong attention-related modulations in AC. These results are consistent with the idea that operations in AC are shaped by its connections with motor cortical regions.
The modulation of brain activity as a function of auditory location was investigated using electroencephalography in combination with standardized low-resolution brain electromagnetic tomography. Auditory stimuli were presented at various positions under anechoic conditions in free-field space, thus providing the complete set of natural spatial cues. Variation of electrical activity in cortical areas depending on sound location was analyzed by contrasts between sound locations at the time of the N1 and P2 responses of the auditory evoked potential. A clear-cut double dissociation with respect to the cortical locations and the points in time was found, indicating spatial processing (1) in the primary auditory cortex and posterodorsal auditory cortical pathway at the time of the N1, and (2) in the anteroventral pathway regions about 100 ms later at the time of the P2. Thus, it seems as if both auditory pathways are involved in spatial analysis, but at different points in time. It is possible that the late processing in the anteroventral auditory network reflected the sharing of this region by the analysis of object-feature information and spectral localization cues, or even the integration of spatial and non-spatial sound features.
Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet
Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech. © The Author 2014. Published by Oxford University Press. All rights reserved.
Kayser, Christoph; Wilson, Caroline; Safaai, Houman; Sakata, Shuzo; Panzeri, Stefano
The phase of low-frequency network activity in the auditory cortex captures changes in neural excitability, entrains to the temporal structure of natural sounds, and correlates with the perceptual performance in acoustic tasks. Although these observations suggest a causal link between network rhythms and perception, it remains unknown how precisely they affect the processes by which neural populations encode sounds. We addressed this question by analyzing neural responses in the auditory cortex of anesthetized rats using stimulus-response models. These models included a parametric dependence on the phase of local field potential rhythms in both stimulus-unrelated background activity and the stimulus-response transfer function. We found that phase-dependent models better reproduced the observed responses than static models, during both stimulation with a series of natural sounds and epochs of silence. This was attributable to two factors: (1) phase-dependent variations in background firing (most prominent for delta; 1-4 Hz); and (2) modulations of response gain that rhythmically amplify and attenuate the responses at specific phases of the rhythm (prominent for frequencies between 2 and 12 Hz). These results provide a quantitative characterization of how slow auditory cortical rhythms shape sound encoding and suggest a differential contribution of network activity at different timescales. In addition, they highlight a putative mechanism that may implement the selective amplification of appropriately timed sound tokens relative to the phase of rhythmic auditory cortex activity. Copyright © 2015 Kayser et al.
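The two factors identified above can be captured in a toy rate model in which both the background firing and the stimulus gain depend on the instantaneous phase of a slow LFP rhythm. The sketch below is a schematic illustration with hypothetical parameters, not the study's fitted model:

```python
import math

def phase_dependent_rate(drive, phase, b0=5.0, b1=2.0, g0=1.0, g1=0.5):
    """Toy phase-dependent stimulus-response model: both the background
    rate and the stimulus gain are modulated sinusoidally by the phase
    of a slow LFP rhythm (all parameters hypothetical)."""
    background = b0 + b1 * math.cos(phase)   # factor 1: phase-dependent background
    gain = g0 + g1 * math.cos(phase)         # factor 2: phase-dependent gain
    return background + gain * drive

# The same stimulus drive evokes very different rates at opposite phases:
print(phase_dependent_rate(10.0, 0.0))       # preferred phase -> 22.0
print(phase_dependent_rate(10.0, math.pi))   # opposite phase  -> 8.0
```

Setting b1 = g1 = 0 recovers a static model; the paper's finding is that the phase-dependent variant reproduces observed responses better, during both sound stimulation and silence.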
Feng, Lei; Wang, Xiaoqin
Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music.
Razak, Khaleel A; Yarrow, Stuart; Brewton, Dustin
The auditory cortex is necessary for sound localization. The mechanisms that shape bicoordinate spatial representation in the auditory cortex remain unclear. Here, we addressed this issue by quantifying spatial receptive fields (SRFs) in two functionally distinct cortical regions in the pallid bat. The pallid bat uses echolocation for obstacle avoidance and listens to prey-generated noise to localize prey. Its cortex contains two segregated regions of response selectivity that serve echolocation and localization of prey-generated noise. The main aim of this study was to compare 2D SRFs between neurons in the noise-selective region (NSR) and the echolocation region [frequency-modulated sweep-selective region (FMSR)]. The data reveal the following major differences between these two regions: (1) compared with NSR neurons, SRF properties of FMSR neurons were more strongly dependent on sound level; (2) as a population, NSR neurons represent a broad region of contralateral space, while FMSR selectivity was focused near the midline at sound levels near threshold and expanded considerably with increasing sound levels; and (3) the SRF size and centroid elevation were correlated with the characteristic frequency in the NSR, but not the FMSR. These data suggest different mechanisms of sound localization for two different behaviors. Previously, we reported that azimuth is represented by predictable changes in the extent of activated cortex. The present data indicate how elevation constrains this activity pattern. These data suggest a novel model for bicoordinate spatial representation that is based on the extent of activated cortex resulting from the overlap of binaural and tonotopic maps. Unlike the visual and somatosensory systems, spatial information is not directly represented at the sensory receptor epithelium in the auditory system. Spatial locations are computed by integrating neural binaural properties and frequency-dependent pinna filtering, providing a useful model
Adam, Ruth; Noppeney, Uta
Objects in our natural environment generate signals in multiple sensory modalities. This fMRI study investigated the influence of prior task-irrelevant auditory information on visually-evoked category-selective activations in the ventral occipito-temporal cortex. Subjects categorized pictures as landmarks or animal faces, while ignoring the preceding congruent or incongruent sound. Behaviorally, subjects responded slower to incongruent than congruent stimuli. At the neural level, the lateral and medial prefrontal cortices showed increased activations for incongruent relative to congruent stimuli consistent with their role in response selection. In contrast, the parahippocampal gyri combined visual and auditory information additively: activation was greater for visual landmarks than animal faces and landmark-related sounds than animal vocalizations resulting in increased parahippocampal selectivity for congruent audiovisual landmarks. Effective connectivity analyses showed that this amplification of visual landmark-selectivity was mediated by increased negative coupling of the parahippocampal gyrus with the superior temporal sulcus for congruent stimuli. Thus, task-irrelevant auditory information influences visual object categorization at two stages. In the ventral occipito-temporal cortex auditory and visual category information are combined additively to sharpen visual category-selective responses. In the left inferior frontal sulcus, as indexed by a significant incongruency effect, visual and auditory category information are integrated interactively for response selection. Copyright 2010 Elsevier Inc. All rights reserved.
Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David
Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
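One of the standard ways to quantify how well auditory cortex "tracks" a speaker, as in the study above, is to correlate the neural signal with the speech temporal envelope. A minimal sketch (rectify-and-smooth envelope plus Pearson correlation; the signals and window length are toy values, and real analyses would use the Hilbert transform and band-limited MEG data):

```python
def envelope(signal, win):
    """Crude amplitude envelope: rectify, then moving-average smooth."""
    rect = [abs(x) for x in signal]
    return [sum(rect[max(0, i - win + 1):i + 1]) / min(win, i + 1)
            for i in range(len(rect))]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

speech = [0, 1, -1, 2, -2, 1, 0, 0, 3, -3, 2, 0]
env = envelope(speech, win=3)
# A neural trace that is a linear transform of the envelope correlates
# perfectly with it; attentional modulation would raise this tracking
# score for the attended speaker relative to the ignored one.
tracking = pearson(env, [e * 0.8 + 0.1 for e in env])
print(round(tracking, 3))   # -> 1.0
```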
Herrmann, Björn; Maess, Burkhard; Hahne, Anja; Schröger, Erich; Friederici, Angela D
Processing syntax is believed to be a higher cognitive function involving cortical regions outside sensory cortices. In particular, previous studies revealed that early syntactic processes at around 100-200 ms affect brain activations in anterior regions of the superior temporal gyrus (STG), while independent studies showed that pure auditory perceptual processing is related to sensory cortex activations. However, syntax-related modulations of sensory cortices were reported recently, thereby adding diverging findings to the previous studies. The goal of the present magnetoencephalography study was to localize the cortical regions underlying early syntactic processes and those underlying perceptual processes using a within-subject design. Sentences varying the factors syntax (correct vs. incorrect) and auditory space (standard vs. change of interaural time difference (ITD)) were auditorily presented. Both syntactic and auditory spatial anomalies led to very early activations (40-90 ms) in the STG. Around 135 ms after violation onset, differential effects were observed for syntax and auditory space, with syntactically incorrect sentences leading to activations in the anterior STG, whereas ITD changes elicited activations more posterior in the STG. Furthermore, our observations strongly indicate that the anterior and the posterior STG are activated simultaneously when a double violation is encountered. Thus, the present findings provide evidence of a dissociation of speech-related processes in the anterior STG and the processing of auditory spatial information in the posterior STG, compatible with the view of different processing streams in the temporal cortex. Copyright © 2011 Elsevier Inc. All rights reserved.
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties, in order to improve perception in a given context. Learning-induced, pre-attentive, map plasticity has been also studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistive than pre-conditioning. Yet, the conditioned group showed a reduced spread of activation to each tone with noise, but not with silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning deteriorated the accuracy of tone-frequency representations in noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely perceived as the CS in a specific context, where CS was associated with US. These results together demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.
Basta, Dietmar; Tzschentke, Barbara; Ernst, Arne
Noise-induced effects within the inner ear have been well investigated for several years. However, this peripheral damage cannot fully explain the audiological symptoms in noise-induced hearing loss (NIHL), e.g. tinnitus, recruitment, reduced speech intelligibility, hyperacusis. There are few reports on central noise effects. Noise can induce an apoptosis of neuronal tissue within the lower auditory pathway. Higher auditory structures (e.g. medial geniculate body, auditory cortex) are characterized by metabolic changes after noise exposure. However, little is known about the microstructural changes of the higher auditory pathway after noise exposure. The present paper was therefore aimed at investigating the cell density in the medial geniculate body (MGB) and the primary auditory cortex (AI) after noise exposure. Normal hearing mice were exposed to noise (10 kHz center frequency at 115 dB SPL for 3 h) at the age of 21 days under anesthesia (Ketamin/Rompun, 10:1). After 1 week, auditory brainstem response recordings (ABR) were performed in noise exposed and normal hearing animals. After fixation, the brain was microdissected and stained (Kluever-Barrera). The cell density in the MGB subdivisions and the AI were determined by counting the cells within a grid. Noise-exposed animals showed a significant ABR threshold shift over the whole frequency range. Cell density was significantly reduced in all subdivisions of the MGB and in layers IV-VI of AI. The present findings demonstrate a significant noise-induced change of the neuronal cytoarchitecture in central key areas of auditory processing. These changes could contribute to the complex psychoacoustic symptoms after NIHL.
Full Text Available Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography, and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore provide, for the first time, differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed a clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.
Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie
Chen, Xian-ming; Dou, Xiao-qing; Liang, Yong-hui; Zhang, Li-wei; Luo, Bi-qiang; Deng, Yi-hong
To study the metabolic changes of the auditory cortex in patients with presbycusis by using proton magnetic resonance spectroscopy ((1)H-MRS). Ten normal-hearing young volunteers (youth group), 10 normal-hearing elderly subjects (aged group), and 8 patients with presbycusis (presbycusis group) were examined with proton magnetic resonance spectroscopy. N-acetylaspartate (NAA), creatine (Cr), choline (Cho), γ-aminobutyric acid (GABA), and glutamate (Glu) compounds were measured. The differences between the groups were semi-quantitatively analyzed. When compared with the youth group, reduced NAA/Cr and increased Cho/Cr were found in the aged group and the presbycusis group (P < 0.05); no significant difference in the remaining measures was found between the presbycusis group and the youth group (P > 0.05). When compared with the aged group, the metabolic changes of the auditory cortex in patients with presbycusis were remarkable (P < 0.05), suggesting that (1)H-MRS can reveal metabolic changes of the auditory cortex in presbycusis.
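The semi-quantitative comparison described above reduces to computing metabolite ratios relative to creatine and testing them across groups. A minimal sketch, with entirely hypothetical metabolite values and a hand-rolled Welch's t statistic (none of these numbers come from the study itself):

```python
import math

def ratios(naa, cr, cho):
    """Semi-quantitative metabolite ratios relative to creatine (Cr)."""
    return naa / cr, cho / cr

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical NAA/Cr values, purely for illustration
young = [1.55, 1.60, 1.52, 1.58]
aged = [1.38, 1.35, 1.41, 1.36]
t = welch_t(young, aged)  # positive t: NAA/Cr lower in the aged group
```

In practice the t statistic would be converted to a p-value using the Welch-Satterthwaite degrees of freedom; the sketch only shows the ratio-plus-group-comparison structure of the analysis.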
Full Text Available Loss of a sensory modality can lead to functional enhancement of the remaining senses. For example, short-term visual deprivation, or dark exposure (DE), can enhance neuronal responses in the auditory cortex to sounds. These enhancements encompass increased spiking rates and frequency selectivity as well as increased spiking reliability. Although we previously demonstrated enhanced thalamocortical transmission after DE, increased synaptic strength cannot account for increased frequency selectivity or reliability. We thus investigated whether other changes in the underlying circuitry contributed to improved neuronal responses. We show that DE can lead to refinement of intra- and interlaminar connections in the mouse auditory cortex. Moreover, we use a computational model to show that the combination of increased transmission and circuit refinement can lead to increased firing reliability. Thus, cross-modal influences can alter the spectral and temporal processing of sensory stimuli by refinement of thalamocortical and intracortical circuits.
Rupp, A.; Sieroka, N.; Gutschalk, A.
… which differently affect the flat envelopes of the Schroeder-phase maskers. We examined the influence of auditory-filter phase characteristics on the neural representation in the auditory cortex by investigating cortical auditory evoked fields (AEFs). We found that the P1m component exhibited larger amplitudes when a long-duration tone was presented in a repeating linearly downward sweeping (Schroeder positive, or m(+)) masker than in a repeating linearly upward sweeping (Schroeder negative, or m(-)) masker. We also examined the neural representation of short-duration tone pulses presented at different temporal positions within a single period of three maskers differing in their component phases (m(+), m(-), and sine phase m(0)). The P1m amplitude varied with the position of the tone pulse in the masker and depended strongly on the masker waveform. The neuromagnetic results in all cases were …
Moerel, Michelle; De Martino, Federico; Santoro, Roberta; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
We examine the mechanisms by which the human auditory cortex processes the frequency content of natural sounds. Through mathematical modeling of ultra-high field (7 T) functional magnetic resonance imaging responses to natural sounds, we derive frequency-tuning curves of cortical neuronal populations. With a data-driven analysis, we divide the auditory cortex into five spatially distributed clusters, each characterized by a spectral tuning profile. Beyond neuronal populations with simple single-peaked spectral tuning (grouped into two clusters), we observe that ∼60% of auditory populations are sensitive to multiple frequency bands. Specifically, we observe sensitivity to multiple frequency bands (1) at exactly one octave distance from each other, (2) at multiple harmonically related frequency intervals, and (3) with no apparent relationship to each other. We propose that beyond the well-known cortical tonotopic organization, multipeaked spectral tuning amplifies selected combinations of frequency bands. Such selective amplification might serve to detect behaviorally relevant and complex sound features, aid in segregating auditory scenes, and explain prominent perceptual phenomena such as octave invariance.
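The octave-spaced multipeaked tuning described above can be illustrated with a toy profile: two Gaussian lobes on a log2-frequency axis, spaced exactly one octave apart. The peak frequency, bandwidth, and frequency grid below are arbitrary choices for illustration, not values derived from the study:

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def two_peak_tuning(f_hz, peak_hz, sigma_oct=0.25):
    """Toy multipeaked tuning profile: Gaussian lobes at peak_hz and one
    octave above, on a log2-frequency axis (sigma in octaves)."""
    x = math.log2(f_hz)
    p = math.log2(peak_hz)
    return gauss(x, p, sigma_oct) + gauss(x, p + 1.0, sigma_oct)

# 200 Hz up to ~3.2 kHz in 1/48-octave steps
freqs = [200 * 2 ** (k / 48) for k in range(0, 4 * 48)]
resp = [two_peak_tuning(f, 440.0) for f in freqs]

# Local maxima of the profile should sit one octave (frequency ratio 2) apart
peaks = [freqs[i] for i in range(1, len(resp) - 1)
         if resp[i] > resp[i - 1] and resp[i] > resp[i + 1]]
```

A voxel with such a profile responds to both 440 Hz and 880 Hz energy, which is the kind of octave-related multipeak sensitivity the abstract reports.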
Sekiya, Kenichi; Takahashi, Mariko; Murakami, Shingo; Kakigi, Ryusuke; Okamoto, Hidehiko
Tinnitus is a phantom auditory perception without an external sound source and is one of the most common public health concerns that impair the quality of life of many individuals. However, its neural mechanisms remain unclear. We herein examined population-level frequency tuning in the auditory cortex of unilateral tinnitus patients with similar hearing levels in both ears using magnetoencephalography. We compared auditory-evoked neural activities elicited by stimulation of the tinnitus and nontinnitus ears. Objective magnetoencephalographic data suggested that population-level frequency tuning corresponding to the tinnitus ear was significantly broader than that corresponding to the nontinnitus ear in the human auditory cortex. The results obtained support the hypothesis that pathological alterations in inhibitory neural networks play an important role in the perception of subjective tinnitus. NEW & NOTEWORTHY: Although subjective tinnitus is one of the most common public health concerns that impair the quality of life of many individuals, no standard treatment or objective diagnostic method currently exists. We herein revealed that population-level frequency tuning was significantly broader in the tinnitus ear than in the nontinnitus ear. The results of the present study provide an insight into the development of an objective diagnostic method for subjective tinnitus. Copyright © 2017 the American Physiological Society.
Liisa A. Tremere
Full Text Available Sex steroid hormones influence the perceptual processing of sensory signals in vertebrates. In particular, decades of research have shown that circulating levels of estrogen correlate with hearing function. The mechanisms and sites of action supporting this sensory-neuroendocrine modulation, however, remain unknown. Here we combined a molecular cloning strategy, fluorescence in-situ hybridization, and unbiased quantification methods to show that estrogen-producing and -sensitive neurons heavily populate the adult mouse primary auditory cortex (AI). We also show that auditory experience in freely behaving animals engages estrogen-producing and -sensitive neurons in AI. These estrogen-associated networks are highly stable and do not quantitatively change as a result of acute episodes of sensory experience. We further demonstrate the neurochemical identity of estrogen-producing and estrogen-sensitive neurons in AI and show that these cell populations are phenotypically distinct. Our findings provide the first direct demonstration that estrogen-associated circuits are highly prevalent and engaged by sensory experience in the mouse auditory cortex, and suggest that previous correlations between estrogen levels and hearing function may be related to brain-generated hormone production. Finally, our findings suggest that estrogenic modulation may be a central component of the operational framework of central auditory networks.
Jeremy D W Greenlee
Full Text Available The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch-shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from the auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on the superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. Of these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking, while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and the modulation of AEP and high gamma responses implies that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control.
Koelsch, Stefan; Skouras, Stavros; Fritz, Thomas; Herrera, Perfecto; Bonhage, Corinna; Küssner, Mats B; Jacobs, Arthur M
This study investigates neural correlates of music-evoked fear and joy with fMRI. Studies on neural correlates of music-evoked fear are scant, and there are only a few studies on neural correlates of joy in general. Eighteen individuals listened to excerpts of fear-evoking, joy-evoking, as well as neutral music and rated their own emotional state in terms of valence, arousal, fear, and joy. Results show that BOLD signal intensity increased during joy, and decreased during fear (compared to the neutral condition) in bilateral auditory cortex (AC) and bilateral superficial amygdala (SF). In the right primary somatosensory cortex (area 3b) BOLD signals increased during exposure to fear-evoking music. While emotion-specific activity in AC increased with increasing duration of each trial, SF responded phasically in the beginning of the stimulus, and then SF activity declined. Psychophysiological Interaction (PPI) analysis revealed extensive emotion-specific functional connectivity of AC with insula, cingulate cortex, as well as with visual, and parietal attentional structures. These findings show that the auditory cortex functions as a central hub of an affective-attentional network that is more extensive than previously believed. PPI analyses also showed functional connectivity of SF with AC during the joy condition, taken to reflect that SF is sensitive to social signals with positive valence. During fear music, SF showed functional connectivity with visual cortex and area 7 of the superior parietal lobule, taken to reflect increased visual alertness and an involuntary shift of attention during the perception of auditory signals of danger. Copyright © 2013 Elsevier Inc. All rights reserved.
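A psychophysiological interaction (PPI) analysis of the kind used above models a target region's timecourse as a function of a psychological regressor (condition), a physiological regressor (seed timecourse), and their product; a non-zero interaction coefficient indicates condition-dependent functional connectivity. A schematic sketch on simulated data (regressor values and effect sizes are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Block "psychological" regressor: +1 during one condition, -1 during the other
task = np.repeat([1.0, -1.0], n // 2)
seed = rng.standard_normal(n)  # "physiological" seed (e.g., AC) timecourse

# Simulate a target region whose coupling to the seed depends on condition:
# slope is 0.5 + 0.4 in condition +1, and 0.5 - 0.4 in condition -1
target = 0.2 * task + (0.5 + 0.4 * task) * seed + 0.1 * rng.standard_normal(n)

# PPI design matrix: intercept, task, seed, and their interaction
X = np.column_stack([np.ones(n), task, seed, task * seed])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
# beta[3] estimates the PPI (condition-dependent connectivity) term
```

Ordinary least squares recovers the interaction slope (about 0.4 here), which is the quantity that PPI maps threshold and display voxel-by-voxel.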
Seither-Preisler, Annemarie; Parncutt, Richard; Schneider, Peter
Playing a musical instrument is associated with numerous neural processes that continuously modify the human brain and may facilitate characteristic auditory skills. In a longitudinal study, we investigated the auditory and neural plasticity of musical learning in 111 young children (aged 7-9 y) as a function of the intensity of instrumental practice and musical aptitude. Because of the frequent co-occurrence of central auditory processing disorders and attentional deficits, we also tested 21 children with attention deficit (hyperactivity) disorder [AD(H)D]. Magnetic resonance imaging and magnetoencephalography revealed enlarged Heschl's gyri and enhanced right-left hemispheric synchronization of the primary evoked response (P1) to harmonic complex sounds in children who spent more time practicing a musical instrument. The anatomical characteristics were positively correlated with frequency discrimination, reading, and spelling skills. Conversely, AD(H)D children showed reduced volumes of Heschl's gyri and enhanced volumes of the plana temporalia that were associated with a distinct bilateral P1 asynchrony. This may indicate a risk for central auditory processing disorders that are often associated with attentional and literacy problems. The longitudinal comparisons revealed a very high stability of auditory cortex morphology and gray matter volumes, suggesting that the combined anatomical and functional parameters are neural markers of musicality and attention deficits. Educational and clinical implications are considered. Copyright © 2014 the authors.
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
Visually presented emotional words are processed preferentially, and the effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized, and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focusing on how auditory attention to positive and negative words impacts their cerebral processing. A functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Full Text Available Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region, in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity for non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.
Tóth, Attila; Petykó, Zoltán; Gálosi, Rita; Szabó, Imre; Karádi, Kázmér; Feldmann, Ádám; Péczely, László; Kállai, Veronika; Karádi, Zoltán; Lénárd, László
The medial prefrontal cortex (mPFC) is thought to be an essential brain region for sensorimotor gating. The exact neuronal mechanisms, however, have not yet been extensively investigated by delicate single-unit recording methods. Prepulse inhibition (PPI) of the startle response is a broadly used, important tool to investigate the inhibitory processes of sensorimotor gating. The present study was designed to examine the neuronal mechanisms of sensorimotor gating in the mPFC in freely moving rats. In these experiments, the animals were subjected to both pulse-alone and prepulse+pulse stimulations. Head acceleration and the neuronal activity of the mPFC were simultaneously recorded. To adequately measure the startle reflex, a new headstage with a 3D accelerometer was created. The duration of head acceleration was longer in pulse-alone trials than in prepulse+pulse trial conditions, and the amplitude of head movements was significantly larger during the pulse-alone than during the prepulse+pulse situations. Single-unit activities in the mPFC were recorded by means of chronically implanted tetrodes during acoustic-stimulation-evoked startle responses and PPI. A high proportion of medial prefrontal cortical neurons responded to these stimulations with characteristic firing patterns: short-duration equal and unequal excitatory, medium-duration excitatory, and long-duration excitatory and inhibitory responses were recorded. The present findings demonstrated, for the first time in the literature, the startle- and PPI-elicited neuronal activity changes of the mPFC and thus provided evidence for a key role of this limbic forebrain area in the sensorimotor gating process. Copyright © 2017 Elsevier B.V. All rights reserved.
Bareham, Corinne A; Georgieva, Stanimira D; Kamke, Marc R; Lloyd, David; Bekinschtein, Tristan A; Mattingley, Jason B
Selective attention is the process of directing limited-capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
Micheyl, Christophe; Steinschneider, Mitchell
Many natural sounds are periodic and consist of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0, which plays a key role in the perception of speech and music. “Pitch-selective” neurons have been identified in non-primary auditory cortex of marmoset monkeys. Noninvasive studies point to a putative “pitch center” located in a homologous cortical region in humans. It remains unclear whether there is sufficient spectral and temporal information available at the level of primary auditory cortex (A1) to enable reliable pitch extraction in non-primary auditory cortex. Here we evaluated multiunit responses to HCTs in A1 of awake macaques using a stimulus design employed in auditory nerve studies of pitch encoding. The F0 of the HCTs was varied in small increments, such that harmonics of the HCTs fell either on the peak or on the sides of the neuronal pure tone tuning functions. Resultant response-amplitude-versus-harmonic-number functions (“rate-place profiles”) displayed a periodic pattern reflecting the neuronal representation of individual HCT harmonics. Consistent with psychoacoustic findings in humans, lower harmonics were better resolved in rate-place profiles than higher harmonics. Lower F0s were also temporally represented by neuronal phase-locking to the periodic waveform of the HCTs. Findings indicate that population responses in A1 contain sufficient spectral and temporal information for extracting the pitch of HCTs by neurons in downstream cortical areas that receive their input from A1. PMID:23785145
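The rate-place argument above (lower harmonics resolved, higher ones not) follows from roughly constant-Q tuning on a log-frequency axis: neighbouring low harmonics sit far apart in octaves, while high harmonics are crowded within a single filter bandwidth. A toy excitation-pattern sketch, assuming Gaussian tuning with an arbitrary 0.1-octave bandwidth (parameters are illustrative, not fitted to the macaque data):

```python
import math

def excitation(cf_hz, f0_hz, n_harm=20, sigma_oct=0.1):
    """Excitation of a channel tuned to cf_hz by a harmonic complex tone
    with fundamental f0_hz, using Gaussian tuning on a log-frequency axis."""
    x = math.log2(cf_hz)
    return sum(math.exp(-0.5 * ((x - math.log2(n * f0_hz)) / sigma_oct) ** 2)
               for n in range(1, n_harm + 1))

def modulation_depth(f0_hz, n):
    """Peak-to-valley contrast of the rate-place profile around harmonic n."""
    on = excitation(n * f0_hz, f0_hz)           # channel centred on harmonic n
    off = excitation((n + 0.5) * f0_hz, f0_hz)  # channel centred between harmonics
    return (on - off) / (on + off)

low = modulation_depth(200.0, 2)    # low harmonic: widely spaced in log frequency
high = modulation_depth(200.0, 12)  # high harmonic: crowded, poorly resolved
```

The contrast is large for the second harmonic and near zero for the twelfth, mirroring the finding that lower harmonics are better resolved in rate-place profiles.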
Sweet, Robert A; Pierri, Joseph N; Auh, Sungyoung; Sampson, Allan R; Lewis, David A
Subjects with schizophrenia have decreased gray matter volume of auditory association cortex in structural imaging studies, and exhibit deficits in auditory sensory memory processes subserved by this region. In dorsal prefrontal cortex (dPFC), similar in vivo observations of reduced regional volume and working memory deficits in subjects with schizophrenia have been related to reduced somal volume of deep layer 3 pyramidal cells. We hypothesized that deep layer 3 pyramidal cell somal volume would also be reduced in auditory association cortex (BA42) in schizophrenia. We used the nucleator to estimate the somal volume of pyramidal neurons in deep layer 3 of BA42 in 18 subjects with schizophrenia, each of whom was matched to one normal comparison subject for gender, age, and post-mortem interval. For all subject pairs, somal volume of pyramidal neurons in deep layer 3 of dPFC (BA9) had previously been determined. In BA42, somal volume was reduced by 13.1% in schizophrenic subjects (p=0.03). Reductions in somal volume were not associated with the history of antipsychotic use, alcohol dependence, schizoaffective disorder, or death by suicide. The percent change in somal volume within-subject pairs was highly correlated between BA42 and BA9 (r=0.67, p=0.002). Deep layer 3 pyramidal cell somal volume is reduced in BA42 of subjects with schizophrenia. This reduction may contribute to impairment in auditory function. The correlated reductions of somal volume in BA42 and BA9 suggest that a common factor may affect deep layer 3 pyramidal cells in both regions.
Full Text Available Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the study at hand aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we presented two types of masking sounds (white noise vs. white noise passing through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or on the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds, while on the other half of the trials subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory compared to visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention in the modulation of this mechanism in more evaluative processing stages.
Engell, Alva; Junghöfer, Markus; Stein, Alwina; Lau, Pia; Wunderlich, Robert; Wollbrink, Andreas; Pantev, Christo
Full Text Available Nowadays, many people use portable players to enrich their daily life with enjoyable music. However, in noisy environments, the player volume is often set to extremely high levels in order to drown out the intense ambient noise and satisfy the appetite for music. Extensive and inappropriate usage of portable music players might cause subtle damage to the auditory system, which is not behaviorally detectable at an early stage of the hearing impairment process. Here, by means of magnetoencephalography, we objectively examined the detrimental effects of portable music player misuse on the population-level frequency tuning in the human auditory cortex. We compared two groups of young people: one group had listened to music with portable music players intensively for a long period of time, while the other group had not. Both groups performed equally and normally in standard audiological examinations (pure tone audiogram, speech test, and hearing-in-noise test). However, the objective magnetoencephalographic data demonstrated that the population-level frequency tuning in the auditory cortex of the portable music player users was significantly broadened compared to the non-users when attention was distracted from the auditory modality; this group difference vanished when attention was directed to the auditory modality. Our conclusion is that extensive and inadequate usage of portable music players could cause subtle damage, which standard behavioral audiometric measures fail to detect at an early stage. However, this damage could lead to future irreversible hearing disorders, which would have a huge negative impact on the quality of life of those affected, and on society as a whole.
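The population-level frequency tuning "broadening" reported in studies like this can be quantified as the width of a response-versus-frequency curve. A small sketch estimating full width at half maximum (FWHM) in octaves on a synthetic tuning curve (the Gaussian shape, 0.3-octave bandwidth, and 1 kHz centre are assumptions for illustration only):

```python
import math

def fwhm_octaves(freqs_hz, resp):
    """Full width at half maximum of a tuning curve, in octaves.
    freqs_hz must be sorted ascending; precision is limited by the grid."""
    half = max(resp) / 2.0
    above = [i for i, r in enumerate(resp) if r >= half]
    lo, hi = above[0], above[-1]
    return math.log2(freqs_hz[hi] / freqs_hz[lo])

# Synthetic tuning curve: Gaussian with sigma = 0.3 octaves around 1 kHz,
# sampled in 1/100-octave steps over +/- 2 octaves
freqs = [1000 * 2 ** (k / 100) for k in range(-200, 201)]
resp = [math.exp(-0.5 * (math.log2(f / 1000) / 0.3) ** 2) for f in freqs]
width = fwhm_octaves(freqs, resp)  # for a Gaussian, FWHM = 2.355 * sigma
```

A broader curve (larger FWHM) corresponds to the degraded frequency selectivity attributed here to portable-player misuse.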
Full Text Available In the auditory pathway, the inferior colliculus (IC) receives and integrates excitatory and inhibitory inputs from the lower auditory nuclei, the contralateral IC, and the auditory cortex (AC), and then relays these inputs to the thalamus and cortex. Meanwhile, the AC modulates sound-signal processing by IC neurons, including their latency (i.e., first-spike latency). Excitatory and inhibitory corticofugal projections to the IC may shorten and prolong the latency of IC neurons, respectively. However, the synaptic mechanisms underlying this corticofugal latency modulation remain unclear. This study therefore probed these mechanisms via in vivo intracellular recording combined with acoustic and focal electric stimulation. The AC latency modulation of IC neurons is possibly mediated by pre-spike depolarization duration, pre-spike hyperpolarization duration, and spike onset time. This study suggests an effective strategy for determining the timing sequence of auditory information relayed to the thalamus and cortex.
Kirill Vadimovich Nourski
Full Text Available Current models of cortical speech and language processing include multiple regions within the temporal lobe of both hemispheres. Human communication, by necessity, involves complex interactions between regions subserving speech and language processing and those involved in more general cognitive functions. To assess these interactions, we utilized an ecologically salient conversation-based approach. This approach mandates that we first clarify activity patterns at the earliest stages of cortical speech processing. Therefore, we examined high gamma (70-150 Hz) responses within the electrocorticogram (ECoG) recorded simultaneously from Heschl's gyrus (HG) and the lateral surface of the superior temporal gyrus (STG). Subjects were neurosurgical patients undergoing evaluation for treatment of medically intractable epilepsy. They performed an expanded version of the Mini-Mental State Examination (MMSE), which included additional spelling, naming, and memory-based tasks. ECoG was recorded from HG and the STG using multicontact depth and subdural electrode arrays, respectively. Differences in high gamma activity during listening to the interviewer versus during the subject's self-generated verbal responses were quantified for each recording site and across sites within HG and STG. The expanded MMSE produced widespread activation in the auditory cortex of both hemispheres. No significant difference was found between activity during listening to the interviewer's questions and during the subject's answers in posteromedial HG (auditory core cortex). A different pattern was observed throughout anterolateral HG and the posterior and middle portions of lateral STG (non-core auditory cortical areas), where activity was significantly greater during listening than during speaking. No systematic task-specific differences in the degree of suppression during speaking relative to listening were found in posterior and middle STG. Individual sites could, however, exhibit task-related variability in
Profant, Oliver; Balogová, Zuzana; Dezortová, Monika; Wagnerová, Dita; Hájek, Milan; Syka, Josef
In humans, aging is accompanied by deterioration of hearing function, known as presbycusis. The major etiology of presbycusis is the loss of hair cells in the inner ear; less well known are changes in the central auditory system. We therefore used 1H magnetic resonance spectroscopy with a 3 T tomograph to examine metabolite levels in the auditory cortex of three groups of subjects: young healthy subjects less than 30 years old, and subjects older than 65 years with either mild presbycusis corresponding to their age or expressed presbycusis. Hearing function in all subjects was examined by pure-tone audiometry (125-16,000 Hz). Significant differences were found in the concentrations of glutamate and N-acetylaspartate, with lower levels in aged subjects. Lactate was particularly increased in subjects with expressed presbycusis. No significant differences were found in other metabolites, including GABA, between young and elderly subjects. The results demonstrate that the age-related changes of the inner ear are accompanied by a decrease in the excitatory neurotransmitter glutamate, as well as a lactate increase in the auditory cortex that is more pronounced in elderly subjects with large hearing threshold shifts. Copyright © 2013 Elsevier Inc. All rights reserved.
Cohen, Lior; Mizrahi, Adi
Maternal behavior can be triggered by auditory and olfactory cues originating from the newborn. Here we report how the transition to motherhood affects excitatory and inhibitory neurons in layer 2/3 (L2/3) of the mouse primary auditory cortex. We used in vivo two-photon targeted cell-attached recording to compare the response properties of parvalbumin-expressing neurons (PVNs) and pyramidal glutamatergic neurons (PyrNs). The transition to motherhood shifts the average best frequency of PVNs higher by a full octave, with no significant effect on the average best frequency of PyrNs. The presence of pup odors significantly reduced the spontaneous and evoked activity of PVNs. This reduction of feedforward inhibition coincides with a complementary increase in the spontaneous and evoked activity of PyrNs. The selective shift of PVN frequency tuning should render pup odor-induced disinhibition more effective for high-frequency stimuli, such as ultrasonic vocalizations. Indeed, pup odors increased the responses of PyrNs to pup ultrasonic vocalizations. We conclude that plasticity in mothers is mediated, at least in part, via modulation of the feedforward inhibition circuitry in the auditory cortex. Copyright © 2015 the authors 0270-6474/15/351806-10$15.00/0.
Steinschneider, Mitchell; Micheyl, Christophe
The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate “auditory objects” with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the “object-related negativity” recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch. PMID:25209282
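The stimuli described above, two concurrent harmonic complex tones (HCTs) with different fundamental frequencies and an optional onset asynchrony, can be sketched as follows. The F0s, harmonic counts, and 40-ms asynchrony below are illustrative assumptions, not values reported in the study.

```python
import numpy as np

fs = 44100
dur = 0.5
t = np.arange(int(fs * dur)) / fs

def harmonic_complex(f0, n_harmonics, t):
    """Sum of equal-amplitude harmonics of a common fundamental f0."""
    return sum(np.sin(2 * np.pi * f0 * k * t)
               for k in range(1, n_harmonics + 1))

# Illustrative F0s four semitones apart (values are assumptions).
f0_a = 150.0
f0_b = 150.0 * 2 ** (4 / 12)
hct_a = harmonic_complex(f0_a, 10, t)
hct_b = harmonic_complex(f0_b, 10, t)

# Onset asynchrony: delaying the second HCT aids perceptual segregation.
delay = int(0.04 * fs)
mix = hct_a.copy()
mix[delay:] += hct_b[:len(mix) - delay]
```

A rate-place code, as described in the abstract, amounts to resolving the individual low-numbered harmonics of each tone as separate peaks along the cortical frequency axis; the mixture above contains two interleaved harmonic series for such a representation to resolve.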
Qin, Pengmin; Duncan, Niall W.; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J.; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J.; Northoff, Georg
Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies used the presentation of discrete visual stimuli; however, the change from closed to open eyes also represents a simple visual stimulus and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABAA receptors, in the changes in brain activity between the eyes-closed (EC) and eyes-open (EO) states, in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO/EC block design, allowing modeling of the haemodynamic response, followed by longer periods of EC and EO to allow measurement of functional connectivity. The same subjects also underwent [18F]Flumazenil PET to measure GABAA receptor binding potentials. The local-to-global ratio of GABAA receptor binding potential in the visual cortex predicted the degree of change in neural activity from EC to EO. The same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABAA receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABAA receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity. PMID:23293594
Full Text Available The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagation over time, up to hundreds of milliseconds, and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study investigated the temporal dynamics of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and eFC from the AC to the inferior parietal lobule. In later periods, iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC in different time windows following sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integration across sensory, perceptual, attentional, motor, emotional, and executive processes.
Higgins, Nathan C; McLaughlin, Susan A; Da Costa, Sandra; Stecker, G Christopher
Human listeners place greater weight on the beginning of a sound than on the middle or end when determining sound location, creating an auditory illusion known as the Franssen effect. Here, we exploited that effect to test whether human auditory cortex (AC) represents the physical vs. perceived spatial features of a sound. We used functional magnetic resonance imaging (fMRI) to measure AC responses to sounds that varied in perceived location due to interaural level differences (ILD) applied to sound onsets or to the full sound duration. Analysis of hemodynamic responses in AC revealed sensitivity to ILD in both full-cue (veridical) and onset-only (illusory) lateralized stimuli. Classification analysis revealed regional differences in the sensitivity to onset-only ILDs, with better classification observed in posterior compared to primary AC. That is, restricting the ILD to sound onset (which alters the physical but not the perceptual nature of the spatial cue) did not eliminate cortical sensitivity to that cue. These results suggest that perceptual representations of auditory space emerge or are refined in higher-order AC regions, supporting the stable perception of auditory space in noisy or reverberant environments and forming the basis of illusions such as the Franssen effect.
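The two stimulus conditions contrasted above (ILD applied to the full sound vs. to the onset only) can be sketched minimally. The carrier frequency, 10-dB ILD, and 20-ms onset window below are assumptions for illustration, not the study's parameters.

```python
import numpy as np

fs = 44100
dur = 0.3
t = np.arange(int(fs * dur)) / fs
carrier = np.sin(2 * np.pi * 500 * t)  # illustrative 500-Hz carrier

def apply_ild(signal, ild_db, fs, onset_only=False, onset_ms=20):
    """Return (left, right) with the right ear attenuated by ild_db.

    If onset_only, the level difference is restricted to the first
    onset_ms of the sound (the Franssen-style illusory cue); otherwise
    it spans the full duration (the veridical cue)."""
    gain = 10 ** (-ild_db / 20)
    left = signal.copy()
    right = signal.copy()
    if onset_only:
        n = int(onset_ms / 1000 * fs)
        right[:n] *= gain
    else:
        right *= gain
    return left, right

full_l, full_r = apply_ild(carrier, 10, fs)          # full-cue stimulus
onset_l, onset_r = apply_ild(carrier, 10, fs, True)  # onset-only stimulus
```

In the onset-only condition, the physical cue is present for just the first few milliseconds, yet listeners perceive the whole sound as lateralized; the abstract's finding is that posterior AC tracks this perceived, not physical, lateralization.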
Micheyl, Christophe; Steinschneider, Mitchell
Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are composed of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
Bezgin, Gleb; Rybacki, Konrad; van Opstal, A John; Bakker, Rembrandt; Shen, Kelly; Vakorin, Vasily A; McIntosh, Anthony R; Kötter, Rolf
Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced by studies of the visual cortex, which features rather prominent segregation into spatial and non-spatial domains. It has been hypothesised that other sensory systems, including the auditory system, are organised in a similar way at the cortical level. Recent studies offer rich qualitative evidence for the dual-stream hypothesis. Here we provide a new paradigm to quantitatively uncover these patterns in the auditory system, based on an analysis of multiple anatomical studies using multivariate techniques. As a test case, we also apply our assessment techniques to the more ubiquitously explored visual system. Importantly, the introduced framework opens the possibility for these techniques to be applied to other neural systems featuring a dichotomised organisation, such as language or music perception. Copyright © 2014 Elsevier Inc. All rights reserved.
Full Text Available An experienced car mechanic can often deduce what's wrong with a car by carefully listening to the sound of the ailing engine, despite the presence of multiple sources of noise. Indeed, the ability to select task-relevant sounds for awareness while ignoring irrelevant ones constitutes one of the most fundamental of human faculties, but the underlying neural mechanisms have remained elusive. While most of the literature explains the neural basis of selective attention by means of an increase in neural gain, a number of papers propose enhancement of neural selectivity as an alternative or complementary mechanism. Here, to address the question of whether a pure gain increase alone can explain auditory selective attention in humans, we quantified auditory cortex frequency selectivity in 20 healthy subjects by masking 1000-Hz tones with a continuous noise masker with parametrically varying frequency notches around the tone frequency (i.e., a notched-noise masker). The task of the subjects was, in different conditions, to selectively attend to occasionally occurring slight increments in tone frequency (1020 Hz), to attend to tones of slightly longer duration, or to ignore the sounds. In line with previous studies, in the ignore condition the global field power (GFP) of event-related brain responses at 100 ms from stimulus onset to the 1000-Hz tones was suppressed as a function of the narrowing of the notch width. During the selective attention conditions, the suppressive effect of the notch width on GFP was decreased, but as a function significantly different from the multiplicative one expected on the basis of a simple gain model of selective attention. Our results suggest that auditory selective attention in humans cannot be explained by a gain model in which only the neural activity level is increased; rather, selective attention additionally enhances auditory cortex frequency selectivity.
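The logic of the gain-model test in this abstract can be illustrated numerically: under a pure gain account, the attended GFP-vs-notch-width curve should be a constant multiple of the ignored curve, so the attended/ignored ratio should be flat across notch widths. All values below are hypothetical, chosen only to show the comparison.

```python
import numpy as np

# Hypothetical GFP values (arbitrary units) as a function of notch
# width (Hz) in the ignore condition; these are illustrative numbers,
# not data from the study.
notch_widths = np.array([0.0, 100.0, 200.0, 400.0, 800.0])
gfp_ignore = np.array([0.4, 0.6, 0.8, 1.0, 1.1])

def gain_prediction(gfp_ignore, g):
    """Pure gain model: attention scales every response by the same g."""
    return g * gfp_ignore

# Suppose attention raises the widest-notch response by 30%:
g = 1.3
pred = gain_prediction(gfp_ignore, g)

# Hypothetical measured attended curve that is flatter than the gain
# prediction (less suppression at narrow notches than g * ignored):
gfp_attend = np.array([0.7, 0.9, 1.1, 1.3, 1.4])

# A non-constant attended/ignored ratio across notch widths rules out
# a purely multiplicative gain and implies sharpened tuning.
ratio = gfp_attend / gfp_ignore
```

Here the ratio falls from about 1.75 at the narrowest notch to about 1.27 at the widest, i.e. the attended curve is not a constant multiple of the ignored one, which is the signature the study interprets as enhanced frequency selectivity.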
Full Text Available Although neural responses to sound stimuli have been thoroughly investigated in various areas of the auditory cortex, the results of electrophysiological recordings cannot establish a causal link between neural activation and brain function. Electrical microstimulation, which can selectively perturb neural activity in specific parts of the nervous system, is an important tool for exploring the organization and function of brain circuitry. To date, studies describing the behavioral effects of electrical stimulation have largely been conducted in the primary auditory cortex. In this study, to investigate potential differences in the effects of electrical stimulation on different cortical areas, we measured the behavioral performance of cats in detecting intra-cortical microstimulation (ICMS) delivered to the primary and secondary auditory fields (A1 and A2, respectively). After the cats were trained to perform a Go/No-Go task cued by sounds, we found that they could also learn to perform the task cued by ICMS; furthermore, detection of the ICMS was similarly sensitive in A1 and A2. Presenting wideband noise together with ICMS substantially decreased the cats' performance in detecting ICMS in A1 and A2, consistent with a noise-masking effect on the sensation elicited by the ICMS. In contrast, presenting ICMS with pure tones in the spectral receptive field of the electrode-implanted cortical site reduced ICMS detection performance in A1 but not A2. Therefore, activation of A1 and A2 neurons may produce different qualities of sensation. Overall, our study revealed that ICMS-induced neural activity can be readily integrated into an animal's behavioral decision process, with implications for the development of cortical auditory prosthetics.
Full Text Available Stimulus-specific adaptation (SSA) is the specific decrease in the response to a frequent ('standard') stimulus, which does not generalize, or generalizes only partially, to another, rare stimulus (the 'deviant'). Stimulus-specific adaptation could result simply from the depression of the responses to the standard. Alternatively, there may be an increase in the responses to the deviant stimulus due to the violation of expectations set by the standard, indicating the presence of true deviance detection. We studied SSA in the auditory cortex of halothane-anesthetized rats, recording local field potentials and multi-unit activity. We tested the responses to pure tones of one frequency when embedded in sequences that differed from each other in the frequency and probability of the tones composing them. The responses to tones of the same frequency were larger when deviant than when standard, even with inter-stimulus time intervals of almost 2 seconds. Thus, SSA is present and strong in rat auditory cortex. SSA was present even when the frequency difference between deviants and standards was as small as 10%, substantially smaller than the typical width of cortical tuning curves, revealing hyper-resolution in frequency. Strong responses were evoked also by a rare tone presented by itself, and by rare tones presented as part of a sequence of many widely spaced frequencies. On the other hand, when presented within a sequence of narrowly spaced frequencies, the responses to a tone, even when rare, were smaller. A model of SSA that included only adaptation of the responses in narrow frequency channels predicted responses to the deviants that were substantially smaller than the observed ones. Thus, the response to a deviant is at least partially due to the change it represents relative to the regularity set by the standard tone, indicating the presence of true deviance detection in rat auditory cortex.
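The standard/deviant tone sequences used in such oddball paradigms can be sketched as follows. The trial count and deviant probability are set for illustration; the 10% frequency contrast mirrors the smallest deviant-standard difference mentioned in the abstract.

```python
import numpy as np

def oddball_sequence(n_trials, p_deviant, rng):
    """Return an array where 0 = standard tone, 1 = deviant tone.

    Each trial is independently a deviant with probability p_deviant,
    so the deviant is rare and its timing is unpredictable."""
    return (rng.random(n_trials) < p_deviant).astype(int)

# Illustrative frequencies 10% apart, the smallest contrast tested.
f_standard = 1000.0
f_deviant = f_standard * 1.10

rng = np.random.default_rng(1)
seq = oddball_sequence(400, 0.05, rng)
freqs = np.where(seq == 1, f_deviant, f_standard)

# A pure adaptation account predicts the response to a tone depends
# only on how recently tones of nearby frequency were played; true
# deviance detection predicts an extra boost for rare tones beyond
# what such frequency-channel adaptation can explain.
```

Swapping the roles of the two frequencies in a second block (so each frequency serves as both standard and deviant) is the standard control for intrinsic frequency preferences when measuring SSA.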
Krause, Bryan M.; Raz, Aeyal; Uhlrich, Daniel J.; Smith, Philip H.; Banks, Matthew I.
The state of the sensory cortical network can have a profound impact on neural responses and perception. In rodent auditory cortex, sensory responses are reported to occur in the context of network events, similar to brief UP states, that produce “packets” of spikes and are associated with synchronized synaptic input (Bathellier et al., 2012; Hromadka et al., 2013; Luczak et al., 2013). However, traditional models based on data from visual and somatosensory cortex predict that ascending sensory thalamocortical (TC) pathways sequentially activate cells in layers 4 (L4), L2/3, and L5. The relationship between these two spatio-temporal activity patterns is unclear. Here, we used calcium imaging and electrophysiological recordings in murine auditory TC brain slices to investigate the laminar response pattern to stimulation of TC afferents. We show that although monosynaptically driven spiking in response to TC afferents occurs, the vast majority of spikes fired following TC stimulation occurs during brief UP states and outside the context of the L4>L2/3>L5 activation sequence. Specifically, monosynaptic subthreshold TC responses with similar latencies were observed throughout layers 2–6, presumably via synapses onto dendritic processes located in L3 and L4. However, monosynaptic spiking was rare, and occurred primarily in L4 and L5 non-pyramidal cells. By contrast, during brief, TC-induced UP states, spiking was dense and occurred primarily in pyramidal cells. These network events always involved infragranular layers, whereas involvement of supragranular layers was variable. During UP states, spike latencies were comparable between infragranular and supragranular cells. These data are consistent with a model in which activation of auditory cortex, especially supragranular layers, depends on internally generated network events that represent a non-linear amplification process, are initiated by infragranular cells and tightly regulated by feed-forward inhibitory
Luo, Huan; Liu, Zuxiang; Poeppel, David
Integrating information across sensory domains to construct a unified representation of multi-sensory signals is a fundamental characteristic of perception in ecological contexts. One provocative hypothesis deriving from neurophysiology suggests that there exists early and direct cross-modal phase modulation. We provide evidence, based on magnetoencephalography (MEG) recordings from participants viewing audiovisual movies, that low-frequency neuronal information lies at the basis of the synergistic coordination of information across auditory and visual streams. In particular, the phase of the 2–7 Hz delta and theta band responses carries robust (in single trials) and usable information (for parsing the temporal structure) about stimulus dynamics in both sensory modalities concurrently. These experiments are the first to show in humans that a particular cortical mechanism, delta-theta phase modulation across early sensory areas, plays an important “active” role in continuously tracking naturalistic audio-visual streams, carrying dynamic multi-sensory information, and reflecting cross-sensory interaction in real time. PMID:20711473
Noohi, F.; Kinnaird, C.; Wood, S.; Bloomberg, J.; Mulavara, A.; Seidler, R.
The current study characterizes brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit either the vestibulo-spinal reflex (saccular-mediated cervical vestibular evoked myogenic potentials (cVEMPs)) or the ocular muscle response (utricle-mediated ocular VEMPs (oVEMPs)). Some researchers have reported that an air-conducted skull tap elicits both saccular- and utricle-mediated VEMPs, while being faster and less irritating for subjects. However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent of the semicircular canals. This is of high importance for studying otolith-specific deficits, including the gait and balance problems that astronauts experience upon returning to Earth. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, inferior frontal gyrus, and anterior cingulate cortex in response to different modes of vestibular stimulation. Here we hypothesized that skull taps elicit patterns of cortical activity similar to those evoked by auditory tone bursts and reported in previous vestibular imaging studies. Subjects wore bilateral MR-compatible skull tappers and headphones inside the 3T GE scanner while lying in the supine position with eyes closed. Subjects received both forms of stimulation in a counterbalanced fashion. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular system, resulting in a vestibular cortical response. Auditory tone bursts were also delivered for comparison. To validate our stimulation method, we measured the ocular VEMP outside of the scanner. This measurement showed that both skull tap and auditory
Mears, R P; Klein, A C; Cromwell, H C
Medial prefrontal cortex is a crucial region involved in inhibitory processes. Damage to the medial prefrontal cortex can lead to loss of normal inhibitory control over motor, sensory, emotional and cognitive functions. The goal of the present study was to examine the basic properties of inhibitory gating in this brain region in rats. Inhibitory gating has recently been proposed as a neurophysiological assay of sensory filters in higher brain regions that potentially enable or disable information throughput. This perspective has important clinical relevance due to the findings that gating is dramatically impaired in individuals with emotional and cognitive impairments (e.g. schizophrenia). We used the standard inhibitory gating two-tone paradigm with a 500 ms interval between tones and a 10 s interval between tone pairs. We recorded both single-unit and local field potentials from chronic microwire arrays implanted in the medial prefrontal cortex. We investigated short-term (within-session) and long-term (between-session) variability of auditory gating and additionally examined how altering the interval between the tones influenced the potency of the inhibition. The local field potentials displayed greater variability, with a reduction in the amplitudes of the tone responses over both the short- and long-term time windows. The decrease across sessions was most pronounced for the second tone response (test tone), leading to more robust gating (lower T/C ratio). Surprisingly, single-unit responses of different varieties retained similar levels of auditory responsiveness and inhibition in both the short- and long-term analyses. Neural inhibition decreased monotonically as the intertone interval increased. This change in gating was most consistent in the local field potentials. Subsets of single-unit responses retained inhibition even at the longest intertone interval tested (4 s). These findings support the idea that the medial
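The T/C ratio used in two-tone gating paradigms is the mean response to the second (test) tone divided by the mean response to the first (conditioning) tone; a lower ratio indicates stronger inhibitory gating. A minimal sketch with hypothetical evoked amplitudes:

```python
import numpy as np

def tc_ratio(cond_amps, test_amps):
    """Inhibitory-gating T/C ratio: mean test-tone response divided by
    mean conditioning-tone response. Values well below 1 indicate that
    the first tone suppressed the response to the second."""
    return np.mean(test_amps) / np.mean(cond_amps)

# Hypothetical evoked amplitudes (arbitrary units) across four trials;
# these numbers are for illustration only.
cond = np.array([80.0, 95.0, 88.0, 90.0])
test = np.array([30.0, 25.0, 35.0, 28.0])

ratio = tc_ratio(cond, test)  # well below 1, i.e. robust gating
```

Note that the ratio can shrink either because the test response decreases or because the conditioning response grows; the abstract's observation that the across-session decline was largest for the test response is what licenses the "more robust gating" interpretation.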
Plakke, Bethany; Hwang, Jaewon; Romanski, Lizabeth M
The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that the dorsolateral prefrontal cortex is essential for spatial working memory, while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates have shown that ventrolateral prefrontal (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential for remembering face and voice information was unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC with reversible cortical cooling and examined performance when faces, vocalizations, or both faces and vocalizations had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventrolateral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or the vocalization alone. Our work highlights the importance of the primate VLPFC in the processing of faces and vocalizations in a manner that
Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 minutes. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 seconds. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys’ auditory memory performance. It is possible, therefore, that the anatomical pathways differ. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex, and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.
Andermann, Martin; Patterson, Roy D; Vogt, Carolin; Winterstetter, Lisa; Rupp, André
Vowel recognition is largely immune to differences in speaker size despite the waveform differences associated with variation in speaker size. This has led to the suggestion that voice pitch and mean formant frequency (MFF) are extracted early in the hierarchy of hearing/speech processing and used to normalize the internal representation of vowel sounds. This paper presents a magnetoencephalographic (MEG) experiment designed to locate and compare neuromagnetic activity associated with voice pitch, MFF and vowel type in human auditory cortex. Sequences of six sustained vowels were used to contrast changes in the three components of vowel perception, and MEG responses to the changes were recorded from 25 participants. A staged procedure was employed to fit the MEG data with a source model having one bilateral pair of dipoles for each component of vowel perception. This dipole model showed that the activity associated with the three perceptual changes was functionally separable; the pitch source was located in Heschl's gyrus (bilaterally), while the vowel-type and formant-frequency sources were located (bilaterally) just behind Heschl's gyrus in planum temporale. The results confirm that vowel normalization begins in auditory cortex at an early point in the hierarchy of speech processing. Copyright © 2017 Elsevier Inc. All rights reserved.
Nomura, Hiroshi; Hara, Kojiro; Abe, Reimi; Hitora-Imamura, Natsuko; Nakayama, Ryota; Sasaki, Takuya; Matsuki, Norio; Ikegaya, Yuji
Sensory stimuli not only activate specific populations of cortical neurons but can also silence other populations. However, it remains unclear whether neuronal silencing per se leads to memory formation and behavioral expression. Here we show that mice can report optogenetic inactivation of auditory neuron ensembles by exhibiting fear responses or seeking a reward. Mice receiving pairings of footshock and silencing of a neuronal ensemble exhibited a fear response selectively to the subsequent silencing of the same ensemble. The valence of the neuronal silencing was preserved for at least 30 d and was susceptible to extinction training. When we silenced an ensemble in one side of auditory cortex for conditioning, silencing of an ensemble in another side induced no fear response. We also found that mice can find a reward based on the presence or absence of the silencing. Neuronal silencing was stored as working memory. Taken together, we propose that neuronal silencing without explicit activation in the cerebral cortex is enough to elicit a cognitive behavior.
Tomoyo Isoguchi Shiramatsu
Cortical information processing of the onset, offset, and continuous plateau of an acoustic stimulus should play an important role in acoustic object perception. To date, transient activities responding to the onset and offset of a sound have been well investigated, and cortical subfields and topographic representation in these subfields, such as the place code of sound frequency, have been well characterized. However, whether these cortical subfields with tonotopic representation are inherited in the sustained activities that follow transient activities and persist during the presentation of a long-lasting stimulus remains unknown, because sustained activities do not exhibit distinct, reproducible, time-locked amplitude responses that could be characterized by grand averaging. To address this gap in understanding, we attempted to decode sound information from densely mapped sustained activities in the rat auditory cortex using a sparse parameter estimation method called sparse logistic regression (SLR), and investigated whether and how these activities represent sound information. A microelectrode array with a grid of 10 × 10 recording sites within an area of 4.0 × 4.0 mm² was implanted in the fourth layer of the auditory cortex in rats under isoflurane anesthesia. Sustained activities in response to long-lasting constant pure tones were recorded. SLR then was applied to discriminate the sound-induced band-specific power or phase-locking value from those of spontaneous activities. The highest decoding performance was achieved in the high-gamma band, indicating that cortical inhibitory interneurons may contribute to the sparse tonotopic representation in sustained activities by mediating synchronous activities. The estimated parameter in the SLR decoding revealed that the informative recording site had a characteristic frequency close to the test frequency.
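The sparse-decoding step described above can be sketched with an L1-penalized logistic regression, one standard way to implement SLR-style feature selection; the synthetic "high-gamma power" matrix, trial counts, and informative sites below are illustrative stand-ins, not the study's recordings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 trials x 100 recording sites of high-gamma power.
# Only sites 3 and 7 carry information about the stimulus condition.
n_trials, n_sites = 200, 100
labels = rng.integers(0, 2, n_trials)      # 0 = spontaneous, 1 = tone-driven
power = rng.normal(size=(n_trials, n_sites))
power[:, 3] += 2.0 * labels                # informative sites (assumed)
power[:, 7] += 2.0 * labels

# Sparse (L1) logistic regression: most weights are driven to exactly zero,
# so the surviving nonzero weights identify the informative sites.
slr = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
slr.fit(power, labels)

informative = np.flatnonzero(slr.coef_[0])
print("nonzero-weight sites:", informative)
```

Because the L1 penalty zeroes out uninformative weights, the surviving coefficients point to the recording sites that carry stimulus information, mirroring how the informative sites in the study had characteristic frequencies near the test frequency.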
Carrasco, Andres; Lomber, Stephen G
Sensory information is encoded by cortical neurons in the form of discharge timing and firing rate. These neuronal codes generate response patterns across cell assemblies that are crucial to various cognitive functions. Although much is known about structural and cognitive factors involved in the generation of synchronous neuronal responses, such as stimulus context, attention, age, cortical depth, sensory experience, and receptive field properties, the influence of cortico-cortical connectivity on the emergence of neuronal response patterns remains poorly understood. The present investigation assesses the role of cortico-cortical connectivity in the modulation of neuronal discharge synchrony across auditory cortex cell-assemblies. Acute single-unit recording techniques in combination with reversible cooling deactivation procedures were used in the domestic cat (Felis catus). Recording electrodes were positioned across primary and non-primary auditory fields and neuronal activity was measured before, during, and after synaptic deactivation of adjacent cortical regions in the presence of acoustic stimulation. Cross-correlation functions of simultaneously recorded units were generated and changes in response synchrony levels across cooling conditions were measured. Data analyses revealed significant decreases in response time coincidences between cortical neurons during periods of cortical deactivation. Collectively, the results of the present investigation demonstrate that cortical neurons participate in the modulation of response synchrony levels across neuronal assemblies of primary and non-primary auditory fields. Copyright © 2013 Elsevier B.V. All rights reserved.
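The synchrony measure described above can be sketched as a spike-train cross-correlogram. The binned spike trains below are synthetic, and the shift-and-count estimator is a generic version, not the exact analysis pipeline of the study:

```python
import numpy as np

def cross_correlogram(a, b, max_lag):
    """Coincidence counts between binned spike trains a and b
    at every lag from -max_lag to +max_lag (in bins)."""
    lags = np.arange(-max_lag, max_lag + 1)
    counts = np.array([np.sum(a[max(0, -k):len(a) - max(0, k)] *
                              b[max(0, k):len(b) - max(0, -k)])
                       for k in lags])
    return lags, counts

rng = np.random.default_rng(1)
n_bins = 5000
unit_a = (rng.random(n_bins) < 0.05).astype(int)  # ~5% spike prob. per bin
unit_b = np.roll(unit_a, 3)                       # B fires 3 bins after A

lags, cc = cross_correlogram(unit_a, unit_b, max_lag=10)
print("peak lag (bins):", lags[np.argmax(cc)])
```

A peak in the correlogram at a consistent lag indicates coincident firing between the two units; in the study, the height of such peaks dropped when adjacent cortical regions were cooled.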
The aim of this article is to present a systematic review of the anatomy, function, connectivity, and functional activation of the primary auditory cortex (PAC; Brodmann areas 41/42) when involved in language paradigms. PAC activates with a plethora of diverse basic stimuli including, but not limited to, tones, chords, natural sounds, consonants, and speech. Nonetheless, the PAC shows specific sensitivity to speech. Damage in the PAC is associated with so-called “pure word-deafness” (“auditory verbal agnosia”). BA41, and to a lesser extent BA42, are involved in early stages of phonological processing (phoneme recognition). Phonological processing may take place in either the right or left side, but customarily the left exerts an inhibitory tone over the right, gaining dominance in function. BA41/42 are primary auditory cortices harboring complex phoneme perception functions with asymmetrical expression, making it possible to include them as core language processing areas (Wernicke’s area).
Campbell, Robert A A; Schnupp, Jan W H; Shial, Akhil; King, Andrew J
Many previous studies have subdivided auditory neurons into a number of physiological classes according to various criteria applied to their binaural response properties. However, it is often unclear whether such classifications represent discrete classes of neurons or whether they merely reflect a potentially convenient but ultimately arbitrary partitioning of a continuous underlying distribution of response properties. In this study we recorded the binaural response properties of 310 units in the auditory cortex of anesthetized ferrets, using an extensive range of interaural level differences (ILDs) and average binaural levels (ABLs). Most recordings were from primary auditory fields on the middle ectosylvian gyrus and from neurons with characteristic frequencies >5 kHz. We used simple multivariate statistics to quantify a fundamental coding feature: the shapes of the binaural response functions. The shapes of all 310 binaural response surfaces were represented as points in a five-dimensional principal component space. This space captured the underlying shape of all the binaural response surfaces. The distribution of binaural level functions was not homogeneous because some shapes were more common than others. Despite this, clustering validation techniques revealed no evidence for the existence of discrete, or partially overlapping, clusters that could serve as a basis for an objective classification of binaural-level functions. We also examined the gradients of the response functions for the population of units; these gradients were greatest near the midline, which is consistent with free-field data showing that cortical neurons are most sensitive to changes in stimulus location in this region of space.
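The dimensionality-reduction step described above can be sketched with a plain SVD-based PCA. The "response surfaces" below are synthetic stand-ins built from a few prototype shapes, and the 9 × 5 ILD-by-ABL grid is an assumed size, not the study's actual sampling:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 310 units, each with a 9 ILD x 5 ABL response surface
# built from a few underlying prototype shapes plus noise, flattened to a
# 45-dimensional vector per unit.
n_units, n_ild, n_abl = 310, 9, 5
prototypes = rng.normal(size=(3, n_ild * n_abl))
weights = rng.normal(size=(n_units, 3))
surfaces = weights @ prototypes + 0.1 * rng.normal(size=(n_units, n_ild * n_abl))

# PCA by SVD of the mean-centered data matrix; keeping five components
# turns every unit's surface into a point in a 5-D shape space.
centered = surfaces - surfaces.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:5].T
explained = (s[:5] ** 2).sum() / (s ** 2).sum()
print(scores.shape, round(float(explained), 2))
```

A continuous, non-clustered spread of the resulting 5-D points would correspond to the paper's finding that binaural response shapes form a continuum rather than discrete classes.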
Schwartz, Zachary P; David, Stephen V
Auditory selective attention is required for parsing crowded acoustic environments, but cortical systems mediating the influence of behavioral state on auditory perception are not well characterized. Previous neurophysiological studies suggest that attention produces a general enhancement of neural responses to important target sounds versus irrelevant distractors. However, behavioral studies suggest that in the presence of masking noise, attention provides a focal suppression of distractors that compete with targets. Here, we compared effects of attention on cortical responses to masking versus non-masking distractors, controlling for effects of listening effort and general task engagement. We recorded single-unit activity from primary auditory cortex (A1) of ferrets during behavior and found that selective attention decreased responses to distractors masking targets in the same spectral band, compared with spectrally distinct distractors. This suppression enhanced neural target detection thresholds, suggesting that limited attention resources serve to focally suppress responses to distractors that interfere with target detection. Changing effort by manipulating target salience consistently modulated spontaneous but not evoked activity. Task engagement and changing effort tended to affect the same neurons, while attention affected an independent population, suggesting that distinct feedback circuits mediate effects of attention and effort in A1. © The Author 2017. Published by Oxford University Press.
Engineer, Crystal T; Shetake, Jai A; Engineer, Navzer D; Vrana, Will A; Wolf, Jordan T; Kilgard, Michael P
Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. Copyright © 2017 Elsevier Inc. All rights reserved.
Tinnitus is the perception of a sound in the absence of an external acoustic source, which often exerts a significant impact on quality of life. There is currently evidence that neuroplastic changes in neural pathways are involved in the generation and maintenance of tinnitus, and neuromodulation has been suggested to interfere with these alterations. In this study we aimed to compare the effect of two upcoming forms of transcranial electrical neuromodulation, alternating current stimulation (tACS) and random noise stimulation (tRNS), both applied over the auditory cortex. A database of 228 patients with chronic tinnitus who underwent noninvasive neuromodulation was retrospectively analyzed. The results of this study show that a single session of tRNS induces a significant suppressive effect on tinnitus loudness and distress, in contrast to tACS. Multiple sessions of tRNS augment the suppressive effect on tinnitus loudness but have no effect on tinnitus distress. In conclusion, this preliminary study shows a possibly beneficial effect of tRNS on tinnitus and may motivate future randomized placebo-controlled clinical studies with auditory tRNS for tinnitus. Auditory alpha-modulated tACS does not seem to contribute to the treatment of tinnitus.
Arne Freerk Meyer
Temporal variability of neuronal response characteristics during sensory stimulation is a ubiquitous phenomenon that may reflect processes such as stimulus-driven adaptation, top-down modulation, or spontaneous fluctuations. It poses a challenge to functional characterization methods such as the receptive field, since these often assume stationarity. We propose a novel method for estimating sensory neurons' receptive fields that extends the classic static linear receptive field model to the time-varying case. Here, the long-term estimate of the static receptive field serves as the mean of a probabilistic prior distribution from which the short-term, temporally localized receptive field may deviate stochastically with time-varying standard deviation. The corresponding generalized linear model permits robust characterization of temporal variability in receptive field structure, even for highly non-Gaussian stimulus ensembles. We computed and analyzed short-term auditory spectro-temporal receptive field (STRF) estimates with a characteristic temporal resolution of 5 s to 30 s, based on model simulations and responses from a total of 60 single-unit recordings in anesthetized Mongolian gerbil auditory midbrain and cortex. Stimulation was performed with short (100 ms), overlapping frequency-modulated tones. Results demonstrate identification of time-varying STRFs, with obtained predictive model likelihoods exceeding those from baseline static STRF estimation. Quantitative characterization reveals a higher degree of STRF variability in auditory cortex than in midbrain. Cluster analysis indicates that significant deviations from the long-term static STRF are brief, but reliably estimated. We hypothesize that the observed variability more likely reflects spontaneous or state-dependent internal fluctuations that interact with stimulus-induced processing, rather than experimental or stimulus design.
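The prior-anchored estimation idea can be sketched, for a Gaussian linear special case rather than the paper's full generalized linear model, as ridge regression shrunk toward a prior mean: the long-term STRF estimate w0 acts as the mean of the prior from which the short-term estimate may deviate. All dimensions, sample counts, and noise levels below are illustrative assumptions:

```python
import numpy as np

def filter_with_prior(X, y, w0, lam):
    """MAP estimate of a linear filter with Gaussian prior mean w0:
    minimizes ||y - X w||^2 + lam * ||w - w0||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

rng = np.random.default_rng(3)
d, n_long, n_short = 20, 2000, 40

w_true = rng.normal(size=d)                    # long-term ("static") STRF
w_drift = w_true + 0.3 * rng.normal(size=d)    # short-term, drifted STRF

# Long-term estimate from plenty of data becomes the prior mean w0.
X_long = rng.normal(size=(n_long, d))
y_long = X_long @ w_true + 0.1 * rng.normal(size=n_long)
w0 = filter_with_prior(X_long, y_long, np.zeros(d), lam=1.0)

# Short-term window: few samples, so shrinkage toward w0 keeps the
# temporally localized estimate stable while still tracking the drift.
X_short = rng.normal(size=(n_short, d))
y_short = X_short @ w_drift + 0.1 * rng.normal(size=n_short)
w_short = filter_with_prior(X_short, y_short, w0, lam=5.0)

print("error vs. drifted filter:", float(np.linalg.norm(w_short - w_drift)))
```

The short-term estimate ends up closer to the drifted filter than the static long-term estimate is, which is the sense in which the time-varying model should out-predict the baseline static STRF.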
Harmonic sounds, such as voiced speech sounds and many animal communication signals, are characterized by a pitch related to the periodicity of their envelopes. While frequency information is extracted by mechanical filtering of the cochlea, periodicity information is analyzed by temporal filter mechanisms in the brainstem. In the mammalian auditory midbrain, envelope periodicity is represented in maps orthogonal to the representation of sound frequency. However, how periodicity is represented across the cortical surface of primary auditory cortex remains controversial. Using optical recording of intrinsic signals, we here demonstrate that a periodicity map exists in primary auditory cortex (AI) of the cat. While pure tone stimulation confirmed the well-known frequency gradient along the rostro-caudal axis of AI, stimulation with harmonic sounds revealed segregated bands of activation, indicating spatially localized preferences to specific periodicities along a dorso-ventral axis, nearly orthogonal to the tonotopic gradient. Analysis of the response locations revealed an average gradient of -100° ± 10° for the periodotopic map and -12° ± 18° for the tonotopic map, resulting in a mean angle difference of 88°. The gradients were 0.65 ± 0.08 mm/octave for periodotopy and 1.07 ± 0.16 mm/octave for tonotopy, indicating that more cortical territory is devoted to the representation of an octave along the tonotopic than along the periodotopic gradient. Our results suggest that the fundamental importance of pitch, as evident in human perception, is also reflected in the layout of cortical maps and that the orthogonal spatial organization of frequency and periodicity might be a more general cortical organization principle.
Moore, R. Channing; Lee, Tyler; Theunissen, Frédéric E.
Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex. PMID:23505354
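A minimal sketch of spectrogram soft-masking, one generic way to build such a de-noising step (not the authors' modulation-tuned algorithm): time-frequency bins that stand well above an estimated noise floor are kept, the rest are attenuated, which favors long, spectrally sharp components like the one in this toy example. The spectrogram sizes and the signal-free opening segment are assumptions for illustration:

```python
import numpy as np

def soft_mask_denoise(spec, noise_floor):
    """Wiener-style soft mask: pass bins well above the noise floor,
    attenuate the rest (a generic masking step, not the paper's filter)."""
    snr = spec / (noise_floor + 1e-12)
    return spec * snr**2 / (1.0 + snr**2)

rng = np.random.default_rng(4)

# Toy magnitude spectrogram: 64 frequency bins x 100 time frames of noise,
# plus one long, spectrally sharp component (row 10) starting at frame 20.
noisy = np.abs(rng.normal(size=(64, 100)))
noisy[10, 20:] += 8.0

# Estimate the per-frequency noise floor from the signal-free opening frames.
floor = noisy[:, :20].mean(axis=1, keepdims=True)
denoised = soft_mask_denoise(noisy, floor)
```

Sustained, narrowband energy survives the mask almost untouched while broadband background is suppressed, which is analogous to how the neurons' selectivity for long sounds with sharp spectral structure supports noise invariance.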
Lee, Chen-Chung; Middlebrooks, John C
Cortical deactivation studies in cats have implicated the primary auditory cortex (A1), the dorsal zone (DZ), and the posterior auditory field (PAF) in sound localization behavior, and physiological studies in anesthetized conditions have demonstrated clear differences in spatial sensitivity among those areas. We trained cats to perform two listening tasks and then recorded from cortical neurons in the off-task condition and in both on-task conditions during single recording sessions. The results confirmed some of the results from anesthetized conditions and revealed unexpected differences. Neurons in each field showed a variety of firing patterns, including onset-only, complex onset and long latency, and suppression or offset. A substantial minority of units showed sharpening of spatial sensitivity, particularly that of onset responses, during task performance: 44%, 35%, and 31% of units in areas A1, DZ, and PAF, respectively, showed significant spatial sharpening. Field DZ was distinguished by a larger percentage of neurons responding best to near-midline locations, whereas the spatial preferences of PAF neurons were distributed more uniformly throughout the contralateral hemifield. Those directional biases also were evident in measures of the accuracy with which neural spike patterns could signal sound locations. Field DZ provided the greatest accuracy for midline locations. The location dependence of accuracy in PAF was orthogonal to that of DZ, with the greatest accuracy for lateral locations. The results suggest a view of spatial representation in the auditory cortex in which DZ exhibits an overrepresentation of the frontal areas around the midline, whereas PAF provides a more uniform representation of contralateral space, including areas behind the head. Spatial preferences of area A1 neurons were intermediate between those of DZ and PAF, sharpening as needed for localization tasks.
Razak, Khaleel A
Different fields of the auditory cortex can be distinguished by the extent and level tolerance of spatial selectivity. The mechanisms underlying the range of spatial tuning properties observed across cortical fields are unclear. Here, this issue was addressed in the pallid bat because its auditory cortex contains two segregated regions of response selectivity that serve two different behaviors: echolocation for obstacle avoidance and localization of prey-generated noise. This provides the unique opportunity to examine mechanisms of spatial properties in two functionally distinct regions. Previous studies have shown that spatial selectivity of neurons in the region selective for noise (noise-selective region, NSR) is level tolerant and shaped by interaural level difference (ILD) selectivity. In contrast, spatial selectivity of neurons in the echolocation region ('FM sweep-selective region' or FMSR) is strongly level dependent with many neurons responding to multiple distinct spatial locations for louder sounds. To determine the mechanisms underlying such level dependence, frequency, azimuth, rate-level responses and ILD selectivity were measured from the same FMSR neurons. The majority (∼75%) of FMSR neurons were monaural (ILD insensitive). Azimuth tuning curves expanded or split into multiple peaks with increasing sound level in a manner that was predicted by the rate-level response of neurons. These data suggest that azimuth selectivity of FMSR neurons depends more on monaural ear directionality and rate-level responses. The pallid bat cortex utilizes segregated monaural and binaural regions to process echoes and prey-generated noise. Together the pallid bat FMSR/NSR data provide mechanistic explanations for a broad range of spatial tuning properties seen across species. Copyright © 2016 Elsevier B.V. All rights reserved.
Moerel, Michelle; De Martino, Federico; Formisano, Elia
Auditory cortical processing of complex meaningful sounds entails the transformation of sensory (tonotopic) representations of incoming acoustic waveforms into higher-level sound representations (e.g., their category). However, the precise neural mechanisms enabling such transformations remain largely unknown. In the present study, we use functional magnetic resonance imaging (fMRI) and natural sounds stimulation to examine these two levels of sound representation (and their relation) in the human auditory cortex. In a first experiment, we derive cortical maps of frequency preference (tonotopy) and selectivity (tuning width) by mathematical modeling of fMRI responses to natural sounds. The tuning width maps highlight a region of narrow tuning that follows the main axis of Heschl's gyrus and is flanked by regions of broader tuning. The narrowly tuned portion on Heschl's gyrus contains two mirror-symmetric frequency gradients, presumably defining two distinct primary auditory areas. In addition, our analysis indicates that spectral preference and selectivity (and their topographical organization) extend well beyond the primary regions and also cover higher-order and category-selective auditory regions. In particular, regions with preferential responses to human voice and speech occupy the low-frequency portions of the tonotopic map. We confirm this observation in a second experiment, where we find that speech/voice selective regions exhibit a response bias toward the low frequencies characteristic of human voice and speech, even when responding to simple tones. We propose that this frequency bias reflects the selective amplification of relevant and category-characteristic spectral bands, a useful processing step for transforming a sensory (tonotopic) sound image into higher level neural representations.
Martin del Campo, H N; Measor, K R; Razak, K A
Age-related hearing loss (presbycusis) affects ∼35% of humans older than 65 years. Symptoms of presbycusis include impaired discrimination of sounds with fast temporal features, such as those present in speech. Such symptoms likely arise because of central auditory system plasticity, but the underlying components are incompletely characterized. The rapid spiking inhibitory interneurons that co-express the calcium binding protein Parvalbumin (PV) are involved in shaping neural responses to fast spectrotemporal modulations. Here, we examined cortical PV expression in the C57BL/6 (C57) mouse, a strain commonly studied as a presbycusis model. We examined if PV expression showed auditory cortical field- and layer-specific susceptibilities with age. The percentage of PV-expressing cells relative to Nissl-stained cells was counted in the anterior auditory field (AAF) and primary auditory cortex (A1) in three age groups: young (1-2 months), middle-aged (6-8 months) and old (14-20 months). There were significant declines in the percentage of cells expressing PV at a detectable level in layers I-IV of both A1 and AAF in the old mice compared to young mice. In layers V-VI, there was an increase in the percentage of PV-expressing cells in the AAF of the old group. There were no changes in percentage of PV-expressing cells in layers V-VI of A1. These data suggest cortical layer(s)- and field-specific susceptibility of PV+ cells with presbycusis. The results are consistent with the hypothesis that a decline in inhibitory neurotransmission, particularly in the superficial cortical layers, occurs with presbycusis. Copyright © 2012 Elsevier B.V. All rights reserved.
Nordmark, Per F; Pruszynski, J Andrew; Johansson, Roland S
Although some brain areas preferentially process information from a particular sensory modality, these areas can also respond to other modalities. Here we used fMRI to show that such responsiveness to tactile stimuli depends on the temporal frequency of stimulation. Participants performed a tactile threshold-tracking task where the tip of either their left or right middle finger was stimulated at 3, 20, or 100 Hz. Whole-brain analysis revealed an effect of stimulus frequency in two regions: the auditory cortex and the visual cortex. The BOLD response in the auditory cortex was stronger during stimulation at hearable frequencies (20 and 100 Hz) whereas the response in the visual cortex was suppressed at infrasonic frequencies (3 Hz). Regardless of which hand was stimulated, the frequency-dependent effects were lateralized to the left auditory cortex and the right visual cortex. Furthermore, the frequency-dependent effects in both areas were abolished when the participants performed a visual task while receiving identical tactile stimulation as in the tactile threshold-tracking task. We interpret these findings in the context of the metamodal theory of brain function, which posits that brain areas contribute to sensory processing by performing specific computations regardless of input modality.
Lamas, Verónica; Alvarado, Juan C.; Carro, Juan; Merchán, Miguel A.
Introduction: This study aimed to assess the top-down control of sound processing in the auditory brainstem of rats. Short latency evoked responses were analyzed after unilateral or bilateral ablation of auditory cortex. This experimental paradigm was also used to analyze the long-term evolution of post-lesion plasticity in the auditory system and its ability to self-repair. Method: Auditory cortex lesions were performed in rats by stereotactically guided fine-needle aspiration of the cerebrocortical surface. Auditory Brainstem Responses (ABR) were recorded at post-surgery day (PSD) 1, 7, 15 and 30. Recordings were performed under closed-field conditions, using click trains at different sound intensity levels, followed by statistical analysis of threshold values and ABR amplitude and latency variables. Subsequently, brains were sectioned and immunostained for GAD and parvalbumin to assess the location and extent of lesions accurately. Results: Alterations in ABR variables depended on the type of lesion and the post-surgery time of ABR recordings. Accordingly, bilateral ablations caused a statistically significant increase in thresholds at PSD1 and 7 and a decrease in wave amplitudes at PSD1 that recovered by PSD7. No effects on latency were noted at PSD1 and 7, whilst recordings at PSD15 and 30 showed statistically significant decreases in latency. Conversely, unilateral ablations had no effect on auditory thresholds or latencies, while wave amplitudes decreased at PSD1 only in the ipsilateral ear. Conclusion: Post-lesion plasticity in the auditory system acts in two time periods: a short-term period of decreased sound sensitivity (until PSD7), most likely resulting from axonal degeneration; and a long-term period (beyond PSD7), with changes in latency responses and recovery of threshold and amplitude values. The cerebral cortex may have a net positive gain on the auditory pathway response to sound. PMID:24066057
Rao, Deepti; Basura, Gregory J; Roche, Joseph; Daniels, Scott; Mancilla, Jaime G; Manis, Paul B
Sensorineural hearing loss during early childhood alters auditory cortical evoked potentials in humans and profoundly changes auditory processing in hearing-impaired animals. Multiple mechanisms underlie the early postnatal establishment of cortical circuits, but one important set of developmental mechanisms relies on the neuromodulator serotonin (5-hydroxytryptamine [5-HT]). On the other hand, early sensory activity may also regulate the establishment of adultlike 5-HT receptor expression and function. We examined the role of 5-HT in auditory cortex by first investigating how 5-HT neurotransmission and 5-HT(2) receptors influence the intrinsic excitability of layer II/III pyramidal neurons in brain slices of primary auditory cortex (A1). A brief application of 5-HT (50 μM) transiently and reversibly decreased firing rates, input resistance, and spike rate adaptation in normal postnatal day 12 (P12) to P21 rats. Compared with sham-operated animals, cochlear ablation increased excitability at P12-P21, but all the effects of 5-HT, except for the decrease in adaptation, were eliminated in both sham-operated and cochlear-ablated rats. At P30-P35, cochlear ablation did not increase intrinsic excitability compared with shams, but it did prevent a pronounced decrease in excitability that appeared 10 min after 5-HT application. We also tested whether the effects on excitability were mediated by 5-HT(2) receptors. In the presence of the 5-HT(2)-receptor antagonist, ketanserin, 5-HT significantly decreased excitability compared with 5-HT or ketanserin alone in both sham-operated and cochlear-ablated P12-P21 rats. However, at P30-P35, ketanserin had no effect in sham-operated and only a modest effect in cochlear-ablated animals. The 5-HT(2)-specific agonist 5-methoxy-N,N-dimethyltryptamine also had no effect at P12-P21. These results suggest that 5-HT likely regulates pyramidal cell excitability via multiple receptor subtypes with opposing effects. These data also show that
Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S
Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used magnetoencephalography to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; N=19) and older (mean: 64 years; N=20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long inter-stimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared to younger people: in the older group, neural responses continued to be sensitive to sound level under conditions where responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty older adults have with filtering out irrelevant sensory information. Significance statement: Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used magnetoencephalography to investigate how aging influences adaptation to sound-level statistics. Listeners were presented
Moyer, Caitlin E; Erickson, Susan L; Fish, Kenneth N; Thiels, Edda; Penzes, Peter; Sweet, Robert A
Cortical excitatory and inhibitory synapses are disrupted in schizophrenia, the symptoms of which often emerge during adolescence, when cortical excitatory synapses undergo pruning. In auditory cortex, a brain region implicated in schizophrenia, little is known about the development of excitatory and inhibitory synapses between early adolescence and young adulthood, and how these changes impact auditory cortex function. We used immunohistochemistry and quantitative fluorescence microscopy to quantify dendritic spines and GAD65-expressing inhibitory boutons in auditory cortex of early adolescent, late adolescent, and young adult mice. Numbers of spines decreased between early adolescence and young adulthood, during which time responses increased in an auditory cortex-dependent sensory task, silent gap-prepulse inhibition of the acoustic startle reflex (gap-PPI). Within-bouton GAD65 protein and GAD65-expressing bouton numbers decreased between late adolescence and young adulthood, a delay in onset relative to spine and gap-PPI changes. In mice lacking the spine protein kalirin, there were no significant changes in spine number, within-bouton GAD65 protein, or gap-PPI between adolescence and young adulthood. These results illustrate developmental changes in auditory cortex spines, inhibitory boutons, and auditory cortex function between adolescence and young adulthood, and provide insights into how disrupted adolescent neurodevelopment could contribute to auditory cortex synapse pathology and auditory impairments. © The Author 2015. Published by Oxford University Press. All rights reserved.
Feenstra, M. G.; Vogel, M.; Botterblom, M. H.; Joosten, R. N.; de Bruin, J. P.
We used bilateral microdialysis in the medial prefrontal cortex (PFC) of awake, freely moving rats to study aversive conditioning to an auditory cue in the controlled environment of the Skinner box. The presentation of the explicit conditioned stimuli (CS), previously associated with foot shocks,
Man, WH; Madeira, Caroline; Zhou, Xiaoming; Merzenich, Michael M; Panizzutti, Rogerio
Tumor necrosis factor- alpha (TNF-α) is likely to play a role in brain plasticity. To determine whether TNF-α levels change throughout a critical period of experience-dependent brain plasticity, we assessed these levels in the primary auditory cortex of rats before, during and after the critical
Dustin H Brewton
Age-related changes in inhibitory neurotransmission in sensory cortex may underlie deficits in sensory function. Perineuronal nets (PNNs) are extracellular matrix components that ensheath some inhibitory neurons, particularly parvalbumin-positive (PV+) interneurons. PNNs may protect PV+ cells from oxidative stress and help establish their rapid spiking properties. Although PNN expression has been well characterized during development, possible changes in aging sensory cortex have not been investigated. Here we tested the hypothesis that PNN+, PV+ and PV/PNN co-localized cell densities decline with age in the primary auditory cortex (A1). This hypothesis was tested using immunohistochemistry in two strains of mice (C57BL/6 and CBA/CaJ) with different susceptibility to age-related hearing loss and at three different age ranges (1-3, 6-8 and 14-24 months old). We report that PNN+ and PV/PNN co-localized cell densities decline significantly with age in A1 in both mouse strains. In the PNN+ cells that remain in the old group, the intensity of PNN staining is reduced in the C57 strain, but not the CBA strain. PV+ cell density also declines only in the C57, but not the CBA, mouse suggesting a potential exacerbation of age-effects by hearing loss in the PV/PNN system. Taken together, these data suggest that PNN deterioration may be a key component of altered inhibition in the aging sensory cortex, that may lead to altered synaptic function, susceptibility to oxidative stress and processing deficits.
GABAergic activity is important in neocortical development and plasticity. Because the maturation of GABAergic interneurons is regulated by neural activity, the source of excitatory inputs to GABAergic interneurons plays a key role in development. We show, by laser-scanning photostimulation, that layer 4 and layer 5 GABAergic interneurons in the auditory cortex in neonatal mice (
Downer, Joshua D; Rapone, Brittany; Verhein, Jessica; O'Connor, Kevin N; Sutter, Mitchell L
Sensory environments often contain an overwhelming amount of information, with both relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of populations of neurons to support this process is poorly understood because population coding is typically studied through simulations in which one sensory feature is encoded without competition. Therefore, to study the effects of feature attention on population-based neural coding, investigations must be extended to include stimuli with both relevant and irrelevant features. We measured noise correlations (rnoise) within small neural populations in primary auditory cortex while rhesus macaques performed a novel feature-selective attention task. We found that the effect of feature-selective attention on rnoise depended not only on the population tuning to the attended feature, but also on the tuning to the distractor feature. To attempt to explain how these observed effects might support enhanced perceptual performance, we propose an extension of a simple and influential model in which shifts in rnoise can simultaneously enhance the representation of the attended feature while suppressing the distractor. These findings present a novel mechanism by which attention modulates neural populations to support sensory processing in cluttered environments. SIGNIFICANCE STATEMENT: Although feature-selective attention constitutes one of the building blocks of listening in natural environments, its neural bases remain obscure. To address this, we developed a novel auditory feature-selective attention task and measured noise correlations (rnoise) in rhesus macaque A1 during task performance. Unlike previous studies showing that the effect of attention on rnoise depends on population tuning to the attended feature, we show that the effect of attention depends on the tuning to the
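The noise-correlation measure central to this abstract can be made concrete: rnoise is conventionally the Pearson correlation of trial-to-trial spike-count fluctuations between a pair of simultaneously recorded neurons, computed after removing stimulus-driven covariation. A minimal sketch follows; the within-condition z-scoring step is a standard convention and not necessarily the authors' exact pipeline:

```python
import numpy as np

def noise_correlation(counts_a, counts_b, stimulus_ids):
    """Pearson correlation of trial-to-trial response fluctuations (rnoise).

    counts_a, counts_b : spike counts per trial for two neurons
    stimulus_ids       : stimulus condition label for each trial
    Counts are z-scored within each stimulus condition first, so that
    stimulus-driven (signal) covariation does not contribute.
    """
    a = np.asarray(counts_a, dtype=float)
    b = np.asarray(counts_b, dtype=float)
    labels = np.asarray(stimulus_ids)
    za, zb = np.empty_like(a), np.empty_like(b)
    for s in np.unique(labels):
        idx = labels == s
        za[idx] = (a[idx] - a[idx].mean()) / (a[idx].std() or 1.0)
        zb[idx] = (b[idx] - b[idx].mean()) / (b[idx].std() or 1.0)
    return float(np.corrcoef(za, zb)[0, 1])
```

Two neurons whose counts fluctuate together across repeats of the same stimulus yield rnoise near +1; anti-correlated fluctuations yield values near -1.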
Zhang, J; Nakamoto, K T; Kitzes, L M
In the natural acoustic environment sounds frequently arrive at the two ears in quick succession. The responses of a cortical neuron to acoustic stimuli can be dramatically altered, usually suppressed, by a preceding sound. The purpose of this study was to determine if the binaural interaction evoked by a preceding sound is involved in subsequent suppressive interactions observed in auditory cortex neurons. Responses of neurons in the primary auditory cortex (AI) exhibiting binaural suppressive interactions (EO/I) were studied in barbiturate-anesthetized cats. For the majority (72.5%) of EO/I neurons studied, the response to a monaural contralateral stimulus was suppressed by a preceding monaural contralateral stimulus, but was not changed by a preceding monaural ipsilateral stimulus. For this subset of EO/I neurons, when a monaural contralateral stimulus was preceded by a binaural stimulus, the level of both the ipsilateral and the contralateral component of the binaural stimulus influenced the response to the subsequent monaural contralateral stimulus. When the contralateral level of the binaural stimulus was constant, increasing its ipsilateral level decreased the suppression of the response to the subsequent monaural contralateral stimulus. When the ipsilateral level of the binaural stimulus was constant, increasing its contralateral level increased the suppression of the response to the subsequent monaural contralateral stimulus. These results demonstrate that the sequential inhibition of responses of AI neurons is a function of the product of a preceding binaural interaction. The magnitude of the response to the contralateral stimulus is related to, but not determined by the magnitude of the response to the preceding binaural stimulus. Possible mechanisms of this sequential interaction are discussed.
Skipper, Jeremy I
What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we 'hear' during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds.
Gutschalk, Alexander; Uppenkamp, Stefan
Several studies have shown enhancement of auditory evoked sustained responses for periodic over non-periodic sounds and for vowels over non-vowels. Here, we directly compared pitch and vowels using synthesized speech with a "damped" amplitude modulation. These stimuli were parametrically varied to yield four classes of matched stimuli: (1) periodic vowels, (2) non-periodic vowels, (3) periodic non-vowels, and (4) non-periodic non-vowels. Twelve listeners were studied with combined MEG and EEG. Sustained responses were reliably enhanced for vowels and periodicity. Dipole source analysis revealed that a vowel contrast (vowel-non-vowel) and the periodicity-pitch contrast (periodic-non-periodic) mapped to the same site in antero-lateral Heschl's gyrus. In contrast, the non-periodic, non-vowel condition mapped to a more medial and posterior site. The sustained enhancement for vowels was significantly more prominent when the vowel identity was varied, compared to a condition where only one vowel was repeated, indicating selective adaptation of the response. These results render it unlikely that there are spatially distinct fields for vowel and pitch processing in the auditory cortex. However, the common processing of vowels and pitch raises the possibility that there is an early speech-specific field in Heschl's gyrus. Copyright © 2011 Elsevier Inc. All rights reserved.
Chen, Xianming; Wang, Maoxin; Deng, Yihong; Liang, Yonghui; Li, Jianzhong; Chen, Shiyan
Contralateral temporal lobe activation decreases with aging, regardless of hearing status, with elderly individuals showing reduced right ear advantage. Aging and hearing loss possibly lead to presbycusis speech discrimination decline. To evaluate presbycusis patients' auditory cortex activation under verbal stimulation. Thirty-six patients were enrolled: 10 presbycusis patients (mean age = 64 years, range = 60-70), 10 in the healthy aged group (mean age = 66 years, range = 60-70), and 16 young healthy volunteers (mean age = 25 years, range = 23-28). These three groups underwent simultaneous 1 kHz and 90 dB single-syllable word stimuli and blood-oxygen-level-dependent functional magnetic resonance imaging (BOLD fMRI) examinations. The main activation regions were superior temporal and middle temporal gyrus. For all aged subjects, the right region of interest (ROI) activation volume was decreased compared with the young group. With left ear stimulation, bilateral ROI activation intensity was maintained. With right ear stimulation, the aged group's activation intensity was higher. Using monaural stimulation in the young group, contralateral temporal lobe activation volume and intensity were higher vs ipsilateral, while they were lower in the aged and presbycusis groups. On left and right ear auditory tasks, the young group showed right ear advantage, while the aged and presbycusis groups showed reduced right ear advantage.
Gaucher, Quentin; Edeline, Jean-Marc
Many studies have described the action of Noradrenaline (NA) on the properties of cortical receptive fields, but none has assessed how NA affects the discrimination abilities of cortical cells between natural stimuli. In the present study, we compared the consequences of NA topical application on spectro-temporal receptive fields (STRFs) and responses to communication sounds in the primary auditory cortex. NA application reduced the STRFs (an effect replicated by the alpha1 agonist Phenylephrine) but did not change, on average, the responses to communication sounds. For cells exhibiting increased evoked responses during NA application, the discrimination abilities were enhanced as quantified by Mutual Information. The changes induced by NA on parameters extracted from the STRFs and from responses to communication sounds were not related. The alterations exerted by neuromodulators on neuronal selectivity have been the topic of a vast literature in the visual, somatosensory, auditory and olfactory cortices. However, very few studies have investigated to what extent the effects observed when testing these functional properties with artificial stimuli can be transferred to responses evoked by natural stimuli. Here, we tested the effect of noradrenaline (NA) application on the responses to pure tones and communication sounds in the guinea-pig primary auditory cortex. When pure tones were used to assess the spectro-temporal receptive field (STRF) of cortical cells, NA triggered a transient reduction of the STRFs in both the spectral and the temporal domain, an effect replicated by the α1 agonist phenylephrine whereas α2 and β agonists induced STRF expansion. When tested with communication sounds, NA application did not produce significant effects on the firing rate and spike timing reliability, despite the fact that α1, α2 and β agonists by themselves had significant effects on these measures. However, the cells whose evoked responses were increased by NA
Gilbert, Heather J; Shackleton, Trevor M; Krumbholz, Katrin; Palmer, Alan R
The binaural masking level difference (BMLD) is a phenomenon whereby a signal that is identical at each ear (S0), masked by a noise that is identical at each ear (N0), can be made 12-15 dB more detectable by inverting the waveform of either the tone or noise at one ear (Sπ, Nπ). Single-cell responses to BMLD stimuli were measured in the primary auditory cortex of urethane-anesthetized guinea pigs. Firing rate was measured as a function of signal level of a 500 Hz pure tone masked by low-passed white noise. Responses were similar to those reported in the inferior colliculus. At low signal levels, the response was dominated by the masker. At higher signal levels, firing rate either increased or decreased. Detection thresholds for each neuron were determined using signal detection theory. Few neurons yielded measurable detection thresholds for all stimulus conditions, with a wide range in thresholds. However, across the entire population, the lowest thresholds were consistent with human psychophysical BMLDs. As in the inferior colliculus, the shape of the firing-rate versus signal-level functions depended on the neurons' selectivity for interaural time difference. Our results suggest that, in cortex, BMLD signals are detected from increases or decreases in the firing rate, consistent with predictions of cross-correlation models of binaural processing and that the psychophysical detection threshold is based on the lowest neural thresholds across the population. Copyright © 2015 Gilbert et al.
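The signal-detection-theory thresholding described in this abstract can be sketched as: compare the firing-rate distribution at each signal level against the masker-alone distribution with a discriminability index such as d', and take the lowest level that exceeds a criterion. Because responses either increased or decreased with signal level, the sketch uses |d'|; the criterion of 1.0 and the pooled-variance form are illustrative assumptions, not the authors' exact analysis:

```python
import numpy as np

def dprime(signal_rates, masker_rates):
    """Discriminability between firing rates on signal+masker trials
    and masker-alone trials (pooled-variance form)."""
    mu_s, mu_m = np.mean(signal_rates), np.mean(masker_rates)
    sd = np.sqrt((np.var(signal_rates) + np.var(masker_rates)) / 2.0)
    return (mu_s - mu_m) / sd if sd > 0 else float("inf")

def detection_threshold(rates_by_level, masker_rates, criterion=1.0):
    """Lowest signal level at which |d'| reaches the criterion.

    rates_by_level : dict mapping signal level (dB) -> list of firing rates.
    |d'| is used because responses may increase or decrease with level.
    Returns None if no level reaches the criterion (as for the many
    neurons here that yielded no measurable threshold).
    """
    for level in sorted(rates_by_level):
        if abs(dprime(rates_by_level[level], masker_rates)) >= criterion:
            return level
    return None
```

Taking the minimum of such thresholds across a recorded population mirrors the abstract's "lowest neural thresholds" account of the psychophysical BMLD.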
Basura, Gregory J; Abbas, Atheir I; O'Donohue, Heather; Lauder, Jean M; Roth, Bryan L; Walker, Paul D; Manis, Paul B
Maturation of the mammalian cerebral cortex is, in part, dependent upon multiple coordinated afferent neurotransmitter systems and receptor-mediated cellular linkages during early postnatal development. Given that serotonin (5-HT) is one such system, the present study was designed to specifically evaluate 5-HT tissue content as well as 5-HT(2A) receptor protein levels within the developing auditory cortex (AC). Using high performance liquid chromatography (HPLC), 5-HT and the metabolite, 5-hydroxyindoleacetic acid (5-HIAA), was measured in isolated AC, which demonstrated a developmental dynamic, reaching young adult levels early during the second week of postnatal development. Radioligand binding of 5-HT(2A) receptors with the 5-HT(2A/2C) receptor agonist, (125)I-DOI ((+/-)-1-(2,5-dimethoxy-4-iodophenyl)-2-aminopropane HCl; in the presence of SB206553, a selective 5-HT(2C) receptor antagonist, also demonstrated a developmental trend, whereby receptor protein levels reached young adult levels at the end of the first postnatal week (P8), significantly increased at P10 and at P17, and decreased back to levels not significantly different from P8 thereafter. Immunocytochemical labeling of 5-HT(2A) receptors and confocal microscopy revealed that 5-HT(2A) receptors are largely localized on layer II/III pyramidal cell bodies and apical dendrites within AC. When considered together, the results of the present study suggest that 5-HT, likely through 5-HT(2A) receptors, may play an important role in early postnatal AC development.
Assemblies of vertically connected neurons in the cerebral cortex form information processing units (columns) that participate in the distribution and segregation of sensory signals. Despite well-accepted models of columnar architecture, functional mechanisms of inter-laminar communication remain poorly understood. Hence, the purpose of the present investigation was to examine the effects of sensory information features on columnar response properties. Using acute recording techniques, extracellular response activity was collected from the right hemisphere of eight mature cats (Felis catus). Recordings were conducted with multichannel electrodes that permitted the simultaneous acquisition of neuronal activity within primary auditory cortex columns. Neuronal responses to simple (pure tones), complex (noise bursts and frequency modulated sweeps), and ecologically relevant (con-specific vocalizations) acoustic signals were measured. Collectively, the present investigation demonstrates that despite consistencies in neuronal tuning (characteristic frequency), irregularities in discharge activity between neurons of individual A1 columns increase as a function of spectral (signal complexity) and temporal (duration) acoustic variations.
Ni, Ruiye; Bender, David A; Shanechi, Amirali M; Gamble, Jeffrey R; Barbour, Dennis L
Robust auditory perception plays a pivotal function for processing behaviorally relevant sounds, particularly with distractions from the environment. The neuronal coding enabling this ability, however, is still not well understood. In this study, we recorded single-unit activity from the primary auditory cortex (A1) of awake marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise and vocalization babble. Noise effects on neural representation of target vocalizations were quantified by measuring the responses' similarity to those elicited by natural vocalizations as a function of signal-to-noise ratio. A clustering approach was used to describe the range of response profiles by reducing the population responses to a summary of four response classes (robust, balanced, insensitive, and brittle) under both noise conditions. This clustering approach revealed that, on average, approximately two-thirds of the neurons change their response class when encountering different noises. Therefore, the distortion induced by one particular masking background in single-unit responses is not necessarily predictable from that induced by another, suggesting the low likelihood of a unique group of noise-invariant neurons across different background conditions in A1. Regarding noise influence on neural activities, the brittle response group showed addition of spiking activity both within and between phrases of vocalizations relative to clean vocalizations, whereas the other groups generally showed spiking activity suppression within phrases, and the alteration between phrases was noise dependent. Overall, the variable single-unit responses, yet consistent response types, imply that primate A1 performs scene analysis through the collective activity of multiple neurons. The understanding of where and how auditory scene analysis is accomplished is of broad interest to neuroscientists. In this paper
Ma, Lanlan; Li, Wai; Li, Sibin; Wang, Xuejiao; Qin, Ling
A fundamental adaptive mechanism of auditory function is inhibitory gating (IG), which refers to the attenuation of neural responses to repeated sound stimuli. IG is drastically impaired in individuals with emotional and cognitive impairments (e.g., posttraumatic stress disorder). The objective of this study was to test whether chronic stress impairs the IG of the auditory cortex (AC). We used the standard two-tone stimulus paradigm and examined the parametric qualities of IG in the AC of rats by recording the electrophysiological signals of a single-unit and local field potential (LFP) simultaneously. The main results of this study were that most of the AC neurons showed a weaker response to the second tone than to the first tone, reflecting an IG of the repeated input. A fast negative wave of LFP showed consistent IG across the sampled AC sites, whereas a slow positive wave of LFP had less IG effect. IG was diminished following chronic restraint stress at both the single-unit and LFP levels, due to the increase in response to the second tone. This study provided new evidence that chronic stress disrupts the physiological function of the AC. Lay Summary The effects of chronic stress on IG were investigated by recording both single-unit spike and LFP activities in the AC of rats. In normal rats, most of the single-unit and N25 LFP activities in the AC showed an IG effect. IG was diminished following chronic restraint stress at both the single-unit and LFP levels.
Razak, Khaleel A; Fuzessery, Zoltan M
This report maps the organization of the primary auditory cortex of the pallid bat in terms of frequency tuning, selectivity for behaviorally relevant sounds, and interaural intensity difference (IID) sensitivity. The pallid bat is unusual in that it localizes terrestrial prey by passively listening to prey-generated noise transients (1-20 kHz), while reserving high-frequency (>30 kHz) echolocation for orientation. Most neurons (83%) tuned below this range responded selectively to the noise transients, while most neurons (62%) tuned >30 kHz responded selectively or exclusively to the 60- to 30-kHz downward frequency-modulated (FM) sweep used for echolocation. Within the low-frequency region, neurons were placed in two groups that occurred in two separate clusters: those selective for low- or high-frequency band-pass noise and suppressed by broadband noise, and neurons that showed no preference for band-pass noise over broadband noise. Neurons were organized in homogeneous clusters with respect to their binaural response properties. The distribution of binaural properties differed in the noise- and FM sweep-preferring regions, suggesting task-dependent differences in binaural processing. The low-frequency region was dominated by a large cluster of binaurally inhibited neurons with a smaller cluster of neurons with mixed binaural interactions. The FM sweep-selective region was dominated by neurons with mixed binaural interactions or monaural neurons. Finally, this report describes a cortical substrate for systematic representation of a spatial cue, IIDs, in the low-frequency region. This substrate may underlie a population code for sound localization based on a systematic shift in the distribution of activity across the cortex with sound source location.
Pienkowski, Martin; Eggermont, Jos J
It has become increasingly clear that even occasional exposure to loud sounds in occupational or recreational settings can cause irreversible damage to the hair cells of the cochlea and the auditory nerve fibers, even if the resulting partial loss of hearing sensitivity, usually accompanied by tinnitus, disappears within hours or days of the exposure. Such exposure may explain at least some cases of poor speech intelligibility in noise in the face of a normal or near-normal audiogram. Recent findings from our laboratory suggest that long-term changes to auditory brain function-potentially leading to problems with speech intelligibility-can be effected by persistent, passive exposure to more moderate levels of noise (in the 70 dB SPL range) in the apparent absence of damage to the auditory periphery (as reflected in normal distortion product otoacoustic emissions and auditory brainstem responses). Specifically, passive exposure of adult cats to moderate levels of band-pass-filtered noise, or to band-limited ensembles of dense, random tone pips, can lead to a profound decrease of neural activity in the auditory cortex roughly in the exposure frequency range, and to an increase of activity outside that range. This can progress to an apparent reorganization of the cortical tonotopic map, which is reminiscent of the reorganization resulting from hearing loss restricted to a part of the hearing frequency range, although again, no hearing loss was apparent after our moderate-level sound exposure. Here, we review this work focusing specifically on the potential hearing problems that may arise despite a normally functioning auditory periphery.
Overall, these results show that continuous feedback is suitable for long-term neurofeedback experiments while intermittent feedback presentation promises good results for single session experiments when using the auditory cortex as a target region. In particular, the down-regulation effect is more pronounced in the secondary auditory cortex, which might be more susceptible to voluntary modulation in comparison to a primary sensory region.
Kelly H. Chang
Here we show that, using functional magnetic resonance imaging (fMRI) blood-oxygen-level-dependent (BOLD) responses in human primary auditory cortex, it is possible to reconstruct the sequence of tones that a person has been listening to over time. First, we characterized the tonotopic organization of each subject's auditory cortex by measuring auditory responses to randomized pure tone stimuli and modeling the frequency tuning of each fMRI voxel as a Gaussian in log frequency space. Then, we tested our model by examining its ability to work in reverse. Auditory responses were re-collected in the same subjects, except this time they listened to sequences of frequencies taken from simple songs (e.g., "Somewhere Over the Rainbow"). By finding the frequency that minimized the difference between the model's prediction of BOLD responses and actual BOLD responses, we were able to reconstruct tone sequences, with mean frequency estimation errors of half an octave or less, and little evidence of systematic biases.
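The reconstruction procedure described in this abstract can be sketched as a model-inversion exercise. The following toy Python version is only illustrative: the Gaussian tuning parameters, voxel count, and noise level are invented for the demonstration and are not taken from the study.

```python
import numpy as np

def gaussian_tuning(log_f, centers, widths, gains):
    """Predicted response of each voxel to a tone at log-frequency log_f,
    assuming Gaussian frequency tuning in log space."""
    return gains * np.exp(-0.5 * ((log_f - centers) / widths) ** 2)

def decode_tone(observed, centers, widths, gains, candidate_log_fs):
    """Return the candidate log-frequency whose predicted population
    response best matches the observed responses (least squares)."""
    errs = [np.sum((observed - gaussian_tuning(lf, centers, widths, gains)) ** 2)
            for lf in candidate_log_fs]
    return candidate_log_fs[int(np.argmin(errs))]

# Toy demo: 50 voxels with random tuning, decode a 440 Hz tone.
rng = np.random.default_rng(0)
n_vox = 50
centers = rng.uniform(np.log2(100), np.log2(8000), n_vox)
widths = rng.uniform(0.5, 1.5, n_vox)
gains = rng.uniform(0.5, 2.0, n_vox)

true_lf = np.log2(440.0)
observed = gaussian_tuning(true_lf, centers, widths, gains) \
           + 0.05 * rng.normal(size=n_vox)   # noisy "BOLD" responses

grid = np.linspace(np.log2(100), np.log2(8000), 400)
est = decode_tone(observed, centers, widths, gains, grid)
print(f"estimation error: {abs(est - true_lf):.3f} octaves")
```

With many voxels and modest noise, the least-squares inversion recovers the presented frequency to well within the half-octave error the study reports.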
Mrsic-Flogel, Thomas D; King, Andrew J; Schnupp, Jan W H
Recent studies from our laboratory have indicated that the spatial response fields (SRFs) of neurons in the ferret primary auditory cortex (A1) with best frequencies ≥4 kHz may arise from a largely linear processing of binaural level and spectral localization cues. Here we extend this analysis to investigate how well the linear model can predict the SRFs of neurons with different binaural response properties and the manner in which SRFs change with increases in sound level. We also consider whether temporal features of the response (e.g., response latency) vary with sound direction and whether such variations can be explained by linear processing. In keeping with previous studies, we show that A1 SRFs, which we measured with individualized virtual acoustic space stimuli, expand and shift in direction with increasing sound level. We found that these changes are, in most cases, in good agreement with predictions from a linear threshold model. However, changes in spatial tuning with increasing sound level were generally less well predicted for neurons whose binaural frequency-time receptive field (FTRF) exhibited strong excitatory inputs from both ears than for those in which the binaural FTRF revealed either a predominantly inhibitory effect or no clear contribution from the ipsilateral ear. Finally, we found (in agreement with other authors) that many A1 neurons exhibit systematic response latency shifts as a function of sound-source direction, although these temporal details could usually not be predicted from the neuron's binaural FTRF.
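A linear threshold model of the kind this abstract invokes can be sketched in a few lines: weight each ear's direction-dependent input spectrum by the neuron's receptive field, sum, subtract a threshold, and rectify. In this toy Python version the direction-dependent ear gains are synthetic stand-ins for measured virtual-acoustic-space filters, and the receptive-field weights are invented; only the model structure follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n_freq = 32                            # frequency channels
directions = np.arange(-90, 91, 15)    # azimuth in degrees; positive is
                                       # contralateral (assumption of this sketch)

# Hypothetical direction- and frequency-dependent gains per ear.
az = directions[:, None] / 90.0
freq_weight = np.linspace(0.2, 1.0, n_freq)[None, :]
gain_contra = 1.0 + 0.8 * az * freq_weight   # louder toward contralateral side
gain_ipsi   = 1.0 - 0.8 * az * freq_weight

def predict_srf(w_contra, w_ipsi, level, threshold):
    """Linear threshold model: weight each ear's spectrum by the receptive
    field, sum across frequency, subtract a threshold, half-wave rectify."""
    drive = (level * gain_contra) @ w_contra + (level * gain_ipsi) @ w_ipsi
    return np.maximum(0.0, drive - threshold)

# An EI-like unit: excited by the contralateral ear, inhibited by the ipsilateral.
w_c = rng.uniform(0.5, 1.0, n_freq)
w_i = -rng.uniform(0.3, 0.8, n_freq)

srf_soft = predict_srf(w_c, w_i, level=1.0, threshold=5.0)
srf_loud = predict_srf(w_c, w_i, level=2.0, threshold=5.0)
# At the higher level more directions clear the threshold, so the
# predicted SRF expands, as the abstract describes for real A1 neurons.
print((srf_soft > 0).sum(), (srf_loud > 0).sum())
```

Because the drive scales linearly with level while the threshold is fixed, the rectified response region can only grow with level, reproducing the expansion of SRFs with increasing sound level.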
Harinen, Kirsi; Rinne, Teemu
We used fMRI to investigate activations within human auditory cortex (AC) to vowels during vowel discrimination, vowel (categorical n-back) memory, and visual tasks. Based on our previous studies, we hypothesized that the vowel discrimination task would be associated with increased activations in the anterior superior temporal gyrus (STG), while the vowel memory task would enhance activations in the posterior STG and inferior parietal lobule (IPL). In particular, we tested the hypothesis that activations in the IPL during vowel memory tasks are associated with categorical processing. Namely, activations due to categorical processing should be higher during tasks performed on nonphonemic (hard to categorize) than on phonemic (easy to categorize) vowels. As expected, we found distinct activation patterns during vowel discrimination and vowel memory tasks. Further, these task-dependent activations were different during tasks performed on phonemic or nonphonemic vowels. However, activations in the IPL associated with the vowel memory task were not stronger during nonphonemic than phonemic vowel blocks. Together these results demonstrate that activations in human AC to vowels depend on both the requirements of the behavioral task and the phonemic status of the vowels. Copyright © 2013 Elsevier Inc. All rights reserved.
Christison-Lagay, Kate L; Bennur, Sharath; Cohen, Yale E
A fundamental problem in hearing is detecting a "target" stimulus (e.g., a friend's voice) that is presented with a noisy background (e.g., the din of a crowded restaurant). Despite its importance to hearing, a relationship between spiking activity and behavioral performance during such a "detection-in-noise" task has yet to be fully elucidated. In this study, we recorded spiking activity in primary auditory cortex (A1) while rhesus monkeys detected a target stimulus that was presented with a noise background. Although some neurons were modulated, the response of the typical A1 neuron was not modulated by the stimulus- and task-related parameters of our task. In contrast, we found more robust representations of these parameters in population-level activity: small populations of neurons matched the monkeys' behavioral sensitivity. Overall, these findings are consistent with the hypothesis that the sensory evidence, which is needed to solve such detection-in-noise tasks, is represented in population-level A1 activity and may be available to be read out by downstream neurons that are involved in mediating this task. NEW & NOTEWORTHY This study examines the contribution of A1 to detecting a sound that is presented with a noisy background. We found that population-level A1 activity, but not single neurons, could provide the evidence needed to make this perceptual decision. Copyright © 2017 the American Physiological Society.
Liu, Xiuping; Zhou, Linran; Ding, Fangchao; Wang, Yehan; Yan, Jun
Local field potentials (LFPs) and spikes (SPKs) sampled at the thalamocortical recipient layers represent the inputs from the thalamus and outputs to other layers. Previous studies have shown that SPK-constructed receptive fields (RFSPK) of cortical neurons are much smaller than LFP-constructed RFs (RFLFP). The difference in cortical RFLFP and RFSPK is therefore a plausible indication of local networking. The presence of a broader RFLFP appears to be due to contamination, to some degree, from remote sites. Our studies of the mouse primary auditory cortex show that the best frequencies and minimum thresholds of RFSPK and RFLFP were similar. We also observed that the RFLFP area was only slightly larger than the RFSPK area, a very different finding from previous reports. The bandwidth of RFLFP was slightly broader than that of RFSPK at all levels. These data do not support the explanation that bioelectrical signals from distant sites influence cortical LFP through volume conduction. That the cortical LFP represents a local event is further supported by comparisons of RFSPK and RFLFP after cortical inhibition by muscimol and cortical disinhibition by bicuculline. We conclude that the difference between RFSPK (output of cortical neurons) and RFLFP (input of cortical neurons) results from intracortical processing, including cortical lateral inhibition and excitation. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Ahveninen, Jyrki; Huang, Samantha; Nummenmaa, Aapo; Belliveau, John W; Hung, An-Yi; Jääskeläinen, Iiro P; Rauschecker, Josef P; Rossi, Stephanie; Tiitinen, Hannu; Raij, Tommi
Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55-145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC.
Koelsch, Stefan; Skouras, Stavros; Lohmann, Gabriele
Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with "small-world" properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex-and sensory systems in general-in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions.
Andoh, Jamila; Zatorre, Robert J
Auditory cortex pertains to the processing of sound, which is at the basis of speech or music-related processing. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial Magnetic Stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions. One solution to this problem is to combine TMS with functional Magnetic resonance imaging (fMRI). The idea here is that fMRI will provide an index of changes in brain activity associated with TMS. Thus, fMRI would give an independent means of assessing which areas are affected by TMS and how they are modulated. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which the network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS. However, this
Anderson, Carly A; Lazard, Diane S; Hartley, Douglas E H
While many individuals can benefit substantially from cochlear implantation, the ability to perceive and understand auditory speech with a cochlear implant (CI) remains highly variable amongst adult recipients. Importantly, auditory performance with a CI cannot be reliably predicted based solely on routinely obtained information regarding clinical characteristics of the CI candidate. This review argues that central factors, notably cortical function and plasticity, should also be considered as important contributors to the observed individual variability in CI outcome. Superior temporal cortex (STC), including auditory association areas, plays a crucial role in the processing of auditory and visual speech information. The current review considers evidence of cortical plasticity within bilateral STC, and how these effects may explain variability in CI outcome. Furthermore, evidence of audio-visual interactions in temporal and occipital cortices is examined, and relation to CI outcome is discussed. To date, longitudinal examination of changes in cortical function and plasticity over the period of rehabilitation with a CI has been restricted by methodological challenges. The application of functional near-infrared spectroscopy (fNIRS) in studying cortical function in CI users is becoming increasingly recognised as a potential solution to these problems. Here we suggest that fNIRS offers a powerful neuroimaging tool to elucidate the relationship between audio-visual interactions, cortical plasticity during deafness and following cochlear implantation, and individual variability in auditory performance with a CI. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Liégeois-Chauvel, Catherine; Bénar, Christian; Krieg, Julien; Delbé, Charles; Chauvel, Patrick; Giusiano, Bernard; Bigand, Emmanuel
Music is a sound structure of remarkable acoustical and temporal complexity. Although it cannot denote specific meaning, it is one of the most potent and universal stimuli for inducing mood. How the auditory and limbic systems interact, and whether this interaction is lateralized when feeling emotions related to music, remains unclear. We studied the functional correlation between the auditory cortex (AC) and amygdala (AMY) through intracerebral recordings from both hemispheres in a single patient while she listened attentively to musical excerpts, which we compared to passive listening of a sequence of pure tones. While the left primary and secondary auditory cortices (PAC and SAC) showed larger increases in gamma-band responses than the right side, only the right side showed emotion-modulated gamma oscillatory activity. An intra- and inter-hemisphere correlation was observed between the auditory areas and AMY during the delivery of a sequence of pure tones. In contrast, a strikingly right-lateralized functional network between the AC and the AMY was observed to be related to the musical excerpts the patient experienced as happy, sad and peaceful. Interestingly, excerpts experienced as angry, which the patient disliked, were associated with widespread de-correlation between all the structures. These results suggest that the right auditory-limbic interactions result from the formation of oscillatory networks that bind the activities of the network nodes into coherence patterns, resulting in the emergence of a feeling. Copyright © 2014 Elsevier Ltd. All rights reserved.
Altmann, Christian F; Terada, Satoshi; Kashino, Makio; Goto, Kazuhiro; Mima, Tatsuya; Fukuyama, Hidenao; Furukawa, Shigeto
Sound localization in the horizontal plane is mainly determined by interaural time differences (ITD) and interaural level differences (ILD). Both cues result in an estimate of sound source location and in many real-life situations these two cues are roughly congruent. When stimulating listeners with headphones it is possible to counterbalance the two cues, so-called ITD/ILD trading. This phenomenon speaks for integrated ITD/ILD processing at the behavioral level. However, it is unclear at what stages of the auditory processing stream ITD and ILD cues are integrated to provide a unified percept of sound lateralization. Therefore, we set out to test with human electroencephalography for integrated versus independent ITD/ILD processing at the level of preattentive cortical processing by measuring the mismatch negativity (MMN) to changes in sound lateralization. We presented a series of diotic standards (perceived at a midline position) that were interrupted by deviants that entailed either a change in a) ITD only, b) ILD only, c) congruent ITD and ILD, or d) counterbalanced ITD/ILD (ITD/ILD trading). The sound stimuli were either i) pure tones with a frequency of 500 Hz, or ii) amplitude modulated tones with a carrier frequency of 4000 Hz and a modulation frequency of 125 Hz. We observed significant MMN for the ITD/ILD traded deviants in case of the 500 Hz pure tones, and for the 4000 Hz amplitude-modulated tone. This speaks for independent processing of ITD and ILD at the level of the MMN within auditory cortex. However, the combined ITD/ILD cues elicited smaller MMN than the sum of the MMN induced in response to ITD and ILD cues presented in isolation for 500 Hz, but not 4000 Hz, suggesting independent processing for the higher frequency only. Thus, the two markers for independent processing - additivity and cue-conflict - resulted in contradictory conclusions with a dissociation between the lower (500 Hz) and higher frequency (4000 Hz) bands. Copyright © 2014
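The logic of the two independence markers used in this study can be made concrete with a few lines of arithmetic. The MMN amplitudes below are hypothetical values chosen to mimic the 500 Hz pattern the abstract reports; they are not the study's data.

```python
# Hypothetical MMN amplitudes (in microvolts) for the four deviant types;
# illustrative values only, not taken from the study.
mmn = {"ITD": 1.2, "ILD": 1.5, "combined": 1.9, "traded": 0.8}

# Marker 1: additivity. If ITD and ILD are processed by independent
# channels, the combined deviant's MMN should approximate the sum of
# the single-cue MMNs (within some tolerance).
additive = abs(mmn["combined"] - (mmn["ITD"] + mmn["ILD"])) < 0.3

# Marker 2: cue conflict. If ITD and ILD were fused into a single
# lateralization estimate, a traded deviant (cues cancel perceptually)
# would constitute no change and should elicit no MMN.
traded_mmn_present = mmn["traded"] > 0.5

print(additive, traded_mmn_present)   # prints: False True
```

With these values the cue-conflict marker points to independent processing (a traded deviant still evokes an MMN) while the additivity marker does not (the combined MMN falls short of the sum), which is exactly the kind of contradiction the abstract describes for the 500 Hz condition.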
Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde
Sensory experiences have important roles in the functional development of the mammalian auditory cortex. Here, we show how early continuous noise rearing influences spatial sensitivity in the rat primary auditory cortex (A1) and its underlying mechanisms. By rearing infant rat pups under conditions of continuous, moderate level white noise, we found that noise rearing markedly attenuated the spatial sensitivity of A1 neurons. Compared with rats reared under normal conditions, spike counts of A1 neurons were more poorly modulated by changes in stimulus location, and their preferred locations were distributed over a larger area. We further show that early continuous noise rearing induced significant decreases in glutamic acid decarboxylase 65 and gamma-aminobutyric acid (GABA(A)) receptor alpha1 subunit expression, and an increase in GABA(A) receptor alpha3 expression, which indicates a return to the juvenile form of the GABA(A) receptor, with no effect on the expression of N-methyl-D-aspartate receptors. These observations indicate that noise rearing has powerful adverse effects on the maturation of cortical GABAergic inhibition, which might be responsible for the reduced spatial sensitivity.
Hubbard, Amy L; Wilson, Stephen M; Callan, Daniel E; Dapretto, Mirella
Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture-a fundamental type of hand gesture that marks speech prosody-might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions.
Josue G. Yague
The basal forebrain (BF) has long been implicated in attention, learning and memory, and recent studies have established a causal relationship between artificial BF activation and arousal. However, neural ensemble dynamics in the BF still remains unclear. Here, recording neural population activity in the BF and comparing it with simultaneously recorded cortical population under both anesthetized and unanesthetized conditions, we investigate the difference in the structure of spontaneous population activity between the BF and the auditory cortex (AC) in mice. The AC neuronal population show a skewed spike rate distribution, a higher proportion of short (≤80 ms) inter-spike intervals (ISIs) and a rich repertoire of rhythmic firing across frequencies. Although the distribution of spontaneous firing rate in the BF is also skewed, a proportion of short ISIs can be explained by a Poisson model at short time scales (≤20 ms) and spike count correlations are lower compared to AC cells, with optogenetically identified cholinergic cell pairs showing exceptionally higher correlations. Furthermore, a smaller fraction of BF neurons shows spike-field entrainment across frequencies: a subset of BF neurons fire rhythmically at slow (≤6 Hz) frequencies, with varied phase preferences to ongoing field potentials, in contrast to a consistent phase preference of AC populations. Firing of these slow rhythmic BF cells is correlated to a greater degree than other rhythmic BF cell pairs. Overall, the fundamental difference in the structure of population activity between the AC and BF is their temporal coordination, in particular their operational timescales. These results suggest that BF neurons slowly modulate downstream populations whereas cortical circuits transmit signals on multiple timescales. Thus, the characterization of the neural ensemble dynamics in the BF provides further insight into the neural mechanisms, by which brain states are regulated.
Stecker, G Christopher; McLaughlin, Susan A; Higgins, Nathan C
Whole-brain functional magnetic resonance imaging was used to measure blood-oxygenation-level-dependent (BOLD) responses in human auditory cortex (AC) to sounds with intensity varying independently in the left and right ears. Echoplanar images were acquired at 3 Tesla with sparse image acquisition once per 12-second block of sound stimulation. Combinations of binaural intensity and stimulus presentation rate were varied between blocks, and selected to allow measurement of response-intensity functions in three configurations: monaural 55-85 dB SPL, binaural 55-85 dB SPL with intensity equal in both ears, and binaural with average binaural level of 70 dB SPL and interaural level differences (ILD) ranging ±30 dB (i.e., favoring the left or right ear). Comparison of response functions equated for contralateral intensity revealed that BOLD-response magnitudes (1) generally increased with contralateral intensity, consistent with positive drive of the BOLD response by the contralateral ear, (2) were larger for contralateral monaural stimulation than for binaural stimulation, consistent with negative effects (e.g., inhibition) of ipsilateral input, which were strongest in the left hemisphere, and (3) also increased with ipsilateral intensity when contralateral input was weak, consistent with additional, positive, effects of ipsilateral stimulation. Hemispheric asymmetries in the spatial extent and overall magnitude of BOLD responses were generally consistent with previous studies demonstrating greater bilaterality of responses in the right hemisphere and stricter contralaterality in the left hemisphere. Finally, comparison of responses to fast (40/s) and slow (5/s) stimulus presentation rates revealed significant rate-dependent adaptation of the BOLD response that varied across ILD values. Copyright © 2015. Published by Elsevier Inc.
Rinne, Teemu; Koistinen, Sonja; Talja, Suvi; Wikman, Patrik; Salonen, Oili
In the present study, we applied high-resolution functional magnetic resonance imaging (fMRI) of the human auditory cortex (AC) and adjacent areas to compare activations during spatial discrimination and spatial n-back memory tasks that were varied parametrically in difficulty. We found that activations in the anterior superior temporal gyrus (STG) were stronger during spatial discrimination than during spatial memory, while spatial memory was associated with stronger activations in the inferior parietal lobule (IPL). We also found that wide AC areas were strongly deactivated during the spatial memory tasks. The present AC activation patterns associated with spatial discrimination and spatial memory tasks were highly similar to those obtained in our previous study comparing AC activations during pitch discrimination and pitch memory (Rinne et al., 2009). Together our previous and present results indicate that discrimination and memory tasks activate anterior and posterior AC areas differently and that this anterior-posterior division is present both when these tasks are performed on spatially invariant (pitch discrimination vs. memory) or spatially varying (spatial discrimination vs. memory) sounds. These results also further strengthen the view that activations of human AC cannot be explained only by stimulus-level parameters (e.g., spatial vs. nonspatial stimuli) but that the activations observed with fMRI are strongly dependent on the characteristics of the behavioral task. Thus, our results suggest that in order to understand the functional structure of AC a more systematic investigation of task-related factors affecting AC activations is needed. Copyright © 2011 Elsevier Inc. All rights reserved.
Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning
Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus—tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive, and low-cost treatment approach for tonal tinnitus into routine clinical practice.
Firszt, Jill B.; Reeder, Ruth M.; Holden, Timothy A.; Harold eBurton; Chole, Richard A.
Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, ...
Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg
Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; the change from closed eyes to open also represents a simple visual stimulus, however, and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) state in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measuring of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. It was demonstrated that the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of changes in neural activity from EC to EO. This same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.
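The central quantity in this study, the local-to-global binding-potential ratio, and its use as a predictor can be sketched in a few lines. All numbers below are synthetic stand-ins for the measured flumazenil binding potentials and BOLD changes; only the form of the analysis (ratio computation plus correlation) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sub = 12   # hypothetical number of subjects

# Synthetic per-subject values standing in for measured data:
# regional and cortex-wide flumazenil binding potential (BP).
bp_region = rng.uniform(1.0, 3.0, n_sub)
bp_global = rng.uniform(1.5, 2.5, n_sub)

# Local-to-global GABA(A) receptor BP ratio, one value per subject.
ratio = bp_region / bp_global

# Toy EC->EO BOLD change constructed to depend on the ratio, so that the
# correlation below illustrates the kind of relationship the study reports.
delta_bold = 0.8 * ratio + 0.1 * rng.normal(size=n_sub)

r = np.corrcoef(ratio, delta_bold)[0, 1]
print(f"local-to-global BP ratio vs EC->EO activity change: r = {r:.2f}")
```

Normalizing the regional binding potential by the global mean removes subject-wise scaling differences, so the ratio indexes how GABA(A)-receptor-rich a region is relative to the rest of that subject's cortex.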
Engineer, C. T.; Centanni, T. M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R. S.; Wilson, L. G.; Kilgard, M. P.
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits...
Satoh, Masayuki; Kato, Natsuko; Tabei, Ken-Ichi; Nakano, Chizuru; Abe, Makiko; Fujita, Risa; Kida, Hirotaka; Tomimoto, Hidekazu; Kondo, Kiyohiko
A 63-year-old, right-handed professional chorus conductor developed right putaminal hemorrhage, and became unable to experience emotion while listening to music. Two years later, neurological examination revealed slight left hemiparesis. Neuromusicological assessments revealed impaired judgment of "musical sense," and the inability to discriminate the sound of chords in pure intervals from those in equal temperament. Brain MRI and tractography identified the old hemorrhagic lesion in the right putamen and impaired fiber connectivity between the right insula and superior temporal lobe. These findings suggest that musical anhedonia might be caused by a disconnection between the insula and auditory cortex.
Lomber, Stephen G; Meredith, M Alex; Kral, Andrej
This chapter is a summary of three interdigitated investigations to identify the neural substrate underlying supranormal vision in the congenitally deaf. In the first study, we tested both congenitally deaf and hearing cats on a battery of visual psychophysical tasks to identify those visual functions that are enhanced in the congenitally deaf. From this investigation, we found that congenitally deaf cats, compared to hearing cats, have superior visual localization in the peripheral field and lower visual movement detection thresholds. In the second study, we examined the role of "deaf" auditory cortex in mediating the supranormal visual abilities by reversibly deactivating specific cortical loci with cooling. We identified that in deaf cats, reversible deactivation of a region of cortex typically identified as the posterior auditory field (PAF) in hearing cats selectively eliminated superior visual localization abilities. It was also found that deactivation of the dorsal zone (DZ) of "auditory" cortex eliminated the superior visual motion detection abilities of deaf cats. In the third study, graded cooling was applied to deaf PAF and deaf DZ to examine the laminar contributions to the superior visual abilities of the deaf. Graded cooling of deaf PAF revealed that deactivation of the superficial layers alone does not cause significant visual localization deficits. Profound deficits were identified only when cooling extended through all six layers of deaf PAF. In contrast, graded cooling of deaf DZ showed that deactivation of only the superficial layers was required to elicit increased visual motion detection thresholds. Collectively, these three studies show that the superficial layers of deaf DZ mediate the enhanced visual motion detection of the deaf, while the full thickness of deaf PAF must be deactivated in order to eliminate the superior visual localization abilities of the congenitally deaf. Taken together, this combination of experimental approaches has ...
Background: Primary auditory cortex (AI) neurons show qualitatively distinct response features to successive acoustic signals depending on the inter-stimulus interval (ISI). Such ISI-dependent AI responses are believed to underlie, at least partially, categorical perception of click trains (elemental vs. fused quality) and stop consonant-vowel syllables (e.g., the /da/–/ta/ continuum). Methods: Single unit recordings were conducted on 116 AI neurons in awake cats. Rectangular clicks were presented either alone (single click paradigm) or in a train fashion with variable ISI (2–480 ms; click-train paradigm). Response features of AI neurons were quantified as a function of ISI: one measure was related to the degree of stimulus locking (temporal modulation transfer function [tMTF]) and another measure was based on firing rate (rate modulation transfer function [rMTF]). An additional modeling study was performed to gain insight into the neurophysiological bases of the observed responses. Results: In the click-train paradigm, the majority of the AI neurons ("synchronization type"; n = 72) showed stimulus-locking responses at long ISIs. The shortest cutoff ISI for stimulus-locking responses was on average ~30 ms and was level tolerant, in accordance with the perceptual boundary of click trains and of consonant-vowel syllables. The shape of the tMTF of those neurons was either band-pass or low-pass. The single click paradigm revealed, at maximum, four response periods in the following order: 1st excitation, 1st suppression, 2nd excitation, then 2nd suppression. The 1st excitation and 1st suppression were found exclusively in the synchronization type, implying that the temporal interplay between excitation and suppression underlies stimulus-locking responses. Among these neurons, those showing the 2nd suppression had band-pass tMTFs, whereas those with low-pass tMTFs never showed the 2nd suppression, implying that tMTF shape is mediated through the 2nd suppression. ...
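The two response measures in this abstract, a locking-based tMTF and a rate-based rMTF, can be sketched computationally. The following is a minimal illustration of the underlying quantities (the spike times and the 30 ms ISI are invented for the example; a full MTF would repeat these measures across the tested ISI range):

```python
import numpy as np

def vector_strength(spike_times, isi):
    """Degree of stimulus locking to the click period: 1 means every
    spike falls at the same phase of the ISI cycle, 0 means spikes are
    uniformly spread across the cycle."""
    phases = 2 * np.pi * (np.asarray(spike_times) % isi) / isi
    return np.hypot(np.mean(np.cos(phases)), np.mean(np.sin(phases)))

def rate_mtf(spike_times, train_duration):
    """Rate-based measure: mean firing rate over the click train."""
    return len(spike_times) / train_duration

# Spikes perfectly locked to a 30 ms ISI over a 480 ms train
locked = 0.030 * np.arange(16)
print(round(vector_strength(locked, 0.030), 3))  # -> 1.0
print(round(rate_mtf(locked, 0.48), 1))
```

Plotting vector strength against ISI would then reveal the band-pass or low-pass tMTF shapes the study describes.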
Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina
It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line ...
Moore, R Channing; Lee, Tyler; Theunissen, Frédéric E
.... Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant...
Centanni, T M; Booker, A B; Sloan, A M; Chen, F; Maher, B J; Carraway, R S; Khodaparast, N; Rennaker, R; LoTurco, J J; Kilgard, M P
One in 15 school age children have dyslexia, which is characterized by phoneme-processing problems and difficulty learning to read. Dyslexia is associated with mutations in the gene KIAA0319. It is not known whether reduced expression of KIAA0319 can degrade the brain's ability to process phonemes. In the current study, we used RNA interference (RNAi) to reduce expression of Kiaa0319 (the rat homolog of the human gene KIAA0319) and evaluate the effect in a rat model of phoneme discrimination. Speech discrimination thresholds in normal rats are nearly identical to human thresholds. We recorded multiunit neural responses to isolated speech sounds in primary auditory cortex (A1) of rats that received in utero RNAi of Kiaa0319. Reduced expression of Kiaa0319 increased the trial-by-trial variability of speech responses and reduced the neural discrimination ability of speech sounds. Intracellular recordings from affected neurons revealed that reduced expression of Kiaa0319 increased neural excitability and input resistance. These results provide the first evidence that decreased expression of the dyslexia-associated gene Kiaa0319 can alter cortical responses and impair phoneme processing in auditory cortex. © The Author 2013. Published by Oxford University Press. All rights reserved.
Orekhova, Elena V.; Tsetlin, Marina M.; Butorina, Anna V.; Novikova, Svetlana I.; Gratchev, Vitaliy V.; Sokolov, Pavel A.; Elam, Mikael; Stroganova, Tatiana A.
Auditory sensory modulation difficulties are common in autism spectrum disorders (ASD) and may stem from a faulty arousal system that compromises the ability to regulate an optimal response. To study neurophysiological correlates of the sensory modulation difficulties, we recorded magnetic field responses to clicks in 14 ASD and 15 typically developing (TD) children. We further analyzed the P100m, which is the most prominent component of the auditory magnetic field response in children and ma...
Trujillo, Michael; Carrasco, Maria Magdalena; Razak, Khaleel
This study focused on the response properties underlying selectivity for the rate of frequency modulated (FM) sweeps in the auditory cortex of anesthetized C57bl/6 (C57) mice. Linear downward FM sweeps with rates between 0.08 and 20 kHz/ms were tested. We show that at least two different response properties predict FM rate selectivity: sideband inhibition and duration tuning. Sideband inhibition was determined using the two-tone inhibition paradigm in which excitatory and inhibitory tones were presented with different delays. Sideband inhibition was present in the majority (88%, n = 53) of neurons. The spectrotemporal properties of sideband inhibition predicted rate selectivity and exclusion of the sideband from the sweep reduced/eliminated rate tuning. The second property predictive of sweep rate selectivity was duration tuning for tones. Theoretically, if a neuron is selective for the duration that a sweep spends in the excitatory frequency tuning curve, then rate selectivity will ensue. Duration tuning for excitatory tones was present and predicted rate selectivity in ∼34% of neurons (n = 97). Both sideband inhibition and duration tuning predicted rate selectivity equally well, but sideband inhibition was present in a larger percentage of neurons suggesting that it is the dominant mechanism in the C57 mouse auditory cortex. Similar mechanisms shape sweep rate selectivity in the auditory system of bats and mice and movement-velocity selectivity in the visual system, suggesting similar solutions to analogous problems across sensory systems. This study provides baseline data on basic spectrotemporal processing in the C57 strain for elucidation of changes that occur in presbycusis. Copyright © 2013 Elsevier B.V. All rights reserved.
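The duration-tuning account in this abstract follows from simple geometry: a linear FM sweep spends bandwidth/rate milliseconds inside an excitatory frequency band, so tuning for tone duration translates directly into selectivity for sweep rate. A minimal sketch of that relationship (the 10 kHz bandwidth is a hypothetical value, not a measurement from the study):

```python
# Dwell time of a linear FM sweep inside an excitatory band.
# Duration tuning for this dwell time implies rate selectivity.
def dwell_time_ms(band_khz, rate_khz_per_ms):
    """Time (ms) a linear sweep spends crossing a band of given width."""
    return band_khz / rate_khz_per_ms

# Sweep rates spanning the study's tested range (0.08-20 kHz/ms),
# with a hypothetical 10 kHz excitatory band:
for rate in (0.08, 1.0, 20.0):
    print(f"{rate} kHz/ms -> {dwell_time_ms(10.0, rate):.2f} ms in band")
```

A neuron tuned to, say, 10 ms tone durations would thus respond best near 1 kHz/ms sweeps under these assumptions.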
Basura, Gregory J; Koehler, Seth D; Shore, Susan E
Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. Copyright © 2015 the American Physiological Society.
Stein, Alwina; Engell, Alva; Okamoto, Hidehiko; Wollbrink, Andreas; Lau, Pia; Wunderlich, Robert; Rudack, Claudia; Pantev, Christo
We investigated the modulation of lateral inhibition in the human auditory cortex by means of magnetoencephalography (MEG). In the first experiment, five acoustic masking stimuli (MS), consisting of noise passing through a digital notch filter which was centered at 1 kHz, were presented. The spectral energy contrasts of four MS were modified systematically by either amplifying or attenuating the edge-frequency bands around the notch (EFB) by 30 dB. Additionally, the width of EFB amplification/attenuation was varied (3/8 or 7/8 octave on each side of the notch). N1m and auditory steady state responses (ASSR), evoked by a test stimulus with a carrier frequency of 1 kHz, were evaluated. A consistent dependence of N1m responses upon the preceding MS was observed. The minimal N1m source strength was found in the narrowest amplified EFB condition, representing pronounced lateral inhibition of neurons with characteristic frequencies corresponding to the center frequency of the notch (NOTCH CF) in secondary auditory cortical areas. We tested in a second experiment whether an even narrower bandwidth of EFB amplification would result in further enhanced lateral inhibition of the NOTCH CF. Here three MS were presented, two of which were modified by amplifying 1/8 or 1/24 octave EFB width around the notch. We found that N1m responses were again significantly smaller in both amplified EFB conditions as compared to the NFN condition. To our knowledge, this is the first study demonstrating that the energy and width of the EFB around the notch modulate lateral inhibition in human secondary auditory cortical areas. Because it is assumed that chronic tinnitus is caused by a lack of lateral inhibition, these new insights could be used as a tool for further improvement of tinnitus treatments focusing on the lateral inhibition of neurons corresponding to the tinnitus frequency, such as the tailor-made notched music training.
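The masking stimuli described here, notch-filtered noise with amplified edge-frequency bands, can be built by spectral shaping. The sketch below is a rough frequency-domain construction; the sampling rate, duration, notch width, and seed are illustrative assumptions, not the study's exact stimulus specification:

```python
import numpy as np

def notched_noise(fs=44100, dur=1.0, center=1000.0, notch_oct=1/8,
                  efb_oct=3/8, efb_gain_db=30.0, seed=0):
    """White noise with a spectral notch around `center` (Hz) and the
    edge-frequency bands (EFB) on each side of the notch amplified.
    Shaping is done on the real FFT of the noise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(fs * dur))
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    lo, hi = center * 2 ** -notch_oct, center * 2 ** notch_oct
    efb_lo, efb_hi = lo * 2 ** -efb_oct, hi * 2 ** efb_oct
    X[(f >= lo) & (f <= hi)] = 0.0                       # carve the notch
    gain = 10 ** (efb_gain_db / 20)                      # +30 dB amplitude
    X[((f >= efb_lo) & (f < lo)) | ((f > hi) & (f <= efb_hi))] *= gain
    return np.fft.irfft(X, n=len(x))

ms = notched_noise()  # masker with 3/8-octave amplified EFBs
```

Narrowing `efb_oct` (e.g., to 1/8 or 1/24 octave) reproduces the manipulation tested in the second experiment.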
Fallon, James B; Irvine, Dexter R F; Shepherd, Robert K
Electrical stimulation of spiral ganglion neurons in a deafened cochlea, via a cochlear implant, provides a means of investigating the effects of the removal and subsequent restoration of afferent input on the functional organization of the primary auditory cortex (AI). We neonatally deafened 17 cats before the onset of hearing, thereby abolishing virtually all afferent input from the auditory periphery. In seven animals the auditory pathway was chronically reactivated with environmentally derived electrical stimuli presented via a multichannel intracochlear electrode array implanted at 8 weeks of age. Electrical stimulation was provided by a clinical cochlear implant that was used continuously for periods of up to 7 months. In 10 long-term deafened cats and three age-matched normal-hearing controls, an intracochlear electrode array was implanted immediately prior to cortical recording. We recorded from a total of 812 single unit and multiunit clusters in AI of all cats as adults using a combination of single tungsten and multichannel silicon electrode arrays. The absence of afferent activity in the long-term deafened animals had little effect on the basic response properties of AI neurons but resulted in complete loss of the normal cochleotopic organization of AI. This effect was almost completely reversed by chronic reactivation of the auditory pathway via the cochlear implant. We hypothesize that maintenance or reestablishment of a cochleotopically organized AI by activation of a restricted sector of the cochlea, as demonstrated in the present study, contributes to the remarkable clinical performance observed among human patients implanted at a young age.
Liu, Shu-Yun; Deng, Li-Qiang; Yang, Ye; Yin, Ze-Deng
To observe the expression of catechol-O-methyltransferase (COMT) in the inferior colliculus and auditory cortex of guinea pigs with age-related hearing loss (AHL) induced by D-galactose, so as to explore the possible mechanism by which electroacupuncture (EA) prevents AHL. Thirty 3-month-old guinea pigs were randomly divided into a control group, a model group and an EA group (n = 10 in each group), and ten 18-month-old guinea pigs were allocated as an elderly group. The AHL model was established by subcutaneous injection of D-galactose. EA was applied to bilateral "Yifeng" (SJ 17) and "Tinggong" (SI 19) for 15 min in the EA group while modeling, once daily for 6 weeks. After treatment, the latency of auditory brainstem response (ABR) wave Ⅲ was measured by a brain-stem evoked potentiometer. The expression of COMT in the inferior colliculus and auditory cortex was detected by Western blot. Compared with the control group, the latencies of ABR wave Ⅲ were significantly prolonged and the expression of COMT in the inferior colliculus and auditory cortex was significantly decreased in the model group and the elderly group (P < 0.05). After the treatment, the latency of ABR wave Ⅲ was significantly shortened and the expression of COMT in the inferior colliculus and auditory cortex was significantly increased in the EA group in comparison with the model group (P < 0.05). EA at "Yifeng" (SJ 17) and "Tinggong" (SI 19) can improve hearing in guinea pigs with age-related deafness, an effect that may be mediated by up-regulation of COMT expression in the inferior colliculus and auditory cortex.
Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30–50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W
How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
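The frequency-tagging manipulation described above, a distinct amplitude-modulation rate in each ear so that steady-state responses to each masker can be read out at its own tag frequency, can be sketched as follows. The carrier frequency and modulation depth here are illustrative assumptions, not the study's stimulus parameters:

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, dur=1.0, fs=44100, depth=1.0):
    """Sinusoidally amplitude-modulated tone. The modulation rate
    'tags' the stimulus: its steady-state cortical response appears
    at that rate (and as sidebands at carrier +/- mod_hz)."""
    t = np.arange(int(fs * dur)) / fs
    env = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return env * np.sin(2 * np.pi * carrier_hz * t)

# Dichotic tagging as described: 39 Hz in one ear, 41 Hz in the other
left = am_tone(1000.0, 39.0)   # 1 kHz carrier is an assumption
right = am_tone(1000.0, 41.0)
stereo = np.stack([left, right], axis=1)
```

In the recorded MEG/EEG, power at 39 vs. 41 Hz then indexes the response to each ear's masker separately.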
Mouterde, Solveig C; Elie, Julie E; Mathevon, Nicolas; Theunissen, Frédéric E
One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and localizations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging. SIGNIFICANCE STATEMENT: Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. Here we provide a new understanding of how the auditory system extracts behaviorally relevant information ...
Thornton-Wells, Tricia A.; Cannistraci, Christopher J.; Anderson, Adam W.; Kim, Chai-Youn; Eapen, Mariam; Gore, John C.; Blake, Randolph; Dykens, Elisabeth M.
Williams syndrome is a genetic neurodevelopmental disorder with a distinctive phenotype, including cognitive-linguistic features, nonsocial anxiety, and a strong attraction to music. We performed functional MRI studies examining brain responses to musical and other types of auditory stimuli in young adults with Williams syndrome and typically…
Galván, Veronica V; Weinberger, Norman M
The major goal of this study was to determine whether classical conditioning produces long-term neural consolidation of frequency tuning plasticity in the auditory cortex. Local field potentials (LFPs) were obtained from chronically implanted adult male Hartley guinea pigs that were divided into conditioning (n = 4) and sensitization control (n = 3) groups. Tuning functions were determined in awake subjects for average LFPs (approximately 0.4 to 36.0 kHz, -20 to 80 dB) immediately before training as well as 1 h and 1, 3, 7, and 10 days after training; sensitization subjects did not have a 10-day retention test. Conditioning consisted of a single session of 30 to 45 trials of a 6-s tone (CS, 70 dB) that was not the best frequency (BF, peak of a tuning curve), followed by a brief leg shock (US) at CS offset. Sensitization control animals received the same density of CS and US presentations unpaired. Heart rate recordings showed that the conditioning group developed conditioned bradycardia, whereas the sensitization control group did not. Local field potentials in the conditioning group, but not in the sensitization group, developed tuning plasticity. The ratio of responses to the CS frequency versus the BF were increased 1 h after training, and this increase was retained for the 10-day period of the study. Both tuning plasticity and retention were observed across stimulus levels (10-80 dB). Most noteworthy, tuning plasticity exhibited consolidation (i.e., developed greater CS-specific effects across retention periods), attaining asymptote at 3 days. The findings indicate that LFPs in the auditory cortex have three cardinal features of behavioral memory: associative tuning plasticity, long-term retention, and long-term consolidation. Potential cellular and subcellular mechanisms of LFP tuning plasticity and long-term consolidation are discussed. Copyright 2002 Elsevier Science.
We aimed to determine the value of paired-pulse inhibition (PPI) in the auditory cortex in patients with Parkinson's disease (PD) and to analyze its dependence on clinical characteristics of the patients. Central (Cz) auditory evoked potentials were recorded in 58 patients with PD and 22 age-matched healthy subjects. PPI of the N1/P2 component was significantly (P < .001) reduced for interstimulus intervals of 500, 700, and 900 ms in patients with PD compared to control subjects. The value of PPI correlated negatively with the age of the PD patients (P < .05), age of disease onset (P < .05), and body bradykinesia score (P < .01), and positively with the Mini Mental State Examination (MMSE) cognitive score (P < .01). A negative correlation between the value of PPI and the age of the healthy subjects (P < .05) was also observed. Thus, the results show that cortical inhibitory processes are deficient in PD patients and that the brain's ability to carry out postexcitatory inhibition is age-dependent.
In both humans and rodents, decline in cognitive function is a hallmark of the aging process; the basis for this decline has yet to be fully characterized. However, using aged rodent models, deficits in auditory processing have been associated with significant decreases in inhibitory signaling attributed to a loss of GABAergic interneurons. Not only are these interneurons crucial for pattern detection and other large-scale population dynamics, but they have also been linked to mechanisms mediating plasticity and learning, making them a prime candidate for study and modelling of modifications to cortical communication pathways in neurodegenerative diseases. Using the rat primary auditory cortex (A1) as a model, we probed the known markers of GABAergic interneurons with immunohistological methods, using antibodies against gamma-aminobutyric acid (GABA), parvalbumin (PV), somatostatin (SOM), calretinin (CR), vasoactive intestinal peptide (VIP), choline acetyltransferase (ChAT), neuropeptide Y (NPY) and cholecystokinin (CCK) to document the changes observed in interneuron populations across the rat's lifespan. This analysis provided strong evidence that several but not all GABAergic neuron types were affected by the aging process, with the most dramatic changes in parvalbumin (PV) and somatostatin (SOM) expression. With this evidence, we show how these trajectories of cell counts may be factored into a simple model to quantify changes in inhibitory signalling across the course of life, which may be applied as a framework for creating more advanced simulations of interneuronal involvement in normal cerebral processing, normal aging, or pathological processes.
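A simple model of the kind alluded to above could fit a smooth trajectory to interneuron counts sampled across age. The sketch below fits an exponential decline to invented PV-cell counts; the ages and counts are placeholders for illustration, not the study's data:

```python
import numpy as np

# Hypothetical PV+ cell counts per mm^2 at sampled ages (months);
# these numbers are invented, not measured values from the study.
ages = np.array([3.0, 9.0, 15.0, 21.0, 27.0])
pv_counts = np.array([120.0, 104.0, 90.0, 79.0, 68.0])

# Log-linear least-squares fit of an exponential decline
# N(age) = N0 * exp(-k * age)
slope, log_n0 = np.polyfit(ages, np.log(pv_counts), 1)
n0, rate = np.exp(log_n0), -slope
print(f"N0 ~ {n0:.1f} cells/mm^2, decline ~ {rate * 100:.1f}%/month")
```

Fitting each marker (PV, SOM, CR, ...) separately would yield per-population decline rates that a larger cortical simulation could consume.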
Mowery, Todd M; Kotak, Vibhakar C; Sanes, Dan H
Sensory deprivation can induce profound changes to central processing during developmental critical periods (CPs), and the recovery of normal function is maximal if the sensory input is restored during these epochs. Therefore, we asked whether mild and transient hearing loss (HL) during discrete CPs could induce changes to cortical cellular physiology. Electrical and inhibitory synaptic properties were obtained from auditory cortex pyramidal neurons using whole-cell recordings after bilateral earplug insertion or following earplug removal. Varying the age of HL onset revealed brief CPs of vulnerability for membrane and firing properties, as well as, inhibitory synaptic currents. These CPs closed 1 week after ear canal opening on postnatal day (P) 18. To examine whether the cellular properties could recover from HL, earplugs were removed prior to (P17) or after (P23), the closure of these CPs. The earlier age of hearing restoration led to greater recovery of cellular function, but firing rate remained disrupted. When earplugs were removed after the closure of these CPs, several changes persisted into adulthood. Therefore, long-lasting cellular deficits that emerge from transient deprivation during a CP may contribute to delayed acquisition of auditory skills in children who experience temporary HL. © The Author 2014. Published by Oxford University Press. All rights reserved.
Full Text Available Aging is often accompanied by hearing loss, which impacts how sounds are processed and represented along the ascending auditory pathways and within the auditory cortices. Here, we assess the impact of mild binaural hearing loss on older adults’ ability both to process complex sounds embedded in noise and to segregate a mistuned harmonic in an otherwise periodic stimulus. We measured auditory evoked fields (AEFs) using magnetoencephalography while participants were presented with complex tones that had either all harmonics in tune or the third harmonic mistuned by 4 or 16% of its original value. The tones (75 dB sound pressure level, SPL) were presented without noise, or with low (45 dBA SPL) or moderate (65 dBA SPL) Gaussian noise. For each participant, we modeled the AEFs with a pair of dipoles in the superior temporal plane. We then examined the effects of hearing loss and noise on the amplitude and latency of the resulting source waveforms. Results revealed that similar noise-induced increases in N1m were present in older adults with and without hearing loss. Our results also showed that the P1m amplitude was larger in the hearing-impaired than in the normal-hearing adults. In addition, the object-related negativity (ORN) elicited by the mistuned harmonic was larger in hearing-impaired listeners. The enhanced P1m and ORN amplitudes in the hearing-impaired older adults suggest that hearing loss increased neural excitability in auditory cortices, which could be related to deficits in inhibitory control.
zebra finch auditory forebrain in response to random tone sequences and bird songs, and used the STRF from one stimulus to predict the responses to the...
Orekhova, Elena V; Tsetlin, Marina M; Butorina, Anna V; Novikova, Svetlana I; Gratchev, Vitaliy V; Sokolov, Pavel A; Elam, Mikael; Stroganova, Tatiana A
Auditory sensory modulation difficulties are common in autism spectrum disorders (ASD) and may stem from a faulty arousal system that compromises the ability to regulate an optimal response. To study neurophysiological correlates of the sensory modulation difficulties, we recorded magnetic field responses to clicks in 14 ASD and 15 typically developing (TD) children. We further analyzed the P100m, which is the most prominent component of the auditory magnetic field response in children and may reflect preattentive arousal processes. The P100m was rightward lateralized in the TD, but not in the ASD children, who showed a tendency toward P100m reduction in the right hemisphere (RH). The atypical P100m lateralization in the ASD subjects was associated with greater severity of sensory abnormalities assessed by Short Sensory Profile, as well as with auditory hypersensitivity during the first two years of life. The absence of right-hemispheric predominance of the P100m and a tendency for its right-hemispheric reduction in the ASD children suggests disturbance of the RH ascending reticular brainstem pathways and/or their thalamic and cortical projections, which in turn may contribute to abnormal arousal and attention. The correlation of sensory abnormalities with atypical, more leftward, P100m lateralization suggests that reduced preattentive processing in the right hemisphere and/or its shift to the left hemisphere may contribute to abnormal sensory behavior in ASD.
Mohsen Parto Dezfouli
Full Text Available A repeated stimulus causes a specific suppression of neuronal responses, known as stimulus-specific adaptation (SSA). This effect recovers when the stimulus changes. In the auditory system, SSA is a well-known phenomenon that appears at different levels of the mammalian auditory pathway. In this study, we explored the effects of adaptation to a particular stimulus on the auditory tuning curves of anesthetized rats. We used two sequences and compared the responses to each tone combination across the two conditions. The first sequence consisted of different pure-tone combinations presented randomly. In the second, the same stimuli were presented in the context of an adapting stimulus (adapter) that occupied 80% of the sequence probability. The population results demonstrated that adaptation decreased the frequency response area and shifted the tuning curve unevenly toward higher tone thresholds. Local field potential and multi-unit activity responses indicated that neural activity at the adapted frequency was suppressed, with weaker suppression at neighboring frequencies. This reduction changed the characteristic frequency of the tuning curve.
Full Text Available A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: (1) the event-locked spike-timing precision, (2) the mean firing rate, and (3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition-rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information, indicative of a place code. Diversity in local processing emphasis and the distribution of different repetition-rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
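The mutual-information comparison between codes can be illustrated with a plug-in estimator over discretized stimulus-response pairs. The toy click-train data below are hypothetical; the study's actual binning of spike timing, rate, and ISI is not reproduced here.

```python
import numpy as np
from collections import Counter

def mutual_information(stimuli, responses):
    """Plug-in estimate of I(S; R) in bits from paired discrete samples."""
    n = len(stimuli)
    count_s = Counter(stimuli)
    count_r = Counter(responses)
    count_sr = Counter(zip(stimuli, responses))
    mi = 0.0
    for (s, r), c in count_sr.items():
        p_joint = c / n
        # p_joint / (p_s * p_r) simplifies to c * n / (count_s * count_r)
        mi += p_joint * np.log2(c * n / (count_s[s] * count_r[r]))
    return mi

# Toy example: a hypothetical ISI code that perfectly separates two
# click-train repetition rates, versus a rate code that carries nothing.
rates_hz = [5, 5, 10, 10] * 50
isi_code = [200, 200, 100, 100] * 50   # discretized ISI bins (ms)
rate_code = [12, 14, 12, 14] * 50      # spike counts, uninformative here
```

With these toy samples the ISI code yields 1 bit about repetition rate while the rate code yields 0, mirroring the direction of the reported finding.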
Full Text Available Abstract Background The speech signal contains both information about phonological features such as place of articulation and non-phonological features such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated, as they may occur in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespective of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespective of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects. Results During phonological categorization, a vowel-dependent difference of N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged but sources were shifted towards more posterior and more superior locations. Conclusions These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network seems to include spatially separable modules for identifying the phonological information and for associating it with a particular speaker, which are activated in synchrony but within different regions, suggesting that 'what' processing can be more adequately modeled by a stream of parallel stages. The relative activation of the parallel processing stages can be modulated by attentional or task demands.
Heilbron, Micha; Chait, Maria
Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organized - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision. However, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses, will be needed to test specific assumptions and implementations of predictive coding - and, as such, help determine whether this popular grand theory can fulfill its expectations. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Leo L Lui
Full Text Available Interaural level differences (ILDs) are the dominant cue for localizing the sources of high-frequency sounds that differ in azimuth. Neurons in the primary auditory cortex (A1) respond differentially to ILDs of simple stimuli such as tones and noise bands, but the extent to which this applies to complex natural sounds, such as vocalizations, is not known. In sufentanil/N2O-anaesthetized marmosets, we compared the responses of 76 A1 neurons to three vocalizations (Ock, Tsik and Twitter) and to pure tones at cells’ characteristic frequency. Each stimulus was presented with ILDs ranging from 20 dB favouring the contralateral ear to 20 dB favouring the ipsilateral ear, to cover most of the frontal azimuthal space. The response to each stimulus was tested at three average binaural levels (ABLs). Most neurons were sensitive to ILDs of vocalizations and pure tones. For all stimuli, the majority of cells had monotonic ILD sensitivity functions favouring the contralateral ear, but we also observed ILD sensitivity functions that peaked near the midline and functions favouring the ipsilateral ear. Representation of ILD in A1 was better for pure tones and the Ock vocalization than for the Tsik and Twitter calls; this was reflected in higher discrimination indices and greater modulation ranges. ILD sensitivity was heavily dependent on ABL: changes in ABL by ±20 dB SPL from the optimal level for ILD sensitivity led to significant decreases in ILD sensitivity for all stimuli, although ILD sensitivity to pure tones and Ock calls was most robust to such ABL changes. Our results demonstrate differences in ILD coding for pure tones and vocalizations, showing that ILD sensitivity in A1 to complex sounds cannot be simply extrapolated from that to pure tones. They also show that A1 neurons do not show a level-invariant representation of ILD, suggesting that such a representation of auditory space is likely to require population coding, and further processing at subsequent
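A monotonic, contralateral-favouring ILD sensitivity function of the kind described here is often summarized by its modulation range. The sigmoid below is a generic sketch with made-up parameters, not a fit to the marmoset data.

```python
import numpy as np

def ild_tuning(ild_db, r_max=40.0, slope=0.3, midpoint=0.0):
    """Sigmoidal firing-rate model of a monotonic ILD sensitivity function;
    positive ILD favours the contralateral ear (output in spikes/s)."""
    return r_max / (1.0 + np.exp(-slope * (np.asarray(ild_db) - midpoint)))

ilds = np.arange(-20, 21, 5)   # dB, from ipsilateral- to contralateral-favouring
rates = ild_tuning(ilds)

# Modulation range: fraction of the peak response spanned across tested ILDs
modulation_range = (rates.max() - rates.min()) / rates.max()
```

A cell with a larger modulation range and steeper slope discriminates ILDs better; changing `midpoint` shifts the function toward midline-peaked or ipsilateral-favouring behaviour.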
Bidelman, Gavin M; Weiss, Michael W; Moreno, Sylvain; Alain, Claude
Musicianship is associated with neuroplastic changes in brainstem and cortical structures, as well as improved acuity for behaviorally relevant sounds including speech. However, further advance in the field depends on characterizing how neuroplastic changes in brainstem and cortical speech processing relate to one another and to speech-listening behaviors. Here, we show that subcortical and cortical neural plasticity interact to yield the linguistic advantages observed with musicianship. We compared brainstem and cortical neuroelectric responses elicited by a series of vowels that differed along a categorical speech continuum in amateur musicians and non-musicians. Musicians obtained steeper identification functions and classified speech sounds more rapidly than non-musicians. Behavioral advantages coincided with more robust and temporally coherent brainstem phase-locking to salient speech cues (voice pitch and formant information) coupled with increased amplitude in cortical-evoked responses, implying an overall enhancement in the nervous system's responsiveness to speech. Musicians' subcortical and cortical neural enhancements (but not behavioral measures) were correlated with their years of formal music training. Associations between multi-level neural responses were also stronger in musically trained listeners, and were better predictors of speech perception than in non-musicians. Results suggest that musicianship modulates speech representations at multiple tiers of the auditory pathway, and strengthens the correspondence of processing between subcortical and cortical areas to allow neural activity to carry more behaviorally relevant information. We infer that musicians have a refined hierarchy of internalized representations for auditory objects at both pre-attentive and attentive levels that supplies more faithful phonemic templates to decision mechanisms governing linguistic operations.
Full Text Available The prevalence of tinnitus is known to increase with age. The age-dependent mechanisms of tinnitus may have important implications for the development of new therapeutic treatments. High doses of salicylate can be used experimentally to induce transient tinnitus and hearing loss. Although accumulating evidence indicates that salicylate induces tinnitus by directly targeting neurons in the peripheral and central auditory systems, the precise effect of salicylate on neural networks in the auditory cortex (AC) is unknown. Here, we examined salicylate-induced changes in stimulus-driven laminar responses of AC slices with salicylate superfusion in young and aged senescence-accelerated-prone (SAMP) and -resistant (SAMR) mice. Of the two strains, SAMP1 is known to be a more suitable model of presbycusis. We recorded stimulus-driven laminar local field potential (LFP) responses at multiple sites in AC slice preparations. We found that for all AC slices in the two strains, salicylate always reduced stimulus-driven LFP responses in all layers. However, for the amplitudes of the LFP responses, the two senescence-accelerated mouse (SAM) strains showed different laminar properties between the pre- and post-salicylate conditions, reflecting strain-related differences in local circuits. As for the relationships between auditory brainstem response (ABR) thresholds and the LFP amplitude ratios in the pre- vs. post-salicylate condition, we found negative correlations in layers 2/3 and 4 for both older strains, and in layer 5 (L5) in older SAMR1. In contrast, the GABAergic agonist muscimol (MSC) led to positive correlations between ABR thresholds and LFP amplitude ratios in the pre- vs. post-MSC condition in younger SAM mice of both strains. Further, in younger mice, salicylate decreased the firing rate of AC L4 pyramidal neurons. Thus, salicylate can directly reduce the neural excitability of L4 pyramidal neurons and thereby influence AC neural circuit activity.
Gourévitch, Boris; Edeline, Jean-Marc
Elderly people often show degraded hearing performance and have difficulties in understanding speech, particularly in noisy environments. Although loss in peripheral hearing sensitivity is an important factor in explaining these low performances, central alterations also have an impact, but their exact contributions remain unclear. In this study, we focus on the functional effects of aging on auditory cortex responses. Neuronal discharges and local field potentials were recorded in the auditory cortex of aged guinea pigs (> 3 years), and several parameters characterizing the processing of auditory information were quantified: the acoustic thresholds, response strength, latency and duration of the response, and breadth of tuning. Several of these parameters were also quantified from auditory brainstem responses collected from the same animals, and recordings obtained from a population of animals with trauma-induced hearing loss were also included in this study. The results showed that aging and acoustic trauma reduced the response strength at both brainstem and cortical levels, and increased the response latencies more at the cortical level than at the brainstem level. In addition to the brainstem hearing loss, aging induced a 'cortical hearing loss' as judged by additive changes in the threshold and frequency response seen in the cortex. It also increased the duration of neural responses and reduced the receptive field bandwidth, effects that were not found in traumatized animals. These effects substantiate the notion that presbycusis involves both peripheral hearing loss and biological aging in the central auditory system. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Nourski, Kirill V; Steinschneider, Mitchell; Rhone, Ariane E; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob
High gamma power has become the principal means of assessing auditory cortical activation in human intracranial studies, albeit at the expense of low frequency local field potentials (LFPs). It is unclear whether limiting analyses to high gamma impedes the ability to clarify auditory cortical organization. We compared the two measures obtained from posterolateral superior temporal gyrus (PLST) and evaluated their relative utility in sound categorization. Subjects were neurosurgical patients undergoing invasive monitoring for medically refractory epilepsy. Stimuli (consonant-vowel syllables varying in voicing and place of articulation and control tones) elicited robust evoked potentials and high gamma activity on PLST. LFPs had greater across-subject variability, yet yielded higher classification accuracy, relative to high gamma power. Classification was enhanced by including temporal detail of LFPs and combining LFP and high gamma. We conclude that future studies should consider utilizing both LFP and high gamma when investigating the functional organization of human auditory cortex. Copyright © 2015 Elsevier Inc. All rights reserved.
Klostermann, Ellen C; Loui, Psyche; Shimamura, Arthur P
In neuroimaging studies, the left ventral posterior parietal cortex (PPC) is particularly active during memory retrieval. However, most studies have used verbal or verbalizable stimuli. We investigated neural activations associated with the retrieval of short, agrammatical music stimuli (Blackwood, 2004), which have been largely associated with right hemisphere processing. At study, participants listened to music stimuli and rated them on pleasantness. At test, participants made old/new recognition judgments with high/low confidence ratings. Right, but not left, ventral PPC activity was observed during the retrieval of these music stimuli. Thus, rather than indicating a special status of left PPC in retrieval, both right and left ventral PPC participate in memory retrieval, depending on the type of information that is to be remembered.
Schmitz, Judith; Bartoli, Eleonora; Maffongelli, Laura; Fadiga, Luciano; Sebastian-Galles, Nuria; D'Ausilio, Alessandro
Listening to speech has been shown to activate motor regions, as measured by corticobulbar excitability. In this experiment, we explored if motor regions are also recruited during listening to non-native speech, for which we lack both sensory and motor experience. By administering Transcranial Magnetic Stimulation (TMS) over the left motor cortex we recorded corticobulbar excitability of the lip muscles when Italian participants listened to native-like and non-native German vowels. Results showed that lip corticobulbar excitability increased for a combination of lip use during articulation and non-nativeness of the vowels. Lip corticobulbar excitability was further related to measures obtained in perception and production tasks showing a negative relationship with nativeness ratings and a positive relationship with the uncertainty of lip movement during production of the vowels. These results suggest an active and compensatory role of the motor system during listening to perceptually/articulatory unfamiliar phonemes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Full Text Available The present study aimed to determine the effects of background noise on hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music+noise segments, and the entire music piece interfered with by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music under 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. The right-hemispheric lateralization in music processing was about 75% (instead of continuous music, only music segments were heard). If the stimuli were only noises, the lateralization was about 65%. But if the music was mixed with noise, the right-hemispheric lateralization increased. In particular, if the noise was slightly lower than the music (i.e., music level 10~15%, noise level 10%), all subjects showed right-hemispheric lateralization: this is attributed to the subjects’ effort to hear the music in the presence of noise. However, too much noise reduced the subjects’ discerning efforts.
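The stimulus-evoked feature used in this study (mean minus minimum of the hemodynamic response) is simple to compute; the sketch below pairs it with a generic lateralization index. The function names and example signals are illustrative, not the study's pipeline.

```python
import numpy as np

def evoked_feature(hb_response):
    """Feature used to distinguish stimulus-evoked hemodynamics:
    mean minus minimum of the response for a given stimulus."""
    x = np.asarray(hb_response, dtype=float)
    return x.mean() - x.min()

def lateralization_index(left_feature, right_feature):
    """Positive values indicate right-hemispheric dominance."""
    return (right_feature - left_feature) / (right_feature + left_feature)

# Illustrative responses: a larger evoked change in the right hemisphere
left = [0.10, 0.12, 0.08, 0.11]
right = [0.10, 0.20, 0.05, 0.18]
li = lateralization_index(evoked_feature(left), evoked_feature(right))
```

Counting, per subject, how often `li` is positive across conditions gives a percentage of right-lateralized responses comparable in spirit to the ~75% figure reported above.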
Hutsler, Jeffrey J
Functional lateralization of language within the cerebral cortex has long driven the search for structural asymmetries that might underlie language asymmetries. Most examinations of structural asymmetry have focused upon the gross size and shape of cortical regions in and around language areas. In the last 20 years several labs have begun to document microanatomical asymmetries in the structure of language-associated cortical regions. Such microanatomic results provide useful constraints and clues to our understanding of the biological bases of language specialization in the cortex. In a previous study we documented asymmetries in the size of a specific class of pyramidal cells in the superficial cortical layers. The present work uses a nonspecific stain for cell bodies to demonstrate the presence of an asymmetry in layer III pyramidal cell sizes within auditory, secondary auditory and language-associated regions of the temporal lobes. Specifically, the left hemisphere contains a greater number of the largest pyramidal cells, those that are thought to be the origin of long-range cortico-cortical connections. These results are discussed in the context of cortical columns and how such an asymmetry might alter cortical processing. These findings, in conjunction with other asymmetries in cortical organization that have been documented within several labs, clearly demonstrate that the columnar and connective structure of auditory and language cortex in the left hemisphere is distinct from homotopic regions in the contralateral hemisphere.
Hechavarría, Julio C; Beetz, M Jerome; Macias, Silvio; Kössl, Manfred
The mechanisms by which the mammalian brain copes with information from natural vocalization streams remain poorly understood. This article shows that in highly vocal animals, such as the bat species Carollia perspicillata, the spike activity of auditory cortex neurons does not track the temporal information flow enclosed in fast time-varying vocalization streams emitted by conspecifics. For example, leading syllables of so-called distress sequences (produced by bats subjected to duress) suppress cortical spiking to lagging syllables. Local field potentials (LFPs) recorded simultaneously with cortical spiking evoked by distress sequences carry multiplexed information, with response suppression occurring in low frequency LFPs (i.e. 2-15 Hz) and steady-state LFPs occurring at frequencies that match the rate of energy fluctuations in the incoming sound streams (i.e. >50 Hz). Such steady-state LFPs could reflect underlying synaptic activity that does not necessarily lead to cortical spiking in response to natural fast time-varying vocal sequences.
Full Text Available Abstract Background Little is known about the contribution of transcranial direct current stimulation (tDCS) to the exploration of memory functions. The aim of the present study was to examine the behavioural effects of right- or left-hemisphere frontal direct current delivery, while auditorily presented nouns were committed to memory, on short-term learning and subsequent long-term retrieval. Methods Twenty subjects, divided into two groups, performed an episodic verbal memory task during anodal, cathodal and sham current application over the right or left dorsolateral prefrontal cortex (DLPFC). Results Our results imply that only cathodal tDCS elicits behavioural effects on verbal memory performance. In particular, left-sided application of cathodal tDCS impaired short-term verbal learning compared to baseline. We did not observe tDCS effects on long-term retrieval. Conclusion Our results imply that the left DLPFC is a crucial area involved in short-term verbal learning mechanisms. However, we found further support that direct current delivery with an intensity of 1.5 mA to the DLPFC during short-term learning does not disrupt longer-lasting consolidation processes, which are mainly known to be related to mesial temporal lobe areas. In the present study, we have shown that the tDCS technique has the potential to modulate short-term verbal learning mechanisms.
Ikeda, Kohei; Higashi, Toshio; Sugawara, Kenichi; Tomori, Kounosuke; Kinoshita, Hiroshi; Kasai, Tatsuya
The effect of visual and auditory enhancements of finger movement on corticospinal excitability during motor imagery (MI) was investigated using the transcranial magnetic stimulation technique. Motor-evoked potentials were elicited from the abductor digiti minimi muscle during MI with auditory, visual, and combined auditory and visual information, and with no…
Full Text Available Age-related dysfunction of the central auditory system, also known as central presbycusis, can affect speech perception and sound localization. Understanding the pathogenesis of central presbycusis will help to develop novel approaches to prevent or treat this disease. In this study, the mechanisms of central presbycusis were investigated using a mimetic aging rat model induced by chronic injection of D-galactose (D-Gal). We showed that malondialdehyde (MDA) levels were increased and manganese superoxide dismutase (SOD2) activity was reduced in the auditory cortex in natural aging and D-Gal-induced mimetic aging rats. Furthermore, mitochondrial DNA (mtDNA) 4834 bp deletion, abnormal ultrastructure and cell apoptosis in the auditory cortex were also found in natural aging and D-Gal mimetic aging rats. Sirt3, a mitochondrial NAD+-dependent deacetylase, has been shown to play a crucial role in controlling cellular reactive oxygen species (ROS) homeostasis. However, the role of Sirt3 in the pathogenesis of age-related central auditory cortex deterioration is still unclear. Here, we showed that decreased Sirt3 expression might be associated with increased SOD2 acetylation, which negatively regulates SOD2 activity. Oxidative stress accumulation was likely the result of low SOD2 activity and a decline in ROS clearance. Our findings indicate that Sirt3 might play an essential role, via the mediation of SOD2, in central presbycusis, and that manipulation of Sirt3 expression might provide a new approach to combat aging and oxidative stress-related diseases.
Pienkowski, Martin; Eggermont, Jos J
The effects of nonlinear interactions between different sound frequencies on the responses of neurons in primary auditory cortex (AI) have only been investigated using two-tone paradigms. Here we stimulated with relatively dense, Poisson-distributed trains of tone pips (with frequency ranges spanning five octaves, 16 frequencies/octave, and mean rates of 20 or 120 pips/s), and examined within-frequency (or auto-frequency) and cross-frequency interactions in three types of AI unit responses by computing second-order "Poisson-Wiener" auto- and cross-kernels. Units were classified on the basis of their spectrotemporal receptive fields (STRFs) as "double-peaked", "single-peaked" or "peak-valley". Second-order interactions were investigated between the two bands of excitatory frequencies on double-peaked STRFs, between an excitatory band and various non-excitatory bands on single-peaked STRFs, and between an excitatory band and an inhibitory sideband on peak-valley STRFs. We found that auto-frequency interactions (i.e., those within a single excitatory band) were always characterized by a strong depression of (first-order) excitation that decayed with the interstimulus lag up to approximately 200 ms. That depression was weaker in cross-frequency than in auto-frequency interactions for approximately 25% of double-peaked STRFs, evidence of "combination sensitivity" for the two bands. Non-excitatory and inhibitory frequencies (on single-peaked and peak-valley STRFs, respectively) typically weakly depressed the excitatory response at short interstimulus lags (interactions with inhibitory frequencies rather than just non-excitatory ones). Finally, facilitation in single-peaked and peak-valley units decreased with increasing stimulus density. Our results indicate that the strong combination sensitivity and cross-frequency facilitation suggested by previous two-tone-paradigm studies are much less pronounced when using more temporally dense stimuli.
Jao Keehn, R Joanne; Sanchez, Sandra S; Stewart, Claire R; Zhao, Weiqi; Grenesko-Stevens, Emily L; Keehn, Brandon; Müller, Ralph-Axel
Autism spectrum disorders (ASD) are pervasive developmental disorders characterized by impairments in language development and social interaction, along with restricted and stereotyped behaviors. These behaviors often include atypical responses to sensory stimuli; some children with ASD are easily overwhelmed by sensory stimuli, while others may seem unaware of their environment. Vision and audition are two sensory modalities important for social interactions and language, and are differentially affected in ASD. In the present study, 16 children and adolescents with ASD and 16 typically developing (TD) participants matched for age, gender, nonverbal IQ, and handedness were tested using a mixed event-related/blocked functional magnetic resonance imaging paradigm to examine basic perceptual processes that may form the foundation for later-developing cognitive abilities. Auditory (high or low pitch) and visual conditions (dot located high or low in the display) were presented, and participants indicated whether the stimuli were "high" or "low." Results for the auditory condition showed downregulated activity of the visual cortex in the TD group, but upregulation in the ASD group. This atypical activity in visual cortex was associated with autism symptomatology. These findings suggest atypical crossmodal (auditory-visual) modulation linked to sociocommunicative deficits in ASD, in agreement with the general hypothesis of low-level sensorimotor impairments affecting core symptomatology. Autism Res 2017, 10: 130-143. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Olshansky, Michael P; Bar, Rachel J; Fogarty, Mary; DeSouza, Joseph F X
The current study used functional magnetic resonance imaging to examine the neural activity of an expert dancer with 35 years of break-dancing experience during the kinesthetic motor imagery (KMI) of dance accompanied by highly familiar and unfamiliar music. The goal of this study was to examine the effect of musical familiarity on neural activity underlying KMI within a highly experienced dancer. In order to investigate this in both primary sensory and motor planning cortical areas, we examined the effects of music familiarity on the primary auditory cortex [Heschl's gyrus (HG)] and the supplementary motor area (SMA). Our findings reveal reduced HG activity and greater SMA activity during imagined dance to familiar music compared to unfamiliar music. We propose that one's internal representations of dance moves are influenced by auditory stimuli and may be specific to a dance style and the music accompanying it.
Lau, Condon; Zhang, Jevin W; McPherson, Bradley; Pienkowski, Martin; Wu, Ed X
Exposure to loud sounds can lead to permanent hearing loss, i.e., the elevation of hearing thresholds. Exposure at more moderate sound pressure levels (SPLs) (non-traumatic and within occupational limits) may not elevate thresholds, but could in the long-term be detrimental to speech intelligibility by altering its spectrotemporal representation in the central auditory system. In support of this, electrophysiological and behavioral changes following long-term, passive (no conditioned learning) exposure at moderate SPLs have recently been observed in adult animals. To assess the potential effects of moderately loud noise on the entire auditory brain, we employed functional magnetic resonance imaging (fMRI) to study noise-exposed adult rats. We find that passive, pulsed broadband noise exposure for two months at 65 dB SPL leads to a decrease of the sound-evoked blood oxygenation level-dependent fMRI signal in the thalamic medial geniculate body (MGB) and in the auditory cortex (AC). This points to the thalamo-cortex as the site of the neural adaptation to the moderately noisy environment. The signal reduction is statistically significant during 10 Hz pulsed acoustic stimulation in both the MGB and the AC, suggesting that noise exposure has a greater effect on the processing of higher pulse rate sounds. This study has enhanced our understanding of functional changes following exposure by mapping changes across the entire auditory brain. These findings have important implications for speech processing, which depends on accurate processing of sounds with a wide spectrum of pulse rates. Copyright © 2014 Elsevier Inc. All rights reserved.
Suresh, Chandan H; Krishnan, Ananthanarayan; Gandour, Jackson T
Long-term experience enhances neural representation of temporal attributes of pitch in the brainstem and auditory cortex in favorable listening conditions. Herein we examine whether cortical pitch mechanisms shaped by language experience are more resilient to degradation in background noise, and exhibit greater binaural release from masking (BRM). Cortical pitch responses (CPR) were recorded from Mandarin- and English-speaking natives using a Mandarin word exhibiting a high rising pitch (/yi2/). Stimuli were presented diotically in Quiet, and in noise at +5 and 0 dB SNR. CPRs were also recorded at 0 dB SNR in binaural conditions: S0N0 (where signal and noise were in phase at both ears) or S0Nπ (where signal was in phase and noise 180° out of phase at the two ears). At Fz, both groups showed an increase in CPR peak latency and a decrease in amplitude with increasing noise level. A language-dependent enhancement of Na-Pb amplitude (Chinese > English) was restricted to the Quiet and +5 dB SNR conditions. At T7/T8 electrode sites, Chinese natives exhibited a rightward asymmetry for both CPR components. A language-dependent effect (Chinese > English) was restricted to T8. Regarding BRM, both CPR components showed greater response amplitude for the S0Nπ condition compared to S0N0 across groups. Rightward asymmetry for BRM in the Chinese group indicates experience-dependent recruitment of right auditory cortex. Restriction of the advantage in pitch representation to the Quiet and +5 dB SNR conditions, and the absence of group differences in the binaural release from masking, suggest that language experience affords limited advantage in the neural representation of pitch-relevant information in the auditory cortex under adverse listening conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
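For orientation, constructing stimuli like these, a signal mixed with noise at a fixed dB SNR and presented either S0N0 (noise in phase at both ears) or S0Nπ (noise inverted at one ear), can be sketched as follows. This is a generic illustration with an arbitrary stand-in tone, not the study's /yi2/ stimulus; the function name is an invention.

```python
import numpy as np

rng = np.random.default_rng(2)

def mix_at_snr(signal, noise, snr_db):
    """Scale the noise so that 10*log10(P_signal / P_noise) equals snr_db,
    and return the scaled noise for later per-ear mixing."""
    p_s = np.mean(signal ** 2)
    p_n = np.mean(noise ** 2)
    return noise * np.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))

fs = 16000
t = np.arange(fs) / fs                       # 1 s of audio
signal = np.sin(2 * np.pi * 220 * t)         # stand-in tone, not the /yi2/ word
noise = rng.normal(0, 1, len(t))

scaled = mix_at_snr(signal, noise, 0.0)      # 0 dB SNR
s0n0 = (signal + scaled, signal + scaled)    # noise in phase at both ears
s0npi = (signal + scaled, signal - scaled)   # noise 180 degrees out of phase
```

Inverting the noise polarity at one ear while leaving the signal diotic is exactly what distinguishes S0Nπ from S0N0, which is why the paradigm isolates binaural release from masking.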
Anomal, Renata; de Villers-Sidani, Etienne; Merzenich, Michael M; Panizzutti, Rogerio
Sensory experience powerfully shapes cortical sensory representations during an early developmental "critical period" of plasticity. In the rat primary auditory cortex (A1), the experience-dependent plasticity is exemplified by significant, long-lasting distortions in frequency representation after mere exposure to repetitive frequencies during the second week of life. In the visual system, the normal unfolding of critical period plasticity is strongly dependent on the elaboration of brain-derived neurotrophic factor (BDNF), which promotes the establishment of inhibition. Here, we tested the hypothesis that BDNF signaling plays a role in the experience-dependent plasticity induced by pure tone exposure during the critical period in the primary auditory cortex. Elvax resin implants filled with either a blocking antibody against BDNF or the BDNF protein were placed on the A1 of rat pups throughout the critical period window. These pups were then exposed to 7 kHz pure tone for 7 consecutive days and their frequency representations were mapped. BDNF blockade completely prevented the shaping of cortical tuning by experience and resulted in poor overall frequency tuning in A1. By contrast, BDNF infusion on the developing A1 amplified the effect of 7 kHz tone exposure compared to control. These results indicate that BDNF signaling participates in the experience-dependent plasticity induced by pure tone exposure during the critical period in A1.
Sevy, Alexander B G; Bortfeld, Heather; Huppert, Theodore J; Beauchamp, Michael S; Tonini, Ross E; Oghalai, John S
Cochlear implants (CI) are commonly used to treat deafness in young children. While many factors influence the ability of a deaf child who is hearing through a CI to develop speech and language skills, an important factor is that the CI has to stimulate the auditory cortex. Obtaining behavioral measurements from young children with CIs can often be unreliable. While a variety of noninvasive techniques can be used for detecting cortical activity in response to auditory stimuli, many have critical limitations when applied to the pediatric CI population. We tested the ability of near-infrared spectroscopy (NIRS) to detect cortical responses to speech stimuli in pediatric CI users. Neuronal activity leads to changes in blood oxy- and deoxy-hemoglobin concentrations that can be detected by measuring the transmission of near-infrared light through the tissue. To verify the efficacy of NIRS, we first compared auditory cortex responses measured with NIRS and fMRI in normal-hearing adults. We then examined four different participant cohorts with NIRS alone. Speech-evoked cortical activity was observed in 100% of normal-hearing adults (11 of 11), 82% of normal-hearing children (9 of 11), 78% of deaf children who have used a CI > 4 months (28 of 36), and 78% of deaf children who completed NIRS testing on the day of CI initial activation (7 of 9). Therefore, NIRS can measure cortical responses in pediatric CI users, and has the potential to be a powerful adjunct to current CI assessment tools. Copyright © 2010 Elsevier B.V. All rights reserved.
Kumar, Vivek; Nag, Tapas Chandra; Sharma, Uma; Mewar, Sujeet; Jagannathan, Naranamangalam R; Wadhwa, Shashi
Proper functional development of the auditory cortex (ACx) critically depends on early relevant sensory experiences. Exposure to high intensity noise (industrial/traffic) and music, a current public health concern, may disrupt the proper development of the ACx and associated behavior. The biochemical mechanisms associated with such activity dependent changes during development are poorly understood. Here we report the effects of prenatal chronic (last 10 days of incubation), 110 dB sound pressure level (SPL) music and noise exposure on the metabolic profile of the auditory cortex analogue/field L (AuL) in domestic chicks. Perchloric acid extracts of AuL of post hatch day 1 chicks from control, music and noise groups were subjected to high-resolution (700 MHz) ¹H NMR spectroscopy. Multivariate regression analysis of the concentration data of 18 metabolites revealed a significant class separation between control and loud sound exposed groups, indicating a metabolic perturbation. Comparison of absolute concentration of metabolites showed that overstimulation with loud sound, independent of spectral characteristics (music or noise), led to extensive usage of major energy metabolites, e.g., glucose, β-hydroxybutyrate and ATP. On the other hand, high glutamine levels and sustained levels of neuromodulators and alternate energy sources, e.g., creatine, ascorbate and lactate, indicated a systems restorative measure in a condition of neuronal hyperactivity. At the same time, decreased aspartate and taurine levels in the noise group suggested a differential impact of prenatal chronic loud noise over music exposure. Thus prenatal exposure to loud sound, especially noise, alters the metabolic activity in the AuL, which in turn can affect the functional development and later auditory-associated behaviour. Copyright © 2014 Elsevier Ltd. All rights reserved.
Proverbio, A M; De Benedetto, F
The aim of the present study was to investigate how auditory background interacts with learning and memory. Both facilitatory (e.g., "Mozart effect") and interfering effects of background have been reported, depending on the type of auditory stimulation and on the concurrent cognitive task. Here we recorded event related potentials (ERPs) during face encoding followed by an old/new memory test to investigate the effect of listening to classical music (Čajkovskij, dramatic), environmental sounds (rain) or silence on learning. Participants were 15 healthy non-musician university students. Almost 400 (previously unknown) faces of women and men of various ages were presented. Listening to music during study led to a better encoding of faces as indexed by an increased Anterior Negativity. The FN400 response recorded during the memory test showed a gradient in its amplitude reflecting face familiarity. FN400 was larger to new than old faces, and to faces studied during rain sound listening and silence than during music listening. The results indicate that listening to music enhances memory recollection of faces by merging with visual information. A swLORETA analysis showed the main involvement of the Superior Temporal Gyrus (STG) and medial frontal gyrus in the integration of audio-visual information. Copyright © 2017 Elsevier B.V. All rights reserved.
Curcic-Blake, Branislava; Bais, Leonie; Sibeijn-Kuiper, Anita; Pijnenborg, Hendrika Maria; Knegtering, Henderikus; Liemburg, Edith; Aleman, André
Purpose: Glutamatergic models of psychosis propose that dysfunction of N-methyl-D-aspartate (NMDA) receptors, and associated excess of glutamate, may underlie psychotic experiences in people with schizophrenia. However, little is known about the specific relation between glutamate and auditory
Lassen, N A; Friberg, L
Specific types of brain activity, such as sensory perception (auditory, somatosensory or visual) or the performance of movements, are accompanied by increases of blood flow and oxygen consumption in the cortical areas involved in performing the respective tasks. The activation patterns observed by mea...
Markus K Schaefer
In mammals, acoustic communication plays an important role during social behaviors. Despite their ethological relevance, the mechanisms by which the auditory cortex represents different communication call properties remain elusive. Recent studies have pointed out that communication-sound encoding could be based on discharge patterns of neuronal populations. Following this idea, we investigated whether the activity of local neuronal networks, such as those occurring within individual cortical columns, is sufficient for distinguishing between sounds that differed in their spectro-temporal properties. To accomplish this aim, we analyzed multi-unit activity (MUA) as well as local field potential (LFP) and current source density (CSD) waveforms elicited by simple pure tones and complex communication calls, at the single-layer and columnar level, from the primary auditory cortex of anesthetized Mongolian gerbils. Multi-dimensional scaling analysis was used to evaluate the degree of "call-specificity" in the evoked activity. The results showed that whole laminar profiles segregated 1.8-2.6 times better across calls than single-layer activity. Also, laminar LFP and CSD profiles segregated better than MUA profiles. Significant differences between CSD profiles evoked by different sounds were more pronounced at mid and late latencies in the granular and infragranular layers, and these differences were based on the absence and/or presence of current sinks and on sink timing. The stimulus-specific activity patterns observed within cortical columns suggest that the joint activity of local cortical populations (as local as single columns) could indeed be important for encoding sounds that differ in their acoustic attributes.
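The multi-dimensional scaling step, embedding trial-wise laminar activity profiles so that between-call versus within-call separation can be quantified, can be sketched with synthetic data. The feature dimensions, noise level, and segregation index below are invented for illustration, not the gerbil recordings or the study's exact metric.

```python
import numpy as np

rng = np.random.default_rng(1)
n_calls, n_trials, n_features = 3, 10, 60    # e.g., 6 layers x 10 time bins

# Toy laminar profiles: one fixed template per call plus trial-to-trial noise
templates = rng.normal(0, 1, (n_calls, n_features))
X = np.concatenate([t + 0.4 * rng.normal(0, 1, (n_trials, n_features))
                    for t in templates])
labels = np.repeat(np.arange(n_calls), n_trials)

# Classical MDS: double-center the squared-distance matrix and keep the
# top two eigenvectors as a 2-D embedding
D2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
J = np.eye(len(X)) - 1.0 / len(X)
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)               # eigenvalues in ascending order
coords = vecs[:, -2:] * np.sqrt(vals[-2:])

# Segregation index: mean between-call / mean within-call distance
D = np.sqrt(((coords[:, None] - coords[None, :]) ** 2).sum(-1))
same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(len(X), dtype=bool)
segregation = D[~same].mean() / D[same & off_diag].mean()
```

A segregation index well above 1 indicates that the profiles cluster by call identity in the embedding, which is the sense in which laminar profiles can "segregate better" than single-layer activity.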
Krishnan, A; Gandour, J T; Suresh, C H
The aim of this study is to determine how pitch acceleration rates within and outside the normal pitch range may influence latency and amplitude of cortical pitch-specific responses (CPR) as a function of language experience (Chinese, English). Responses were elicited from a set of four pitch stimuli chosen to represent a range of acceleration rates (two each inside and outside the normal voice range) imposed on the high rising Mandarin Tone 2. Pitch-relevant neural activity, as reflected in the latency and amplitude of scalp-recorded CPR components, varied depending on language-experience and pitch acceleration of dynamic, time-varying pitch contours. Peak latencies of CPR components were shorter in the Chinese than the English group across stimuli. Chinese participants showed greater amplitude than English for CPR components at both frontocentral and temporal electrode sites in response to pitch contours with acceleration rates inside the normal voice pitch range as compared to pitch contours with acceleration rates that exceed the normal range. As indexed by CPR amplitude at the temporal sites, a rightward asymmetry was observed for the Chinese group only. Only over the right temporal site was amplitude greater in the Chinese group relative to the English. These findings may suggest that the neural mechanism(s) underlying processing of pitch in the right auditory cortex reflect experience-dependent modulation of sensitivity to acceleration in just those rising pitch contours that fall within the bounds of one's native language. More broadly, enhancement of native pitch stimuli and stronger rightward asymmetry of CPR components in the Chinese group is consistent with the notion that long-term experience shapes adaptive, distributed hierarchical pitch processing in the auditory cortex, and reflects an interaction with higher order, extrasensory processes beyond the sensory memory trace. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Osanai, Hisayuki; Minusa, Shunsuke; Tateno, Takashi
Magnetic stimulation is widely used in neuroscience research and clinical treatment. Despite recent progress in understanding the neural modulation mechanism of conventional magnetic stimulation methods, the physiological mechanism at the cortical microcircuit level is not well understood due to the poor stimulation focality and large electric artifact in the recording. To overcome these issues, we used a sub-millimeter-sized coil (micro-coil) to stimulate the mouse auditory cortex in vivo. To determine the mechanism, we conducted the first direct electrophysiological recording of micro-coil-driven neural responses at multiple sites on the horizontal surface and laminar areas of the auditory cortex. The laminar responses of local field potentials (LFPs) to the magnetic stimulation reached layer 6, and the spatiotemporal profiles were very similar to those of the acoustic stimulation, suggesting the activation of the same cortical microcircuit. The horizontal LFP responses to the magnetic stimulation were evoked within a millimeter-wide area around the stimulation coil. The activated cortical area was dependent on the coil orientation, providing useful information on the effective position of the coil relative to the brain surface for modulating cortical circuitry activity. In addition, numerical calculation of the induced electric field in the brain revealed that the inhomogeneity of the horizontal electric field to the surface is critical for micro-coil-induced cortical activation. The results suggest that our micro-coil technique has the potential to be used as a chronic, less-invasive and highly focal neuro-stimulator, and is useful for investigating microcircuit responses to magnetic stimulation for clinical treatment. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Kühnis, Jürg; Elmer, Stefan; Meyer, Martin; Jäncke, Lutz
Here, we applied a multi-feature mismatch negativity (MMN) paradigm in order to systematically investigate the neuronal representation of vowels and temporally manipulated CV syllables in a homogeneous sample of string players and non-musicians. Based on previous work indicating an increased sensitivity of the musicians' auditory system, we expected to find that musically trained subjects will elicit increased MMN amplitudes in response to temporal variations in CV syllables, namely voice-onset time (VOT) and duration. In addition, since different vowels are principally distinguished by means of frequency information and musicians are superior in extracting tonal (and thus frequency) information from an acoustic stream, we also expected to provide evidence for an increased auditory representation of vowels in the experts. In line with our hypothesis, we could show that musicians are not only advantaged in the pre-attentive encoding of temporal speech cues, but most notably also in processing vowels. Additional "just noticeable difference" measurements suggested that the musicians' perceptual advantage in encoding speech sounds was more likely driven by the generic constitutional properties of a highly trained auditory system, rather than by its specialisation for speech representations per se. These results shed light on the origin of the often reported advantage of musicians in processing a variety of speech sounds. Copyright © 2013 Elsevier Ltd. All rights reserved.
Early continuous white noise exposure alters L-alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid receptor subunit glutamate receptor 2 and gamma-aminobutyric acid type A receptor subunit beta3 protein expression in rat auditory cortex.
Xu, Jinghong; Yu, Liping; Zhang, Jiping; Cai, Rui; Sun, Xinde
Auditory experience during the postnatal critical period is essential for the normal maturation of auditory function. Previous studies have shown that rearing infant rat pups under conditions of continuous moderate-level noise delayed the emergence of adult-like topographic representational order and the refinement of response selectivity in the primary auditory cortex (A1) beyond normal developmental benchmarks and indefinitely blocked the closure of a brief, critical-period window. To gain insight into the molecular mechanisms of these physiological changes after noise rearing, we studied expression of the AMPA receptor subunit GluR2 and GABA(A) receptor subunit beta3 in the auditory cortex after noise rearing. Our results show that continuous moderate-level noise rearing during the early stages of development decreases the expression levels of GluR2 and GABA(A)beta3. Furthermore, noise rearing also induced a significant decrease in the level of GABA(A) receptors relative to AMPA receptors. However, in adult rats, noise rearing did not have significant effects on GluR2 and GABA(A)beta3 expression or the ratio between the two units. These changes could have a role in the cellular mechanisms involved in the delayed maturation of auditory receptive field structure and topographic organization of A1 after noise rearing. Copyright 2009 Wiley-Liss, Inc.
Differential effects of prenatal chronic high-decibel noise and music exposure on the excitatory and inhibitory synaptic components of the auditory cortex analog in developing chicks (Gallus gallus domesticus).
Kumar, V; Nag, T C; Sharma, U; Jagannathan, N R; Wadhwa, S
Proper development of the auditory cortex depends on early acoustic experience that modulates the balance between excitatory and inhibitory (E/I) circuits. In the present social and occupational environment, exposure to chronic loud sound, in the form of occupational or recreational noise, is becoming inevitable. This could disrupt functional auditory cortex development, leading to altered processing of complex sound and hearing impairment. Here we report the effects of prenatal chronic loud sound (110-dB sound pressure level (SPL)) exposure (rhythmic [music] and arrhythmic [noise] forms) on the molecular components involved in regulation of the E/I balance in the developing auditory cortex analog/Field L (AuL) in domestic chicks. Noise exposure at 110-dB SPL significantly enhanced the E/I ratio (increased expression of the AMPA receptor GluR2 subunit and glutamate with decreased expression of the GABA(A) receptor gamma 2 subunit and GABA), whereas loud music exposure maintained the E/I ratio. Expression of markers of synaptogenesis, synaptic stability and plasticity, i.e., synaptophysin, PSD-95 and gephyrin, was reduced with noise but increased with music exposure. Thus our results show differential effects of prenatal chronic loud noise and music exposure on the E/I balance and on synaptic function and stability in the developing auditory cortex. Loud music exposure showed an overall enrichment effect, whereas the significant alterations in E/I balance induced by loud noise could later impact auditory function and associated cognitive behavior. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Over the last decade, the consequences of acoustic trauma on the functional properties of auditory cortex neurons have received growing attention. Changes in spontaneous and evoked activity, shifts of characteristic frequency (CF), and map reorganizations have been extensively described in anesthetized animals (e.g., Norena and Eggermont, 2003, 2005). Here, we examined how the functional properties of cortical cells are modified after partial hearing loss in awake guinea pigs. Single unit activity was chronically recorded in awake, restrained guinea pigs from three days before up to 15 days after an acoustic trauma induced by a 5-kHz, 110-dB tone delivered for 1 h. Auditory brainstem response (ABR) audiograms indicated that these parameters produced a mean ABR threshold shift of 20 dB SPL at, and one octave above, the trauma frequency. When tested with pure tones, cortical cells showed on average a 25-dB increase in threshold at CF the day following the trauma. Over days, this increase progressively stabilized at only 10 dB above the control value, indicating a progressive recovery of cortical thresholds, probably reflecting a progressive shift from temporary threshold shift (TTS) to permanent threshold shift (PTS). There was an increase in response latency and in response variability the day following the trauma, but these parameters returned to control values within three days. When tested with conspecific vocalizations, cortical neurons also displayed an increase in response latency and in response duration the day after the acoustic trauma, but there was no effect on the average firing rate elicited by the vocalization. These findings suggest that, in cases of moderate hearing loss, the temporal precision of neuronal responses to natural stimuli is impaired even though the firing rate shows little or no change.
Aizawa, Naotaka; Eggermont, Jos J
Here we show that mild hearing loss induced by noise exposure in early age causes a decrease in neural temporal resolution when measured in adulthood. We investigated the effect of this chronic hearing loss on the representation of a voice onset time (VOT) and a gap-duration continuum in primary auditory cortex (AI) in cats, which were exposed at the age of 6 weeks to a 120-dB SPL, 5-kHz 1/3 octave noise band for 2 h. The resulting hearing loss measured using auditory brainstem responses and cortical multiunit thresholds at 4-6 months of age was 20-40 dB between 1 and 32 kHz. Multiple single-unit activity was recorded in seven noise-exposed cats and nine control cats related to the presentation of a /ba/-/pa/ continuum in which VOT was varied in 5-ms steps from 0 to 70 ms. We also obtained data for noise bursts with gaps, of duration equal to the VOT, embedded in noise 5 ms after the onset. Both stimuli were presented at 65 dB SPL. Minimum VOT and early-gap duration were defined as the lowest value in which an on-response, significantly above the spontaneous activity, to both the leading and trailing noise bursts or vowel was obtained. The mild chronic noise-induced hearing loss increased the minimum detectable VOT and gap duration by 10 ms. We also analyzed the maximum firing rate (FRmax) and the latency of the responses as a function of VOT and gap duration and found a significant reduction in the FRmax to the trailing noise burst for gap durations above 50 ms. This suggests that mild hearing loss acquired in early age may affect cortical temporal processing in adulthood.
Schadwinkel, Stefan; Gutschalk, Alexander
A number of physiological studies suggest that feature-selective adaptation is relevant to the pre-processing for auditory streaming, the perceptual separation of overlapping sound sources. Most of these studies are focused on spectral differences between streams, which are considered most important for streaming. However, spatial cues also support streaming, alone or in combination with spectral cues, but physiological studies of spatial cues for streaming remain scarce. Here, we investigate whether the tuning of selective adaptation for interaural time differences (ITD) coincides with the range where streaming perception is observed. FMRI activation that has been shown to adapt depending on the repetition rate was studied with a streaming paradigm where two tones were differently lateralized by ITD. Listeners were presented with five different ΔITD conditions (62.5, 125, 187.5, 343.75, or 687.5 μs) out of an active baseline with no ΔITD during fMRI. The results showed reduced adaptation for conditions with ΔITD ≥ 125 μs, reflected by enhanced sustained BOLD activity. The percentage of streaming perception for these stimuli increased from approximately 20% for ΔITD = 62.5 μs to > 60% for ΔITD = 125 μs. No further sustained BOLD enhancement was observed when the ΔITD was increased beyond ΔITD = 125 μs, whereas the streaming probability continued to increase up to 90% for ΔITD = 687.5 μs. Conversely, the transient BOLD response, at the transition from baseline to ΔITD blocks, increased most prominently as ΔITD was increased from 187.5 to 343.75 μs. These results demonstrate a clear dissociation of transient and sustained components of the BOLD activity in auditory cortex. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
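For reference, imposing an interaural time difference of the sizes used above amounts to delaying one ear's waveform by a fraction of a millisecond. A minimal sketch follows; the sampling rate, tone frequency, and function name are arbitrary choices for illustration, not the study's stimuli.

```python
import numpy as np

fs = 48000                            # sampling rate (arbitrary choice)
t = np.arange(int(fs * 0.1)) / fs
tone = np.sin(2 * np.pi * 500 * t)    # 100 ms, 500 Hz tone

def apply_itd(signal, itd_s, fs):
    """Return a (left, right) pair with the right-ear copy delayed by itd_s."""
    shift = int(round(itd_s * fs))
    right = np.concatenate([np.zeros(shift), signal[:len(signal) - shift]])
    return signal, right

left, right = apply_itd(tone, 125e-6, fs)   # 125 us ITD = 6 samples at 48 kHz
```

At 48 kHz, the five ΔITD conditions of the study (62.5 to 687.5 μs) correspond to integer shifts of 3 to 33 samples, which is why whole-sample delays suffice for such stimuli.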
Krishnan, Ananthanarayan; Suresh, Chandan H; Gandour, Jackson T
Language experience shapes encoding of pitch-relevant information at both brainstem and cortical levels of processing. Pitch height is a salient dimension that orders pitch from low to high. Herein we investigate the effects of language experience (Chinese, English) in the brainstem and cortex on (i) neural responses to variations in pitch height, (ii) presence of asymmetry in cortical pitch representation, and (iii) patterns of relative changes in magnitude of pitch height between these two levels of brain structure. Stimuli were three nonspeech homologs of Mandarin Tone 2 varying in pitch height only. The frequency-following response (FFR) and the cortical pitch-specific response (CPR) were recorded concurrently. At the Fz-linked T7/T8 site, peak latency of Na, Pb, and Nb decreased with increasing pitch height for both groups. Peak-to-peak amplitude of Na-Pb and Pb-Nb increased with increasing pitch height across groups. A language-dependent effect was restricted to Na-Pb; the Chinese had larger amplitude than the English group. At temporal sites (T7/T8), the Chinese group had larger amplitude, as compared to English, across stimuli, but also limited to the Na-Pb component and right temporal site. In the brainstem, F0 magnitude decreased with increasing pitch height; Chinese had larger magnitude across stimuli. A comparison of CPR and FFR responses revealed distinct patterns of relative changes in magnitude common to both groups. CPR amplitude increased and FFR amplitude decreased with increasing pitch height. Experience-dependent effects on CPR components vary as a function of neural sensitivity to pitch height within a particular temporal window (Na-Pb). Differences between the auditory brainstem and cortex imply distinct neural mechanisms for pitch extraction at both levels of brain structure. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Beissner, Florian; Henke, Christian
Functional magnetic resonance imaging (fMRI) has been used for more than a decade to investigate possible supraspinal mechanisms of acupuncture stimulation. More than 60 studies and several review articles have been published on the topic. However, to date some acupuncture-fMRI studies have not adopted all of the methodological standards applied to most other fMRI studies. In this critical review, we comment on some of these problems, including the choice of baseline, the interpretation of deactivations, attention control, and the implications of different group statistics. We illustrate the possible impact of these problems by focussing on some early findings, namely activations of visual and auditory cortical areas when acupoints were stimulated that are believed, in traditional Chinese medicine, to have a therapeutic effect on vision or hearing. While we are far from questioning the validity of using fMRI for the study of acupuncture effects, we think that the activations reported by some of these studies were probably not a direct result of acupuncture stimulation but rather attributable to one or more of the methodological problems covered here. Finally, we try to offer solutions for these problems where possible.
In this work we propose a biologically realistic local cortical circuit model (LCCM), based on neural masses, that incorporates important aspects of the functional organization of the brain that have not been covered by previous models: (1) activity-dependent plasticity of excitatory synaptic couplings via depleting and recycling of neurotransmitters and (2) realistic inter-laminar dynamics via laminar-specific distribution of, and connections between, neural populations. The potential of the LCCM was demonstrated by accounting for the process of auditory habituation. The model parameters were specified using Bayesian inference. It was found that: (1) besides the major serial excitatory information pathway (layer 4 to layer 2/3 to layer 5/6), there exists a parallel "short-cut" pathway (layer 4 to layer 5/6); (2) the excitatory signal flow from the pyramidal cells to the inhibitory interneurons seems to be mainly intra-laminar while, in contrast, the inhibitory signal flow from inhibitory interneurons to the pyramidal cells seems to be both intra- and inter-laminar; and (3) the habituation rates of the connections are asymmetrical: forward connections (from layer 4 to layer 2/3) are more strongly habituated than backward connections (from layer 5/6 to layer 4). Our evaluation demonstrates that the novel features of the LCCM are of crucial importance for mechanistic explanations of brain function. The incorporation of these features into a mass model makes them applicable to modeling based on macroscopic data (like EEG or MEG), which are usually available in human experiments. Our LCCM is therefore a valuable building block for future realistic models of human cognitive function.
Fallon, James B; Irving, Sam; Pannu, Satinderpall S; Tooker, Angela C; Wise, Andrew K; Shepherd, Robert K; Irvine, Dexter R F
Current source density analysis of recordings from penetrating electrode arrays has traditionally been used to examine the layer-specific cortical activation and plastic changes associated with changed afferent input. We report on a related analysis, the second spatial derivative (SSD) of surface local field potentials (LFPs) recorded using custom-designed thin-film polyimide substrate arrays. SSD analysis of tone-evoked LFPs generated from the auditory cortex under the recording array demonstrated a stereotypical single local minimum, often flanked by maxima on both the caudal and rostral sides. In contrast, tone-pips at frequencies not represented in the region under the array, but known (on the basis of normal tonotopic organization) to be represented caudal to the recording array, had a more complex pattern of many sources and sinks. Compared to traditional analysis of LFPs, SSD analysis produced a tonotopic map that was more similar to that obtained with multi-unit recordings in a normal-hearing animal. Additionally, the statistically significant decrease in the number of acoustically responsive cortical locations in partially deafened cats following 6 months of cochlear implant use compared to unstimulated cases observed with multi-unit data (p=0.04) was also observed with SSD analysis (p=0.02), but was not apparent using traditional analysis of LFPs (p=0.6). SSD analysis of surface LFPs from the thin-film array provides a rapid and robust method for examining the spatial distribution of cortical activity with improved spatial resolution compared to more traditional LFP recordings. Copyright © 2016 Elsevier B.V. All rights reserved.
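As an illustration of the analysis described above (a minimal sketch, not the authors' code; the array geometry, electrode spacing, and toy activation profile are hypothetical), the SSD along a linear surface array can be approximated with a central second difference across neighbouring electrodes:

```python
import numpy as np

def second_spatial_derivative(lfp, spacing_mm=0.5):
    """Approximate the SSD of LFPs recorded along a linear array.

    lfp: array of shape (n_electrodes, n_samples), surface LFP traces.
    spacing_mm: inter-electrode distance (hypothetical value).
    Returns shape (n_electrodes - 2, n_samples); local minima across
    the electrode dimension mark putative current sinks.
    """
    lfp = np.asarray(lfp, dtype=float)
    # Central second difference: (V[i-1] - 2*V[i] + V[i+1]) / d^2
    return (lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]) / spacing_mm**2

# Toy example: a Gaussian activation profile across 16 electrodes
x = np.linspace(-1, 1, 16)
lfp = np.exp(-x**2 / 0.1)[:, None] * np.ones((16, 100))
ssd = second_spatial_derivative(lfp)
print(ssd.shape)  # (14, 100)
```

With this sign convention, the single local minimum reported for tones represented under the array corresponds to a trough of the SSD at the electrodes over the activated region.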
Atypical brain lateralisation in the auditory cortex and language performance in 3- to 7-year-old children with high-functioning autism spectrum disorder: a child-customised magnetoencephalography (MEG) study.
Yoshimura, Yuko; Kikuchi, Mitsuru; Shitamichi, Kiyomi; Ueno, Sanae; Munesue, Toshio; Ono, Yasuki; Tsubokawa, Tsunehisa; Haruta, Yasuhiro; Oi, Manabu; Niida, Yo; Remijn, Gerard B; Takahashi, Tsutomu; Suzuki, Michio; Higashida, Haruhiro; Minabe, Yoshio
significant predictor of shorter P50m latency in the right hemisphere. Using a child-customised MEG device, we studied the P50m component that was evoked through binaural human voice stimuli in young ASD and TD children to examine differences in auditory cortex function that are associated with language development. Our results suggest that there is atypical brain function in the auditory cortex in young children with ASD, regardless of language development.
Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D
The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
Pickles, James O
This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external, middle ears, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.
volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; applications of auditory display.
Coleman, Paul D.; And Others
Numbers of neurons and glia were counted in the cerebral cortex of one case of autism and two age- and sex-matched controls. Cell counts were made in primary auditory cortex, Broca's speech area, and auditory association cortex. No consistent differences in cell density were found between brains of autistic and control patients. (Author/CL)
Murakami, Takenobu; Restle, Julia; Ziemann, Ulf
A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…
Brown, Mark S; Singel, Debra; Hepburn, Susan; Rojas, Donald C
Increased glutamate levels have been reported in the hippocampal and frontal regions of persons with autism using proton magnetic resonance spectroscopy ((1)H-MRS). Although autism spectrum disorders (ASDs) are highly heritable, MRS studies have not included relatives of persons with ASD. We therefore conducted a study to determine if glutamate levels are elevated in people with autism and parents of children with autism. Single-voxel, point-resolved spectroscopy data were acquired at 3T for left and right hemisphere auditory cortical voxels in 13 adults with autism, 15 parents of children with autism, and 15 adult control subjects. The primary measure was glutamate + glutamine (Glx). Additional measures included n-acetyl-aspartate (NAA), choline (Cho), myoinositol (mI), and creatine (Cr). The autism group had significantly higher Glx, NAA, and Cr concentrations than the control subjects. Parents did not differ from control subjects on any measures. No significant differences in Cho or mI levels were seen among groups. No reliable correlations between autism symptom measures, and MRS variables were seen after Bonferroni correction for multiple comparisons. The elevation in Glx in autism is consistent with prior MRS data in the hippocampus and frontal lobe and may suggest increased cortical excitability. Increased NAA and Cr may indicate brain metabolism disturbances in autism. In the current study, we found no reliable evidence of a familial effect for any spectroscopy measure. This may indicate that these metabolites have no heritable component in autism, the presence of a compensatory factor in parents, or sample-specific limitations such as the participation of singleton families. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Telles, Shirley; Deepeshwar, Singh; Naveen, Kalkuni Visweswaraiah; Pailoor, Subramanya
The auditory sensory pathway has been studied in meditators, using midlatency and short-latency auditory evoked potentials. The present study evaluated long-latency auditory evoked potentials (LLAEPs) during meditation. Sixty male participants, aged between 18 and 31 years (group mean±SD, 20.5±3.8 years), were assessed in 4 mental states based on descriptions in the traditional texts. They were (a) random thinking, (b) nonmeditative focusing, (c) meditative focusing, and (d) meditation. The order of the sessions was randomly assigned. The LLAEP components studied were P1 (40-60 ms), N1 (75-115 ms), P2 (120-180 ms), and N2 (180-280 ms). For each component, the peak amplitude and peak latency were measured from the prestimulus baseline. There was a significant decrease in the peak latency of the P2 component during and after meditation. These findings suggest that meditation facilitates the processing of information in the auditory association cortex, whereas fewer neurons were recruited during random thinking and non-meditative focused thinking at the level of the secondary auditory cortex, auditory association cortex, and anterior cingulate cortex. © EEG and Clinical Neuroscience Society (ECNS) 2014.
The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels of the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
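The degree of stimulus-specific adaptation in such oddball paradigms is commonly summarized with an adaptation index that contrasts a neuron's responses to each frequency when it is rare versus common. A minimal sketch (the spike counts below are hypothetical, not data from any study discussed here):

```python
def ssa_index(d_f1, d_f2, s_f1, s_f2):
    """Common SSA index (CSI): responses to frequencies f1 and f2 when
    each serves as the deviant (d_*) versus the standard (s_*).
    Ranges from -1 to 1; positive values indicate stronger responses
    to rare (deviant) stimuli, i.e. stimulus-specific adaptation."""
    return (d_f1 + d_f2 - s_f1 - s_f2) / (d_f1 + d_f2 + s_f1 + s_f2)

# Hypothetical spike counts for one neuron: stronger when a tone is rare
print(ssa_index(d_f1=30, d_f2=28, s_f1=10, s_f2=12))  # 0.45
```

An index near zero would indicate the simple, non-specific firing-rate adaptation seen in auditory nerve fibers, where responses depend only on stimulus history, not on which frequency is rare.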
Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta
Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including the Broca's language region. Furthermore, how real the hallucination that subjects experienced was depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.
Hackett, Troy A; Rinaldi Barkat, Tania; O'Brien, Barbara M J
The mouse sensory neocortex is reported to lack several hallmark features of topographic organization such as ocular dominance and orientation columns in primary visual cortex or fine-scale tonotopy in primary auditory cortex (AI). Here, we re-examined the question of auditory functional topography by aligning ultra-dense receptive field maps from the auditory cortex and thalamus of the mouse in vivo with the neural circuitry contained in the auditory thalamocortical slice in vitro. We observed precisely organized tonotopic maps of best frequency (BF) in the middle layers of AI and the anterior auditory field as well as in the ventral and medial divisions of the medial geniculate body (MGBv and MGBm, respectively). Tracer injections into distinct zones of the BF map in AI retrogradely labeled topographically organized MGBv projections and weaker, mixed projections from MGBm. Stimulating MGBv along …
Episodic memory, or the ability to store context-rich information about everyday events, depends on the hippocampal formation (entorhinal cortex, subiculum, presubiculum, parasubiculum, hippocampus proper, and dentate gyrus). A substantial body of behavioral-lesion and anatomical studies has contributed to our understanding of how visual stimuli are retained in episodic memory. However, whether auditory memory is organized similarly is still unclear. One hypothesis is that, like the 'visual ventral stream' for which the connections of the inferior temporal gyrus with the perirhinal cortex are necessary for visual recognition in monkeys, direct connections between the auditory association areas of the superior temporal gyrus and the hippocampal formation and the parahippocampal region (temporal pole, perirhinal, and posterior parahippocampal cortices) might also underlie recognition memory for sounds. Alternatively, the anatomical organization of memory could be different in audition. This alternative 'indirect stream' hypothesis posits that, unlike the visual association cortex, the majority of auditory association cortex makes one or more synapses in intermediate, polymodal areas, where auditory signals may be integrated with information from other sensory modalities, before reaching the medial temporal memory system. This review considers anatomical studies that can support either one or both hypotheses, focusing on anatomical studies on the primate brain that have reported not only direct auditory association connections with medial temporal areas, but, importantly, also possible indirect pathways for auditory information to reach the medial temporal lobe memory system.
Puvvada, Krishna C; Simon, Jonathan Z
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT: Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory
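The stimulus reconstruction mentioned above is typically implemented as a linear "backward model" that maps the multichannel neural recording onto the speech envelope; reconstruction fidelity (e.g. the correlation between reconstructed and actual envelopes) then quantifies how well a given stage represents each stream. Below is a minimal ridge-regression sketch on synthetic data (the simulated recording, channel count, and regularization value are hypothetical, not the authors' pipeline):

```python
import numpy as np

def fit_backward_model(neural, envelope, lam=1.0):
    """Fit a linear stimulus-reconstruction (backward) model.

    neural: (n_samples, n_channels) recording.
    envelope: (n_samples,) speech envelope to reconstruct.
    Returns ridge weights w such that neural @ w approximates envelope,
    via the closed-form solution w = (X^T X + lam*I)^-1 X^T y.
    """
    X, y = np.asarray(neural), np.asarray(envelope)
    n_ch = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_ch), X.T @ y)

# Synthetic demo: channels carry a scaled envelope plus sensor noise
rng = np.random.default_rng(0)
env = rng.standard_normal(1000)
true_w = rng.standard_normal(8)
neural = np.outer(env, true_w) + 0.1 * rng.standard_normal((1000, 8))

w = fit_backward_model(neural, env)
recon = neural @ w
r = np.corrcoef(recon, env)[0, 1]  # reconstruction fidelity
print(r > 0.9)
```

In practice such models also include a bank of time lags per channel to capture the neural response dynamics; the single-lag version here only illustrates the core regression step.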
Langers, DRM; van Dijk, P; Backes, WH
Although it is known that responses in the auditory cortex are evoked predominantly contralateral to the side of stimulation, the lateralization of responses at lower levels of the human central auditory system has hardly been studied. Furthermore, little is known about the functional interactions
Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean
We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…
Schneider, David M; Mooney, Richard
In the auditory system, corollary discharge signals are theorized to facilitate normal hearing and the learning of acoustic behaviors, including speech and music. Despite clear evidence of corollary discharge signals in the auditory cortex and their presumed importance for hearing and auditory-guided motor learning, the circuitry and function of corollary discharge signals in the auditory cortex are not well described. In this review, we focus on recent developments in the mouse and songbird that provide insights into the circuitry that transmits corollary discharge signals to the auditory system and the function of these signals in the context of hearing and vocal learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
Picton, T. W.; Hillyard, S. A.; Krausz, H. I.; Galambos, R.
Fifteen distinct components can be identified in the scalp recorded average evoked potential to an abrupt auditory stimulus. The early components occurring in the first 8 msec after a stimulus represent the activation of the cochlea and the auditory nuclei of the brainstem. The middle latency components occurring between 8 and 50 msec after the stimulus probably represent activation of both auditory thalamus and cortex but can be seriously contaminated by concurrent scalp muscle reflex potentials. The longer latency components occurring between 50 and 300 msec after the stimulus are maximally recorded over fronto-central scalp regions and seem to represent widespread activation of frontal cortex.
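Component peaks in averaged evoked potentials like these are conventionally measured as the largest deflection within a latency window (e.g. 8-50 ms for the middle-latency components described above). A small sketch of that measurement; the window limits and the synthetic trace below are illustrative only:

```python
import numpy as np

def peak_in_window(erp, t, t_min, t_max):
    """Return (latency_s, amplitude) of the largest deflection of an
    evoked-potential trace within a latency window, in seconds."""
    mask = (t >= t_min) & (t <= t_max)
    seg, seg_t = erp[mask], t[mask]
    i = np.argmax(np.abs(seg))  # largest deflection of either polarity
    return seg_t[i], seg[i]

# Hypothetical trace: a negative deflection near 100 ms ("N1"-like)
t = np.linspace(0, 0.3, 301)                       # 1 ms sampling
erp = -np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))
lat, amp = peak_in_window(erp, t, 0.075, 0.115)
print(round(lat, 3), round(amp, 2))  # 0.1 -1.0
```

Real recordings would first be baseline-corrected against the prestimulus interval, as is standard for peak measurements relative to baseline.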
King, Andrew J
Two recent studies have described how the coupling of excitatory and inhibitory inputs to neurons in the auditory cortex changes during development. This process is driven by experience and, once complete, may limit the plasticity of the cortex in later life. Copyright © 2010 Elsevier Ltd. All rights reserved.
Ford, Judith M; Roach, Brian J; Jorgensen, Kasper W; Turner, Jessica A; Brown, Gregory G; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S; Lim, Kelvin O; Glover, Gary; Potkin, Steven G; Mathalon, Daniel H
Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically "tuned" to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Although "voices" are the anticipated sensory experience, it appears that even primary auditory cortex is "turned on" and "tuned in" to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample.
enhanced relative to the non-musicians for both resolved and unresolved harmonics in the right auditory cortex, right frontal regions and inferior colliculus. However, the increase in neural activation in the right auditory cortex of musicians was predictive of the increased pitch … Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues … of training, which seemed to be specific to the stimuli containing resolved harmonics. Finally, a functional magnetic resonance imaging paradigm was used to examine the response of the auditory cortex to resolved and unresolved harmonics in musicians and non-musicians. The neural responses in musicians were …
Pillion, Joseph P; Shiffler, Dorothy E; Hoon, Alexander H; Lin, Doris D M
To describe auditory function in an individual with bilateral damage to the temporal and parietal cortex. Case report. A previously healthy 17-year-old male is described who sustained extensive cortical injury following an episode of viral meningoencephalitis. He developed status epilepticus and required intubation and multiple anticonvulsants. Serial brain MRIs showed bilateral temporoparietal signal changes reflecting extensive damage to language areas and the first transverse gyrus of Heschl on both sides. The patient was referred for assessment of auditory processing but was so severely impaired in speech processing that he was unable to complete any formal tests of his speech processing abilities. Audiological assessment utilizing objective measures of auditory function established the presence of normal peripheral auditory function and illustrates the importance of the use of objective measures of auditory function in patients with injuries to the auditory cortex. Use of objective measures of auditory function is essential in establishing the presence of normal peripheral auditory function in individuals with cortical damage who may not be able to cooperate sufficiently for assessment utilizing behavioral measures of auditory function.
Mock, Jeffrey R; Foundas, Anne L; Golob, Edward J
Previous studies have shown that speaking affects auditory and motor cortex responsiveness, which may reflect the influence of motor efference copy. If motor efference copy is involved, it would also likely influence auditory and motor cortical activity when preparing to speak. We tested this hypothesis by using auditory event-related potentials and transcranial magnetic stimulation (TMS) of the motor cortex. In the speech condition subjects were visually cued to prepare a vocal response to a subsequent target, which was compared to a control condition without speech preparation. Auditory and motor cortex responsiveness at variable times between the cue and target were probed with an acoustic stimulus (Experiment 1, tone or consonant-vowels) or motor cortical TMS (Experiment 2). Acoustic probes delivered shortly before targets elicited a fronto-central negative potential in the speech condition. Current density analysis showed that auditory cortical activity was attenuated at the beginning of the slow potential in the speech condition. Sensory potentials in response to probes had shorter latencies (N100) and larger amplitudes (P200) when consonant-vowels matched the sound of cue words. Motor cortex excitability was greater in the speech than in the control condition at all time points before picture onset. The results suggest that speech preparation induces top-down regulation of sensory and motor cortex responsiveness, with different time courses for auditory and motor systems. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Coffman, Brian A; Haigh, Sarah M; Murphy, Timothy K; Leiter-Mcbeth, Justin; Salisbury, Dean F
Auditory scene analysis (ASA) dysfunction is likely an important component of the symptomatology of schizophrenia. Auditory object segmentation, the grouping of sequential acoustic elements into temporally-distinct auditory objects, can be assessed with electroencephalography through measurement of the auditory segmentation potential (ASP). Further, N2 responses to the initial and final elements of auditory objects are enhanced relative to medial elements, which may indicate auditory object edge detection (initiation and termination). Both ASP and N2 modulation are impaired in long-term schizophrenia. To determine whether these deficits are present early in the disease course, we compared ASP and N2 modulation between individuals at their first episode of psychosis within the schizophrenia spectrum (FE, N=20) and matched healthy controls (HC, N=24). The ASP was reduced by >40% in FE; however, N2 modulation was not statistically different from HC. This suggests that auditory segmentation (ASP) deficits exist at this early stage of schizophrenia, but auditory edge detection (N2 modulation) is relatively intact. In a subset of subjects for whom structural MRIs were available (N=14 per group), ASP sources were localized to midcingulate cortex (MCC) and temporal auditory cortex. Neurophysiological activity in FE was reduced in MCC, an area linked to aberrant perceptual organization, negative symptoms, and cognitive dysfunction in schizophrenia, but not in temporal auditory cortex. This study supports the validity of the ASP for measurement of auditory object segmentation and suggests that the ASP may be useful as an early index of schizophrenia-related MCC dysfunction. Further, ASP deficits may serve as a viable biomarker of disease presence. Copyright © 2017 Elsevier B.V. All rights reserved.
Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar
Given the relevance of possible hearing losses due to sound overloads and the short list of references of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. No fatigue was observed in terms of sensory receptors. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.
Gutschalk, Alexander; Dykstra, Andrew R
Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is the perception of a wrong stimulus or, more precisely, perception in the absence of a stimulus. Here we discuss four definitions of hallucinations: (1) perceiving a stimulus without the presence of any subject; (2) hallucination proper, wrong perceptions that are not falsifications of a real perception, although they manifest as a new subject and occur along with, and synchronously with, a real perception; (3) hallucination as an out-of-body perception with no correspondence to a real subject; (4) in a stricter sense, hallucinations defined as perceptions in a conscious and awake state in the absence of external stimuli which have qualities of real perception, in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.
No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.
David L Woods
While auditory cortex in non-human primates has been subdivided into multiple functionally-specialized auditory cortical fields (ACFs), the boundaries and functional specialization of human ACFs have not been defined. In the current study, we evaluated whether a widely accepted primate model of auditory cortex could explain regional tuning properties of fMRI activations on the cortical surface to attended and nonattended tones of different frequency, location, and intensity. The limits of auditory cortex were defined by voxels that showed significant activations to nonattended sounds. Three centrally-located fields with mirror-symmetric tonotopic organization were identified and assigned to the three core fields of the primate model while surrounding activations were assigned to belt fields following procedures similar to those used in macaque fMRI studies. The functional properties of core, medial belt, and lateral belt field groups were then analyzed. Field groups were distinguished by tonotopic organization, frequency selectivity, intensity sensitivity, contralaterality, binaural enhancement, attentional modulation, and hemispheric asymmetry. In general, core fields showed greater sensitivity to sound properties than did belt fields, while belt fields showed greater attentional modulation than core fields. Significant distinctions in intensity sensitivity and contralaterality were seen between adjacent core fields A1 and R, while multiple differences in tuning properties were evident at boundaries between adjacent core and belt fields. The reliable differences in functional properties between fields and field groups suggest that the basic primate pattern of auditory cortex organization is preserved in humans. A comparison of the sizes of functionally-defined ACFs in humans and macaques reveals a significant relative expansion in human lateral belt fields implicated in the processing of speech.
Agnew, Z K; McGettigan, C; Banks, B; Scott, S K
Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech. Using functional neuroimaging, subjects were asked to produce sentences aloud, to silently mouth while listening to a different speaker producing the same sentence, to passively listen to sentences being read aloud, or to read sentences silently. We show that separate regions of the superior temporal cortex display distinct response profiles to speaking aloud, mouthing while listening, and passive listening. Responses in anterior superior temporal cortices in both hemispheres are greater for passive listening compared with both mouthing while listening, and speaking aloud. This is the first demonstration that articulation, whether or not it has auditory consequences, modulates responses of the dorsolateral temporal cortex. In contrast, posterior regions of the superior temporal cortex are recruited during both articulation conditions. In dorsal regions of the posterior superior temporal gyrus, responses to mouthing and reading aloud were equivalent, and in more ventral posterior superior temporal sulcus, responses were greater for reading aloud compared with mouthing while listening. These data demonstrate an anterior-posterior division of superior temporal regions where anterior fields are suppressed during motor output, potentially for the purpose of enhanced detection of the speech of others. We suggest posterior fields are engaged in auditory processing for the guidance of articulation by auditory information. Copyright © 2012 Elsevier Inc. All rights reserved.
Xiong, Ying; Zhang, Yonghai; Yan, Jun
Auditory learning or experience induces large-scale neural plasticity in not only the auditory cortex but also in the auditory thalamus and midbrain. Such plasticity is guided by acquired sound (sound-specific auditory plasticity). The mechanisms involved in this process have been studied from various approaches and support the presence of a core neural circuit consisting of a subcortico-cortico-subcortical tonotopic loop supplemented by neuromodulatory (e.g., cholinergic) inputs. This circuit has three key functions essential for establishing large-scale and sound-specific plasticity in the auditory cortex, auditory thalamus and auditory midbrain. They include the presence of sound information for guiding the plasticity, the communication between the cortex, thalamus and midbrain for coordinating the plastic changes and the adjustment of the circuit status for augmenting the plasticity. This review begins with an overview of sound-specific auditory plasticity in the central auditory system. It then introduces the core neural circuit which plays an essential role in inducing sound-specific auditory plasticity. Finally, the core neural circuit and its relationship to auditory learning and experience are discussed.
Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components, such as the perceived loudness, the lateralization, and the tinnitus type (pure tone, noise-like), and associated emotional components, such as distress and mood changes. Source localization of qEEG data demonstrates the involvement of auditory brain areas as well as several non-auditory brain areas, such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsolateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept, increases the explanatory power of the non-auditory brain areas' involvement in tinnitus. Thus the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature.
Background: Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results: Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200 and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to the 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions: The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in the speech envelope.
Möttönen, Riikka; van de Ven, Gido M; Watkins, Kate E
The earliest stages of cortical processing of speech sounds take place in the auditory cortex. Transcranial magnetic stimulation (TMS) studies have provided evidence that the human articulatory motor cortex contributes also to speech processing. For example, stimulation of the motor lip representation influences specifically discrimination of lip-articulated speech sounds. However, the timing of the neural mechanisms underlying these articulator-specific motor contributions to speech processing is unknown. Furthermore, it is unclear whether they depend on attention. Here, we used magnetoencephalography and TMS to investigate the effect of attention on specificity and timing of interactions between the auditory and motor cortex during processing of speech sounds. We found that TMS-induced disruption of the motor lip representation modulated specifically the early auditory-cortex responses to lip-articulated speech sounds when they were attended. These articulator-specific modulations were left-lateralized and remarkably early, occurring 60-100 ms after sound onset. When speech sounds were ignored, the effect of this motor disruption on auditory-cortex responses was nonspecific and bilateral, and it started later, 170 ms after sound onset. The findings indicate that articulatory motor cortex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks and when the sounds are not in the focus of attention. Importantly, the findings also show that attention can selectively facilitate the interaction of the auditory cortex with specific articulator representations during speech processing.
Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language; entrainment is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
Andreas L. Schulz
Goal-directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task-relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still poorly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttle box to discriminate between falling and rising frequency modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.
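Field-field coherence of the kind used in the study above is a standard spectral measure of functional coupling between two recordings. A minimal Python sketch follows; the 12 Hz shared rhythm, sampling rate, and noise levels are invented for illustration and are not values from the study.

```python
import numpy as np
from scipy.signal import coherence

# Simulate two field potentials that share a common oscillatory component,
# as a stand-in for cortical and striatal recordings.
fs = 500.0                              # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)

shared = np.sin(2 * np.pi * 12 * t)     # hypothetical shared 12 Hz rhythm
lfp_cortex = shared + 0.5 * rng.standard_normal(t.size)
lfp_striatum = shared + 0.5 * rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence: near 1 at the shared
# frequency, low at frequencies where only independent noise is present.
f, cxy = coherence(lfp_cortex, lfp_striatum, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(cxy)]
```

Coherence is bounded in [0, 1] per frequency, so an increase "during Go-stimulus presentations" would be quantified by computing it separately over stimulus-locked segments and comparing across training.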
Afra, Pegah; Anderson, Jeffrey; Funke, Michael; Johnson, Michael; Matsuo, Fumisuke; Constantino, Tawnya; Warner, Judith
We present a case of acquired auditory-visual synesthesia and its neurophysiological investigation in a healthy 42-year-old woman. She started experiencing persistent positive and intermittent negative visual phenomena at age 37 followed by auditory-visual synesthesia. Her neurophysiological investigation included video-EEG, fMRI, and MEG. Auditory stimuli (700 Hz, 50 ms duration, 0.5 s ISI) were presented binaurally at 60 dB above the hearing threshold in a dark room. The patient had bilateral symmetrical auditory-evoked neuromagnetic responses followed by an occipital-evoked field 16.3 ms later. The activation of occipital cortex following auditory stimuli may represent recruitment of existing cross-modal sensory pathways.
Kikuchi, Yoshikazu; Okamoto, Tsuyoshi; Ogata, Katsuya; Hagiwara, Koichi; Umezaki, Toshiro; Kenjo, Masamutsu; Nakagawa, Takashi; Tobimatsu, Shozo
In a previous magnetoencephalographic study, we showed both functional and structural reorganization of the right auditory cortex and impaired left auditory cortex function in people who stutter (PWS). In the present work, we reevaluated the same dataset to further investigate how the right and left auditory cortices interact to compensate for stuttering. We evaluated bilateral N100m latencies as well as indices of local and inter-hemispheric phase synchronization of the auditory cortices. The left N100m latency was significantly prolonged relative to the right N100m latency in PWS, while healthy control participants did not show any inter-hemispheric differences in latency. A phase-locking factor (PLF) analysis, which indicates the degree of local phase synchronization, demonstrated enhanced alpha-band synchrony in the right auditory area of PWS. A phase-locking value (PLV) analysis of inter-hemispheric synchronization demonstrated significant elevations in the beta band between the right and left auditory cortices in PWS. In addition, right PLF and PLVs were positively correlated with stuttering frequency in PWS. Taken together, our data suggest that increased right hemispheric local phase synchronization and increased inter-hemispheric phase synchronization are electrophysiological correlates of a compensatory mechanism for impaired left auditory processing in PWS. Published by Elsevier B.V.
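The phase-locking factor (within-site, across-trial phase consistency) and phase-locking value (across-trial consistency of the phase difference between two sites) used in the study above are standard quantities. A minimal NumPy/SciPy sketch follows; the simulated 10 Hz trials with a fixed quarter-cycle lag between sites are an assumption made purely for the example.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_factor(trials):
    """Local phase consistency across trials (inter-trial coherence).
    trials: (n_trials, n_samples) narrow-band signals; returns values in [0, 1]."""
    phase = np.angle(hilbert(trials, axis=1))
    return np.abs(np.exp(1j * phase).mean(axis=0))

def phase_locking_value(trials_a, trials_b):
    """Inter-site phase synchronization: consistency across trials of the
    phase difference between two simultaneous recordings."""
    dphi = np.angle(hilbert(trials_a, axis=1)) - np.angle(hilbert(trials_b, axis=1))
    return np.abs(np.exp(1j * dphi).mean(axis=0))

# Toy data: 20 trials of a 10 Hz oscillation with a random phase per trial;
# site B lags site A by a fixed quarter cycle, so PLV should be near 1
# even though the per-site PLF is low.
fs, n_trials = 250.0, 20
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
offsets = rng.uniform(0, 2 * np.pi, n_trials)
site_a = np.array([np.sin(2 * np.pi * 10 * t + o) for o in offsets])
site_b = np.array([np.sin(2 * np.pi * 10 * t + o + np.pi / 4) for o in offsets])
plv = phase_locking_value(site_a, site_b)
plf = phase_locking_factor(site_a)
```

In practice the signals are band-pass filtered first (e.g. into the alpha or beta band, as in the study), and edge samples of the Hilbert transform are discarded.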
San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory
Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom
Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan
Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.
Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku
The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. Results of the activation difference of the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation of the ILL stimulus from the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.
Christian F Altmann
Ranging of auditory objects relies on several acoustic cues and is possibly modulated by additional visual information. Sound pressure level can serve as a cue for distance perception because it decreases with increasing distance. In this magnetoencephalography (MEG) experiment, we tested whether psychophysical loudness judgment and N1m MEG responses are modulated by visual distance cues. To this end, we paired noise bursts at different sound pressure levels with synchronous visual cues at different distances. We hypothesized that noise bursts paired with far visual cues would be perceived as louder and result in increased N1m amplitudes compared to a pairing with close visual cues. The rationale behind this was that listeners might compensate for the visually induced object distance when processing loudness. Psychophysically, we observed no significant modulation of loudness judgments by visual cues. However, N1m MEG responses at about 100 ms after stimulus onset were significantly stronger for far versus close visual cues in the left auditory cortex. N1m responses in the right auditory cortex increased with increasing sound pressure level, but were not modulated by visual distance cues. Thus, our results suggest an audio-visual interaction in the left auditory cortex that is possibly related to cue integration for auditory distance processing.
Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia who previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia.
Hari M Bharadwaj
Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimated how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory results.
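Frequency tagging as described above reduces, at its simplest, to reading out spectral power at each stream's modulation rate. The toy sketch below illustrates the idea; the 37 Hz and 43 Hz tag rates, the amplitudes, and the noise level are invented for the example and do not come from the study.

```python
import numpy as np

def power_at(signal, fs, freq):
    """Windowed FFT power at a single tagged frequency."""
    windowed = signal * np.hanning(signal.size)
    spec = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return np.abs(spec[np.argmin(np.abs(freqs - freq))]) ** 2

# Toy recording: the attended stream's tag (37 Hz, assumed) drives a
# larger steady-state response than the ignored stream's tag (43 Hz,
# assumed), on top of broadband noise.
fs = 1000.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
eeg = (1.5 * np.sin(2 * np.pi * 37 * t)
       + 0.5 * np.sin(2 * np.pi * 43 * t)
       + rng.standard_normal(t.size))

p_attended = power_at(eeg, fs, 37.0)
p_ignored = power_at(eeg, fs, 43.0)
```

With a 4 s epoch the FFT bin spacing is 0.25 Hz, so both tag rates fall on exact bins; in sensor or source data the same readout is applied per channel or per source location.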
Dr. Hassan Ashayeri
Full Text Available Studying auditory discrimination in children and the role it plays in acquiring language skills is of great importance. The relationship between articulation disorders and the ability to discriminate speech sounds is likewise an important topic for speech and language researchers. Previous event-related potential (ERP) studies have suggested a possible participation of the visual cortex in auditory processing in the blind. In this study, blind and sighted subjects were asked to discriminate 100 pairs of Farsi words (an auditory discrimination task) while listening to them from a recorded tape. The results showed that the blind subjects were able to discriminate the heard material better than the sighted subjects (P<0.05). According to this study, in blind subjects cortical areas normally reserved for vision may be activated by other sensory modalities, which is in accordance with previous studies. We suggest that the auditory cortex expands in blind humans.
Full Text Available While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that - whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis - the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e. myelination) as well as of functional properties (e.g. broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.
Wu, Calvin; Stefanescu, Roxana A; Martel, David T; Shore, Susan E
Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders.
Auditory cohesion problems: this is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels.
Sato, Marc; Tremblay, Pascale; Gracco, Vincent L.
Consistent with a functional role of the motor system in speech perception, disturbing the activity of the left ventral premotor cortex by means of repetitive transcranial magnetic stimulation (rTMS) has been shown to impair auditory identification of syllables that were masked with white noise. However, whether this region is crucial for speech…
Brishna Soraya Kamal
Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.
Golden, Hannah L; Agustus, Jennifer L; Goll, Johanna C; Downey, Laura E; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D
Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but is vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (AD; n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology.
Mohammad hosein Hekmat Ara
Full Text Available Hearing is one of the essential senses of human beings. Sound waves travel through the air, enter the ear canal, and strike the tympanic membrane. The middle ear transfers almost 60-80% of this mechanical energy to the inner ear by means of “impedance matching”. The sound energy is then converted into a traveling wave, which is transferred according to its specific frequency and stimulates the organ of Corti. Receptors in this organ and their synapses transform the mechanical waves into neural signals and transfer them to the brain. The central nervous system tract conducting the auditory signals to the auditory cortex will be explained here briefly.
Full Text Available Humans are highly adept at processing speech. Recently, it has been shown that slow temporal information in speech (i.e., the envelope of speech) is critical for speech comprehension. Furthermore, it has been found that evoked electric potentials in human cortex are correlated with the speech envelope. However, it has been unclear whether this essential linguistic feature is encoded differentially in specific regions, or whether it is represented throughout the auditory system. To answer this question, we recorded neural data with high temporal resolution directly from the cortex while human subjects listened to a spoken story. We found that gamma activity in human auditory cortex robustly tracks the speech envelope. The effect is so marked that it is observed during a single presentation of the spoken story to each subject. The effect is stronger in regions situated relatively early in the auditory pathway (belt areas) compared to other regions involved in speech processing, including the superior temporal gyrus (STG) and the posterior inferior frontal gyrus (Broca's region). To further distinguish whether the speech envelope is encoded in the auditory system as a phonological (speech-related) feature, or instead as a more general acoustic feature, we also probed the auditory system with a melodic stimulus. We found that belt areas track the melody envelope weakly, and that they were the only regions considered to do so. Together, our data provide the first direct electrophysiological evidence that the envelope of speech is robustly tracked in non-primary auditory cortex (belt areas in particular), and suggest that the considered higher-order regions (STG and Broca's region) partake in a more abstract linguistic analysis.
Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 1970s represents an attempt to interpret contemporary environments through musical and auditory...
Brown, Rachel M; Palmer, Caroline
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Saoud, Houda; Josse, Goulven; Bertasi, Eric; Truy, Eric; Chait, Maria; Giraud, Anne-Lise
Asymmetry in auditory cortical oscillations could play a role in speech perception by fostering hemispheric triage of information across the two hemispheres. Due to this asymmetry, fast speech temporal modulations relevant for phonemic analysis could be best perceived by the left auditory cortex, while slower modulations conveying vocal and paralinguistic information would be better captured by the right one. It is unclear, however, whether and how early oscillation-based selection influences speech perception. Using a dichotic listening paradigm in human participants, where we provided different parts of the speech envelope to each ear, we show that word recognition is facilitated when the temporal properties of speech match the rhythmic properties of auditory cortices. We further show that the interaction between speech envelope and auditory cortical rhythms translates into their level of neural activity (as measured with fMRI). In the left auditory cortex, the neural activity level related to stimulus-brain rhythm interaction predicts speech perception facilitation. These data demonstrate that speech interacts with auditory cortical rhythms differently in right and left auditory cortex, and that in the latter, the interaction directly impacts speech perception performance.
Park, Hyojin; Ince, Robin A A; Schyns, Philippe G; Thut, Gregor; Gross, Joachim
Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception.
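As a sketch of the causal-connectivity measure named above, transfer entropy for discretized signals with history length 1 can be estimated directly from empirical probabilities: it measures how much the past of X reduces uncertainty about the future of Y beyond Y's own past. The delayed-copy signals below are synthetic illustrations, not study data; real MEG analyses would use binned continuous phase signals and bias correction.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for discrete sequences, history length 1."""
    x, y = list(x), list(y)
    n = len(x) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (future, past_y, past_x)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    hist_y = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_future_given_both = c / pairs_yx[(y0, x0)]
        p_future_given_past = pairs_yy[(y1, y0)] / hist_y[y0]
        te += (c / n) * np.log2(p_future_given_both / p_future_given_past)
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]   # y is a one-step-delayed copy of x: information flows x -> y
print(transfer_entropy(x, y) > transfer_entropy(y, x))  # True
```

The asymmetry of the estimate (large for X→Y, near zero for Y→X) is what makes transfer entropy a directed, "top-down vs bottom-up" measure, unlike correlation or coherence.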
Katharina S. Rufener
Full Text Available Neural oscillations in the gamma range are the dominant rhythmic activation pattern in the human auditory cortex. These gamma oscillations are functionally relevant for the processing of rapidly changing acoustic information in both speech and non-speech sounds. Accordingly, there is a tight link between the temporal resolution ability of the auditory system and inherent neural gamma oscillations. Transcranial random noise stimulation (tRNS) has been demonstrated to specifically increase gamma oscillations in the human auditory cortex. However, neither the physiological mechanisms of tRNS nor the behavioral consequences of this intervention are completely understood. In the present study we stimulated the human auditory cortex bilaterally with tRNS while EEG was continuously measured. Modulations in the participants’ temporal and spectral resolution ability were investigated by means of a gap detection task and a pitch discrimination task. Compared to sham, auditory tRNS increased the detection rate for near-threshold stimuli in the temporal domain only, while no such effect was present for the discrimination of spectral features. Behavioral findings were paralleled by reduced peak latencies of the P50 and N1 components of the auditory event-related potentials (ERPs), indicating an impact on early sensory processing. The facilitating effect of tRNS was limited to the processing of near-threshold stimuli, while stimuli clearly below and above the individual perception threshold were not affected by tRNS. This non-linear relationship between the signal-to-noise level of the presented stimuli and the effect of stimulation further qualifies stochastic resonance (SR) as the underlying mechanism of tRNS on auditory processing. Our results demonstrate a tRNS-related improvement in acoustic perception of time-critical auditory information and, thus, provide further indices that auditory tRNS can amplify the resonance frequency of the auditory system.
Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.
Milner, Rafał; Rusiniak, Mateusz; Wolak, Tomasz; Piatkowska-Janko, Ewa; Naumczyk, Patrycja; Bogorodzki, Piotr; Senderski, Andrzej; Ganc, Małgorzata; Skarzyński, Henryk
Processing of auditory information in the central nervous system is based on a series of rapidly occurring neural processes that cannot be separately monitored using fMRI registration alone. Simultaneous recording of auditory evoked potentials (AEPs), characterized by good temporal resolution, and functional magnetic resonance imaging, with excellent spatial resolution, allows studying higher auditory functions with precision both in time and space. The aim was to implement the simultaneous AEP-fMRI recording method for the investigation of information processing at different levels of the central auditory system. Five healthy volunteers, aged 22-35 years, participated in the experiment. The study was performed using a high-field (3T) MR scanner from Siemens and a 64-channel electrophysiological system (Neuroscan, Compumedics). Auditory evoked potentials generated by acoustic stimuli (standard and deviant tones) were registered using a modified odd-ball procedure. Functional magnetic resonance recordings were performed using a sparse acquisition paradigm. The results of the electrophysiological registrations were analyzed by determining voltage distributions of the AEPs on the skull and modeling their bioelectrical intracerebral generators (dipoles). fMRI activations were determined on the basis of deviant-to-standard and standard-to-deviant functional contrasts. Results obtained from the electrophysiological studies were integrated with the functional outcomes. The morphology, amplitude, latency, and voltage distribution of auditory evoked potentials (P1, N1, P2) to standard stimuli presented during simultaneous AEP-fMRI registrations were very similar to the responses obtained outside the scanner room. Significant fMRI activations to standard stimuli were found mainly in the auditory cortex. Activations in these regions corresponded with N1-wave dipoles modeled on the basis of auditory potentials generated by standard tones. Auditory evoked potentials to deviant stimuli were recorded only outside the MRI scanner.
Full Text Available Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory-feature-dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
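A minimal numerical sketch of the competition mechanism described above: two percept units coupled by mutual inhibition, with slow adaptation and noise, produce irregular alternations in dominance. All equations and parameter values here are generic illustrations of this class of model, not the paper's actual network (which adds feature-dependent pulsatile inputs and NMDA recurrent excitation).

```python
import numpy as np

def simulate(n_steps=50000, dt=0.001, beta=1.1, g_adapt=1.0,
             tau=0.01, tau_a=1.0, noise=0.05, seed=0):
    """Two percept units with mutual inhibition, slow adaptation and noise.
    Returns which unit is dominant at each time step."""
    rng = np.random.default_rng(seed)
    gain = lambda u: 1.0 / (1.0 + np.exp(-10.0 * (u - 0.2)))  # sigmoidal f-I curve
    r = np.array([0.6, 0.4])          # firing rates, slightly asymmetric start
    a = np.zeros(2)                   # slow adaptation variables
    dominant = np.empty(n_steps, dtype=int)
    for i in range(n_steps):
        # drive = input minus cross-inhibition minus adaptation
        drive = 1.0 - beta * r[::-1] - g_adapt * a
        r += dt / tau * (-r + gain(drive)) + np.sqrt(dt) * noise * rng.standard_normal(2)
        a += dt / tau_a * (-a + r)
        dominant[i] = int(r[1] > r[0])
    return dominant

dom = simulate()
switches = np.count_nonzero(np.diff(dom))
print(switches > 0)  # True: dominance alternates between the two percepts
```

The winner suppresses the loser through inhibition, its adaptation slowly builds until the drive collapses, and noise makes the resulting dominance durations irregular, which is the qualitative behavior the abstract describes.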
Clarke, Dave F; Boop, Frederick A; McGregor, Amy L; Perkins, F Frederick; Brewer, Vickie R; Wheless, James W
Ear plugging (placing fingers in or covering the ears) is a clinical seizure semiology that has been described as a response to an unformed auditory hallucination localized to the superior temporal neocortex. Ear plugging accompanying more complex auditory hallucinations may involve more extensive circuitry. We report on one child whose aura was a more complex auditory phenomenon, consisting of a door opening and closing, getting louder as the ictus persisted. This child presented, at four years of age, with brief episodes of ear plugging followed by an acute emotional change that persisted until surgical resection of a left mesial frontal lesion at 11 years of age. Scalp video-EEG, magnetic resonance imaging, magnetoencephalography, and invasive video-EEG monitoring were carried out. The scalp EEG changes always started after clinical onset. These were not localizing, and encompassed a wide field over the bi-frontal head regions, the left side predominant over the right. Intracranial video-EEG monitoring with subdural electrodes over both frontal and temporal regions localized the seizure onset to the left mesial frontal lesion. The patient has remained seizure-free since the resection on June 28, 2006, approximately one and a half years ago. Ear plugging in response to simple auditory auras localizes to the superior temporal gyrus. If the patient has more complex, formed auditory auras, not only may the secondary auditory areas in the temporal lobe be involved, but one has to entertain the possibility of ictal onset from the frontal cortex.
Full Text Available Background and Aim: Auditory neuropathy (AN) can be diagnosed by an abnormal auditory brainstem response (ABR) in the presence of normal cochlear microphonics (CM) and otoacoustic emissions (OAEs). The aim of this study was to investigate the ABR and other electrodiagnostic test results of 6 patients suspected of AN with problems in speech recognition. Materials and Methods: This cross-sectional study was conducted on 6 AN patients of different ages evaluated by pure tone audiometry, speech discrimination score (SDS), immittance audiometry, electrocochleography, ABR, middle latency response (MLR), late latency response (LLR), and OAEs. Results: Behavioral pure tone audiometric tests showed moderate to profound hearing loss. SDS was poor, out of proportion to pure tone thresholds. All patients had normal tympanograms but absent acoustic reflexes. CMs and OAEs were within normal limits. There was no contralateral suppression of OAEs. None of the cases had a normal ABR or MLR, although LLR was recorded in 4. Conclusion: All patients in this study are typical cases of auditory neuropathy. Despite abnormal input, the LLR remained normal, which indicates differences among auditory evoked potentials in the neural synchrony they require. These findings show that the auditory cortex may play a role in regulating the presentation of deficient signals along auditory pathways at early stages.
Martin, Stephanie; Mikutta, Christian; Leonard, Matthew K; Hungate, Dylan; Koelsch, Stefan; Shamma, Shihab; Chang, Edward F; Millán, José Del R; Knight, Robert T; Pasley, Brian N
Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from an epileptic patient with proficient music ability in 2 conditions. First, the participant played 2 piano pieces on an electronic piano with the sound volume of the digital keyboard on. Second, the participant replayed the same piano pieces, but without auditory feedback, and the participant was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, thus allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery with substantial, but not complete overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception.
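The encoding-model approach named above can be sketched as ridge regression from a time-lagged spectrogram onto a neural activity trace, yielding a spectrotemporal receptive field (STRF). The synthetic spectrogram, filter, and noise level below are illustrative assumptions; the study fit such models to recorded high-gamma activity rather than simulated data.

```python
import numpy as np

def lagged_design(spec, n_lags):
    """Stack time-lagged copies of a (time x frequency) spectrogram into a design matrix."""
    T, F = spec.shape
    X = np.zeros((T, n_lags * F))
    for lag in range(n_lags):
        X[lag:, lag * F:(lag + 1) * F] = spec[:T - lag]
    return X

def fit_strf(spec, neural, n_lags=5, alpha=1.0):
    """Ridge-regression estimate of a spectrotemporal receptive field."""
    X = lagged_design(spec, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ neural)

# Synthetic check: simulate activity from a known random filter,
# then verify the fitted STRF recovers it.
rng = np.random.default_rng(0)
T, F, n_lags = 2000, 8, 5
spec = rng.standard_normal((T, F))        # stand-in for a music spectrogram
true_w = rng.standard_normal(n_lags * F)  # hypothetical ground-truth filter
neural = lagged_design(spec, n_lags) @ true_w + 0.1 * rng.standard_normal(T)
w_hat = fit_strf(spec, neural, n_lags)
print(np.corrcoef(w_hat, true_w)[0, 1] > 0.99)  # True: filter recovered
```

Comparing filters fit separately to perception and imagery conditions, as the study did, then amounts to comparing two such weight vectors per electrode.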
Workshop on experiences with and use of activating teaching methods in lecture halls and with large classes. Which methods have worked well and which poorly? What considerations should one make?
Wightman, Frederic L.; Jenison, Rick
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Profant, Oliver; Tintěra, J.; Balogová, Zuzana; Ibrahim, I.; Jílek, Milan; Syka, Josef
Vol. 10, No. 3 (2015), e0116692. E-ISSN 1932-6203. R&D Projects: GA ČR GAP304/10/1872; GA ČR (CZ) GBP304/12/G069. Institutional support: RVO:68378041. Keywords: age-related changes; hearing loss; hemispheric asymmetry; speech perception; elderly listeners; cognitive decline; neural mechanisms; working memory; older adults; presbycusis. Subject RIV: FH - Neurology. Impact factor: 3.057, year: 2015
Jeong, Jin Kwon; Tremere, Liisa A.; Ryave, Michael J.; Vuong, Victor C.; Pinaud, Raphael
Recent studies on the anatomical and functional organization of GABAergic networks in central auditory circuits of the zebra finch have highlighted the strong impact of inhibitory mechanisms on both the central encoding and processing of acoustic information in a vocal learning species. Most of this work has focused on the caudomedial nidopallium (NCM), a forebrain area postulated to be the songbird analogue of the mammalian auditory association cortex. NCM houses neurons with selective respo...
Transcranial direct current stimulation (tDCS) is attracting increasing interest because of its potential for therapeutic use. While its effects have been investigated mainly with motor and visual tasks, less is known in the auditory domain. Past tDCS studies with auditory tasks demonstrated various behavioural outcomes, possibly due to differences in stimulation parameters or task measurements used in each study. Further research using well-validated tasks is therefore required to clarify the behavioural effects of tDCS on the auditory system. Here, we took advantage of findings from a prior functional magnetic resonance imaging study, which demonstrated that the right auditory cortex is modulated during fine-grained pitch learning of microtonal melodic patterns. Targeting the right auditory cortex with tDCS using this same task thus allowed us to test the hypothesis that this region is causally involved in pitch learning. Participants in the current study were trained for three days; on each day, we measured pitch discrimination thresholds for microtonal melodies with a psychophysical staircase procedure. We administered anodal, cathodal, or sham tDCS to three groups of participants over the right auditory cortex on the second day of training during performance of the task. Both the sham and the cathodal groups showed the expected significant learning effect (decreased pitch thresholds over the three days of training); in contrast, we observed a blocking effect of anodal tDCS on auditory pitch learning, such that this group showed no significant change in thresholds over the three days. The results support a causal role for the right auditory cortex in pitch discrimination learning.
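The adaptive threshold measurement mentioned above, a psychophysical staircase, can be sketched in a few lines. The rule and step size below (a 2-down/1-up transformed staircase, with the threshold taken as the mean of reversal levels) are illustrative assumptions, not the study's actual parameters:

```python
def staircase(respond, start, step, n_down=2, n_reversals=8):
    """Transformed up-down staircase (n_down-down / 1-up; with n_down=2
    it converges near 70.7% correct). `respond(level)` returns True for
    a correct trial. Illustrative sketch only: the study's actual step
    sizes and rule are not given in the abstract."""
    level, run, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(level):
            run += 1
            if run >= n_down:            # enough correct in a row: step down
                run = 0
                if direction == +1:      # was moving up -> reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:                            # any miss: step up
            run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)  # threshold estimate

# Deterministic simulated listener with a true threshold of 20
# (correct only above 20): the staircase converges to straddle it.
est = staircase(lambda lvl: lvl > 20, start=50.0, step=2.0)
print(est)  # → 21.0
```

Real listeners respond probabilistically, so in practice the reversal levels scatter around the convergence point rather than alternating exactly.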
Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal
Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.
This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's gyrus (HG), containing the primary auditory cortex, the planum temporale (PT), and the superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS, and an increase of the HG-to-PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.
Mathias, Brian; Palmer, Caroline; Perrin, Fabien; Tillmann, Barbara
Sounds that have been produced with one's own motor system tend to be remembered better than sounds that have only been perceived, suggesting a role of motor information in memory for auditory stimuli. To address potential contributions of the motor network to the recognition of previously produced sounds, we used event-related potential, electric current density, and behavioral measures to investigate memory for produced and perceived melodies. Musicians performed or listened to novel melodies, and then heard the melodies either in their original version or with single pitch alterations. Production learning enhanced subsequent recognition accuracy and increased amplitudes of N200, P300, and N400 responses to pitch alterations. Premotor and supplementary motor regions showed greater current density during the initial detection of alterations in previously produced melodies than in previously perceived melodies, associated with the N200. Primary motor cortex was more strongly engaged by alterations in previously produced melodies within the P300 and N400 timeframes. Motor memory traces may therefore interface with auditory pitch percepts in premotor regions as early as 200 ms following perceived pitch onsets. Outcomes suggest that auditory-motor interactions contribute to memory benefits conferred by production experience, and support a role of motor prediction mechanisms in the production effect. © The Author 2014. Published by Oxford University Press. All rights reserved.
Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji
A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048
Sato, Marc; Troille, Emilie; Ménard, Lucie; Cathiard, Marie-Agnès; Gracco, Vincent
The concept of an internal forward model that internally simulates the sensory consequences of an action is a central idea in speech motor control. Consistent with this hypothesis, silent articulation has been shown to modulate activity of the auditory cortex and to improve the auditory identification of concordant speech sounds, when embedded in white noise. In the present study, we replicated and extended this behavioral finding by showing that silently articulating a syllable in synchrony with the presentation of a concordant auditory and/or visually ambiguous speech stimulus improves its identification. Our results further demonstrate that, even in the case of perfect perceptual identification, concurrent mouthing of a syllable speeds up the perceptual processing of a concordant speech stimulus. These results reflect multisensory-motor interactions during speech perception and provide new behavioral arguments for internally generated sensory predictions during silent speech production.
Pecenka, Nadine; Engel, Annerose; Keller, Peter E
Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
Hall, M.; Smeele, P.M.T.; Kuhl, P.K.
The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual
Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic
We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.
Deen, Ben; Saxe, Rebecca; Bedny, Marina
In congenital blindness, the occipital cortex responds to a range of nonvisual inputs, including tactile, auditory, and linguistic stimuli. Are these changes in functional responses to stimuli accompanied by altered interactions with nonvisual functional networks? To answer this question, we introduce a data-driven method that searches across cortex for functional connectivity differences across groups. Replicating prior work, we find increased fronto-occipital functional connectivity in congenitally blind relative to blindfolded sighted participants. We demonstrate that this heightened connectivity extends over most of occipital cortex but is specific to a subset of regions in the inferior, dorsal, and medial frontal lobe. To assess the functional profile of these frontal areas, we used an n-back working memory task and a sentence comprehension task. We find that, among prefrontal areas with overconnectivity to occipital cortex, one left inferior frontal region responds to language over music. By contrast, the majority of these regions responded to working memory load but not language. These results suggest that in blindness occipital cortex interacts more with working memory systems and raise new questions about the function and mechanism of occipital plasticity.
Lizarazu, Mikel; Lallier, Marie; Molinaro, Nicola; Bourguignon, Mathieu; Paz-Alonso, Pedro M; Lerma-Usabiaga, Garikoitz; Carreiras, Manuel
Whether phonological deficits in developmental dyslexia are associated with impaired neural sampling of auditory information at either syllabic or phonemic rates is still under debate. In addition, whereas neuroanatomical alterations in auditory regions have been documented in dyslexic readers, whether and how these structural anomalies are linked to auditory sampling and reading deficits remains poorly understood. In this study, we measured auditory neural synchronization at different frequencies corresponding to relevant phonological spectral components of speech in children and adults with and without dyslexia, using magnetoencephalography. Furthermore, structural MRI was used to estimate the cortical thickness of the auditory cortex of participants. Dyslexics showed atypical brain synchronization at both syllabic (slow) and phonemic (fast) rates. Interestingly, while a left-hemispheric asymmetry in cortical thickness was functionally related to a stronger left-hemispheric lateralization of neural synchronization to stimuli presented at the phonemic rate in skilled readers, the same anatomical index in dyslexics was related to a stronger right-hemispheric dominance for neural synchronization to syllabic-rate auditory stimuli. These data suggest that the acoustic sampling deficit in developmental dyslexia might be linked to an atypical specialization of the auditory cortex to both low- and high-frequency amplitude modulations. © 2015 Wiley Periodicals, Inc.
Costa-Faidella, Jordi; Baldeweg, Torsten; Grimm, Sabine; Escera, Carles
Neural activity in the auditory system decreases with repeated stimulation, matching stimulus probability on multiple timescales. This phenomenon, known as stimulus-specific adaptation, is interpreted as a neural mechanism of regularity encoding aiding auditory object formation. However, despite the overwhelming literature covering recordings from single cells to scalp auditory-evoked potentials (AEPs), stimulation timing has received little interest. Here we investigated whether timing predictability enhances the experience-dependent modulation of neural activity associated with stimulus probability encoding. We used human electrophysiological recordings in healthy participants who were exposed to passive listening of sound sequences. Pure tones of different frequencies were delivered in successive trains of a variable number of repetitions, enabling the study of sequential repetition effects in the AEP. In the predictable-timing condition, tones were delivered with isochronous interstimulus intervals; in the unpredictable-timing condition, interstimulus intervals varied randomly. Our results show that unpredictable stimulus timing abolishes the early part of the repetition positivity, an AEP indexing auditory sensory memory trace formation, while leaving the later part (beyond ≈200 ms) unaffected. This suggests that timing predictability aids the propagation of repetition effects upstream along the auditory pathway, most likely from association auditory cortex (including the planum temporale) toward primary auditory cortex (Heschl's gyrus) and beyond, as judged by the timing of AEP latencies. This outcome calls for attention to stimulation timing in future experiments regarding sensory memory trace formation in AEP measures and stimulus probability encoding in animal models.
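The two timing conditions described above (isochronous versus randomly varying interstimulus intervals) amount to a simple choice of stimulus schedule. A minimal sketch, with illustrative ISI values that are not taken from the study:

```python
import random

def tone_onsets(n_tones, isi=0.5, jitter=0.0, seed=None):
    """Onset times (s) for a tone train. jitter=0.0 reproduces the
    isochronous (predictable-timing) condition; jitter > 0 draws each
    interstimulus interval uniformly from isi +/- jitter, i.e. the
    unpredictable-timing condition. The numeric values are assumptions
    for illustration; the study's actual ISIs are not in the abstract."""
    rng = random.Random(seed)
    t, onsets = 0.0, []
    for _ in range(n_tones):
        onsets.append(t)
        t += isi + (rng.uniform(-jitter, jitter) if jitter else 0.0)
    return onsets

print(tone_onsets(5))  # → [0.0, 0.5, 1.0, 1.5, 2.0]
jittered = tone_onsets(5, jitter=0.2, seed=1)
print(all(0.3 <= b - a <= 0.7 for a, b in zip(jittered, jittered[1:])))  # → True
```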
Tobias Borra; Huib Versnel; Chantal Kemner; A. John van Opstal; Raymond van Ee
... tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone...
Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
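The temporal half of Ando's correlation-based model can be illustrated with a minimal autocorrelation computation: periodicity, the correlate of pitch, appears as a peak in the ACF at the lag of the waveform period. This is a toy sketch of the idea, not Ando's full running-ACF formulation (the spatial half, the interaural cross-correlation, would be computed analogously between the two ear signals):

```python
import math

def autocorrelation(x, max_lag):
    """Normalized autocorrelation function (ACF) of a mono signal.
    acf[0] is 1 by construction; a secondary peak at lag k signals
    periodicity with period k samples."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    denom = sum(v * v for v in x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) / denom
            for k in range(max_lag + 1)]

# One second of a 200 Hz tone at 8 kHz: the ACF peaks again at the
# 40-sample (5 ms) period lag, the correlate of its pitch.
fs, f0 = 8000, 200
tone = [math.sin(2 * math.pi * f0 * i / fs) for i in range(fs)]
acf = autocorrelation(tone, max_lag=60)
print(round(acf[fs // f0], 3))  # → 0.995 (near 1 at the one-period lag)
```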
In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial-filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta-band (12-20 Hz) DMS-specific enhanced functional interaction between sources in the left auditory cortex and those in the left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.
Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies
Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.
Although abnormal auditory sensitivity is the most common sensory impairment associated with autism spectrum disorder (ASD), the neurophysiological mechanisms remain unknown. In previous studies, we reported that this abnormal sensitivity in patients with ASD is associated with delayed and prolonged responses in the auditory cortex. In the present study, we investigated alterations in residual M100 and mismatch fields (MMFs) in children with ASD who experience abnormal auditory sensitivity. We used magnetoencephalography (MEG) to measure MMFs elicited by an auditory oddball paradigm (standard tones: 300 Hz; deviant tones: 700 Hz) in 20 boys with ASD (11 with abnormal auditory sensitivity: mean age, 9.62 ± 1.82 years; 9 without: mean age, 9.07 ± 1.31 years) and 13 typically developing boys (mean age, 9.45 ± 1.51 years). We found that temporal and frontal residual M100/MMF latencies were significantly longer only in children with ASD who have abnormal auditory sensitivity. In addition, prolonged residual M100/MMF latencies were correlated with the severity of abnormal auditory sensitivity in temporal and frontal areas of both hemispheres. Therefore, our findings suggest that children with ASD and abnormal auditory sensitivity may have atypical neural networks in the primary auditory area, as well as in brain areas associated with attention switching and inhibitory control processing. This is the first report of an MEG study demonstrating altered MMFs to an auditory oddball paradigm in patients with ASD and abnormal auditory sensitivity. These findings contribute to knowledge of the mechanisms underlying abnormal auditory sensitivity in ASD, and may therefore facilitate development of novel clinical interventions.
Higgins, Nathan C.; Storace, Douglas A.; Escabí, Monty A.
Accurate orientation to sound under challenging conditions requires auditory cortex, but it is unclear how spatial attributes of the auditory scene are represented at this level. Current organization schemes follow a functional division whereby dorsal and ventral auditory cortices specialize to encode spatial and object features of a sound source, respectively. However, few studies have examined spatial cue sensitivities in ventral cortices to support or reject such schemes. Here, Fourier optical imaging was used to quantify best-frequency responses and corresponding gradient organization in primary (A1), anterior, posterior, ventral (VAF), and suprarhinal (SRAF) auditory fields of the rat. Spike rate sensitivities to binaural interaural level difference (ILD) and average binaural level cues were probed in A1 and two ventral cortices, VAF and SRAF. Continuous distributions of best ILDs and ILD tuning metrics were observed in all cortices, suggesting this horizontal position cue is well covered. VAF and caudal SRAF in the right cerebral hemisphere responded maximally to midline horizontal position cues, whereas A1 and rostral SRAF responded maximally to ILD cues favoring more eccentric positions in the contralateral sound hemifield. SRAF had the highest incidence of binaural facilitation for ILD cues corresponding to midline positions, supporting current theories that auditory cortices have specialized and hierarchical functional organization. PMID:20980610
Joshua R Gold
The brain displays a remarkable capacity for both widespread and region-specific modifications in response to environmental challenges, with adaptive processes bringing about the reweighting of connections in neural networks putatively required for optimising performance and behaviour. As an avenue for investigation, studies centred on changes in the mammalian auditory system, extending from the brainstem to the cortex, have revealed a plethora of mechanisms that operate in the context of sensory disruption after insult, be it lesion-, noise-trauma-, drug-, or age-related. Of particular interest in recent work are those aspects of auditory processing which, after sensory disruption, change at multiple, if not all, levels of the auditory hierarchy. These include changes in excitatory, inhibitory and neuromodulatory networks, consistent with theories of homeostatic plasticity; functional alterations in gene expression and in protein levels; as well as broader network processing effects with cognitive and behavioural implications. Nevertheless, substantial debate remains regarding which of these processes may only be sequelae of the original insult, and which may in fact be maladaptive, compelling further degradation of the organism's competence to cope with its disrupted sensory context. In this review, we aim to examine how the mammalian auditory system responds in the wake of particular insults, and to disambiguate how the changes that develop might underlie a correlated class of phantom disorders, including tinnitus and hyperacusis, which putatively are brought about through maladaptive neuroplastic disruptions to auditory networks governing the spatial and temporal processing of acoustic sensory information.
De Groof, Geert; Poirier, Colline; George, Isabelle; Hausberger, Martine; Van der Linden, Annemie
Songbirds are an excellent model for investigating the perception of learned complex acoustic communication signals. Male European starlings (Sturnus vulgaris) sing throughout the year distinct types of song that bear either social or individual information. Although the relative importance of social and individual information changes seasonally, evidence of functional seasonal changes in neural response to these songs remains elusive. We thus decided to use in vivo functional magnetic resonance imaging (fMRI) to examine auditory responses of male starlings that were exposed to songs that convey different levels of information (species-specific and group identity or individual identity), both during (when mate recognition is particularly important) and outside the breeding season (when group recognition is particularly important). We report three main findings: (1) the auditory area caudomedial nidopallium (NCM), an auditory region that is analogous to the mammalian auditory cortex, is clearly involved in the processing/categorization of conspecific songs; (2) season-related change in differential song processing is limited to a caudal part of NCM; in the more rostral parts, songs bearing individual information induce higher BOLD responses than songs bearing species and group information, regardless of the season; (3) the differentiation between songs bearing species and group information and songs bearing individual information seems to be biased toward the right hemisphere. This study provides evidence that auditory processing of behaviorally-relevant (conspecific) communication signals changes seasonally, even when the spectro-temporal properties of these signals do not change. PMID:24391561
Bedny, Marina; Richardson, Hilary; Saxe, Rebecca R.
Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital (“visual”) brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4–17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). ...
Peelle, Jonathan E
Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI.
Maria Neimark Geffen
Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale invariance across the overall spectro-temporal structure of a sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale invariance in perception by using an artificial sound that could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative temporal structure of the sound, and r, controlling its absolute temporal structure. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and to identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation, which may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measuring the response properties of neurons in the auditory pathway, allowing quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.
Gabay, Yafit; Dick, Frederic K; Zevin, Jason D; Holt, Lori L
Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. (c) 2015 APA, all rights reserved.
Kaya, Emine Merve; Elhilali, Mounya
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information, a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape, with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Mann, Philip H.; Suiter, Patricia A.
This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…
Araneda, Rodrigo; De Volder, Anne G; Deggouj, Naïma; Philippot, Pierre; Heeren, Alexandre; Lacroix, Emilie; Decat, Monique; Rombaux, Philippe; Renier, Laurent
Tinnitus is the perception of a sound in the absence of an external stimulus. Currently, the pathophysiology of tinnitus is not fully understood, but recent studies indicate that alterations in the brain involve non-auditory areas, including the prefrontal cortex. Here, we hypothesize that these brain alterations affect top-down cognitive control mechanisms that play a role in the regulation of sensations, emotions and attention resources. The efficiency of executive control as well as simple reaction speed and processing speed were evaluated in tinnitus participants (TP) and matched control subjects (CS) in both the auditory and the visual modalities using a spatial Stroop paradigm. TP were slower and less accurate than CS during both the auditory and the visual spatial Stroop tasks, while simple reaction speed and stimulus processing speed were affected in TP in the auditory modality only. Tinnitus is associated both with modality-specific deficits along the auditory processing system and with an impairment of cognitive control mechanisms that are involved in both vision and audition (i.e. that are supra-modal). We postulate that this deficit in top-down cognitive control is a key factor in the development and maintenance of tinnitus and may also explain some of the cognitive difficulties reported by tinnitus sufferers.
Previous studies have shown that sodium salicylate (SS) activates not only central auditory structures, but also nonauditory regions associated with emotion and memory. To identify electrophysiological changes in the nonauditory regions, we recorded sound-evoked local field potentials and multiunit discharges from the striatum, amygdala, hippocampus, and cingulate cortex after SS treatment. The SS treatment produced behavioral evidence of tinnitus and hyperacusis. Physiologically, the treatment significantly enhanced sound-evoked neural activity in the striatum, amygdala, and hippocampus, but not in the cingulate. The enhanced sound-evoked response could be linked to the hyperacusis-like behavior. Further analysis showed that the enhancement of sound-evoked activity occurred predominantly at the midfrequencies, likely reflecting shifts of neurons toward the midfrequency range after SS treatment, as observed in our previous studies in the auditory cortex and amygdala. The increased number of midfrequency neurons would lead to a relatively higher number of total spontaneous discharges in the midfrequency region, even though the mean discharge rate of each neuron may not increase. The tonotopic overactivity in the midfrequency region in quiet may potentially lead to a tonal sensation at midfrequency (the tinnitus). The neural changes in the amygdala and hippocampus may also contribute to the negative affect that patients associate with their tinnitus.
Vetter, Petra; Smith, Fraser W; Muckli, Lars
Human early visual cortex was traditionally thought to process simple visual features such as orientation, contrast, and spatial frequency via feedforward input from the lateral geniculate nucleus. However, the role of nonretinal influence on early visual cortex is so far insufficiently investigated despite much evidence that feedback connections greatly outnumber feedforward connections [2-5]. Here, we explored in five fMRI experiments how information originating from audition and imagery affects the brain activity patterns in early visual cortex in the absence of any feedforward visual stimulation. We show that category-specific information from both complex natural sounds and imagery can be read out from early visual cortex activity in blindfolded participants. The coding of nonretinal information in the activity patterns of early visual cortex is common across actual auditory perception and imagery and may be mediated by higher-level multisensory areas. Furthermore, this coding is robust to mild manipulations of attention and working memory but affected by orthogonal, cognitively demanding visuospatial processing. Crucially, the information fed down to early visual cortex is category specific and generalizes to sound exemplars of the same category, providing evidence for abstract information feedback rather than precise pictorial feedback. Our results suggest that early visual cortex receives nonretinal input from other brain areas when it is generated by auditory perception and/or imagery, and this input carries common abstract information. Our findings are compatible with feedback of predictive information to the earliest visual input level, in line with predictive coding models [7-10]. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Stress is a complex biological reaction common to all living organisms that allows them to adapt to their environments. Chronic stress alters the dendritic architecture and function of the limbic brain areas that affect memory, learning, and emotional processing. This review summarizes our research about chronic stress effects on the auditory system, providing the details of how we developed the main hypotheses that currently guide our research. The aims of our studies are to (1) determine how chronic stress impairs the dendritic morphology of the main nuclei of the rat auditory system, the inferior colliculus (auditory mesencephalon), the medial geniculate nucleus (auditory thalamus), and the primary auditory cortex; (2) correlate the anatomic alterations with the impairments of auditory fear learning; and (3) investigate how the stress-induced alterations in the rat limbic system may spread to nonlimbic areas, affecting specific sensory systems, such as the auditory and olfactory systems, and complex cognitive functions, such as auditory attention. Finally, this article gives a new evolutionary approach to understanding the neurobiology of stress and the stress-related disorders.
Larry E Roberts
Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To address this question, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by EEG are similar to those induced in age- and hearing-loss-matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5-kHz, 40-Hz amplitude-modulated sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and the P2 transient response, known to localize to primary and nonprimary auditory cortex, respectively. P2 amplitude increased with training equally in participants with tinnitus and in control subjects, suggesting normal remodeling of nonprimary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls, ASSR phase advanced toward the stimulus waveform by about ten degrees over training, in agreement with previous results obtained in young normal-hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal-hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not nonprimary auditory cortex. Auditory training did not reduce tinnitus loudness, although a small effect on the tinnitus spectrum was detected.
The orbitofrontal cortex is associated with encoding the significance of stimuli within an emotional context, and its connections can be understood in this light. This large cortical region is architectonically heterogeneous, but its connections and functions can be summarized by a broad grouping of areas by cortical type into posterior and anterior sectors. The posterior (limbic) orbitofrontal region is composed of agranular and dysgranular-type cortices and has unique connections with primary olfactory areas and rich connections with high-order sensory association cortices. Posterior orbitofrontal areas are further distinguished by dense and distinct patterns of connections with the amygdala and memory-related anterior temporal lobe structures that may convey signals about emotional import and their memory. The special sets of connections suggest that the posterior orbitofrontal cortex is the primary region for the perception of emotions. In contrast to orbitofrontal areas, posterior medial prefrontal areas in the anterior cingulate are not multi-modal, but have strong connections with auditory association cortices, brain stem vocalization, and autonomic structures, in pathways that may mediate emotional communication and autonomic activation in emotional arousal. Posterior orbitofrontal areas communicate with anterior orbitofrontal areas and, through feedback projections, with lateral prefrontal and other cortices, suggesting a sequence of information processing for emotions. Pathology in orbitofrontal cortex may remove feedback input to sensory cortices, dissociating emotional context from sensory content and impairing the ability to interpret events.
Ohl, Frank W
Rhythmic activity appears in the auditory cortex in both microscopic and macroscopic observables and is modulated by both bottom-up and top-down processes. How this activity serves both types of processes is largely unknown. Here we review studies that have recently improved our understanding of potential functional roles of large-scale global dynamic activity patterns in auditory cortex. The experimental paradigm of auditory category learning allowed critical testing of the hypothesis that global auditory cortical activity states are associated with endogenous cognitive states mediating the meaning associated with an acoustic stimulus rather than with activity states that merely represent the stimulus for further processing. Copyright © 2014. Published by Elsevier Ltd.
Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses, we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements, and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences, and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states; primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep; and the high-order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language-processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high-order processing stations.
Jonathan Murray Lovell
The responses of neurons in the Red Nucleus pars magnocellularis (RNm) to both tone bursts and electrical stimulation were observed in three cynomolgus monkeys (Macaca fascicularis), in a series of studies primarily designed to characterise the influence of the dopaminergic ventral midbrain on auditory processing. Compared to its role in motor behaviour, little is known about the sensory response properties of neurons in the red nucleus, particularly those concerning the auditory modality. Sites in the RN were recognised by observing electrically evoked body movements characteristic of this deep brain structure. In this study we applied brief monopolar electrical stimulation to 118 deep brain sites at a maximum intensity of 200 µA, thus evoking minimal body movements. Auditory sensitivity of RN neurons was analysed more thoroughly at 15 sites, with the majority exhibiting broad tuning curves and phase locking up to 1.03 kHz. Since the RN appears to receive inputs from a very early stage of the ascending auditory system, our results suggest that sounds can modify the motor control exerted by this brain nucleus. At selected locations, we also tested for the presence of functional connections between the RN and the auditory cortex by inserting additional microelectrodes into the auditory cortex and investigating how action potentials and local field potentials were affected by electrical stimulation of the RN.
Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4 to 7 months of age, confers a processing advantage compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4–6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma-band (33–37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.
Goll, Johanna C.; Kim, Lois G.; Hailstone, Julia C.; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J.; Warren, Jason D.
The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671
Chen, Joyce L; Penhune, Virginia B; Zatorre, Robert J
Much is known about the motor system and its role in simple movement execution. However, little is understood about the neural systems underlying auditory-motor integration in the context of musical rhythm, or the enhanced ability of musicians to execute precisely timed sequences. Using functional magnetic resonance imaging, we investigated how performance and neural activity were modulated as musicians and nonmusicians tapped in synchrony with progressively more complex and less metrically structured auditory rhythms. A functionally connected network was implicated in extracting higher-order features of a rhythm's temporal structure, with the dorsal premotor cortex mediating these auditory-motor interactions. In contrast to past studies, musicians recruited the prefrontal cortex to a greater degree than nonmusicians, whereas secondary motor regions were recruited to the same extent. We argue that the superior ability of musicians to deconstruct and organize a rhythm's temporal structure relates to the greater involvement of the prefrontal cortex mediating working memory.
Skoe, Erika; Kraus, Nina
Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence o...
Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis
Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats.
Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H
Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.
Full Text Available Abstract Background Schizophrenia is a chronic and disabling disease that presents with delusions and hallucinations. Auditory hallucinations are usually expressed as voices speaking to or about the patient. Previous studies have examined the effect of repetitive transcranial magnetic stimulation (TMS) over the temporoparietal cortex on auditory hallucinations in schizophrenic patients. Our aim was to explore the potential effect of deep TMS, using the H coil over the same brain region, on auditory hallucinations. Patients and methods Eight schizophrenic patients with refractory auditory hallucinations were recruited, mainly from the ambulatory clinics of the Beer Ya'akov Mental Health Institution (Tel Aviv University, Israel), as well as from other hospitals' outpatient populations. Low-frequency deep TMS was applied for 10 min (600 pulses per session) to the left temporoparietal cortex for either 10 or 20 sessions. Deep TMS was applied using Brainsway's H1 coil apparatus. Patients were evaluated using the Auditory Hallucinations Rating Scale (AHRS), as well as the Scale for the Assessment of Positive Symptoms (SAPS), the Clinical Global Impressions (CGI) scale, and the Scale for Assessment of Negative Symptoms (SANS). Results This preliminary study demonstrated a significant improvement in AHRS score (an average reduction of 31.7% ± 32.2%) and, to a lesser extent, improvement in SAPS results (an average reduction of 16.5% ± 20.3%). Conclusions In this study, we have demonstrated the potential of deep TMS treatment over the temporoparietal cortex as an add-on treatment for chronic auditory hallucinations in schizophrenic patients. Larger studies with a double-blind sham-controlled design are now being performed to evaluate the effectiveness of deep TMS treatment for auditory hallucinations. Trial registration This trial is registered with clinicaltrials.gov (identifier: NCT00564096).
Behler, Oliver; Uppenkamp, Stefan
Loudness is the perceptual correlate of the physical intensity of a sound. However, loudness judgments depend on a variety of other variables and can vary considerably between individual listeners. While functional magnetic resonance imaging (fMRI) has been extensively used to characterize the neural representation of physical sound intensity in the human auditory system, only a few studies have also investigated brain activity in relation to individual loudness. The physiological correlate of loudness perception is not yet fully understood. The present study systematically explored the interrelation of sound pressure level, ear of entry, individual loudness judgments, and fMRI activation along different stages of the central auditory system and across hemispheres for a group of normal hearing listeners. 4-kHz-bandpass filtered noise stimuli were presented monaurally to each ear at levels from 37 to 97 dB SPL. One diotic condition and a silence condition were included as control conditions. The participants completed a categorical loudness scaling procedure with similar stimuli before auditory fMRI was performed. The relationship between brain activity, as inferred from blood oxygenation level dependent (BOLD) contrasts, and both sound level and loudness estimates was analyzed by means of functional activation maps and linear mixed effects models for various anatomically defined regions of interest in the ascending auditory pathway and in the cortex. Our findings are overall in line with the notion that fMRI activation in several regions within auditory cortex, as well as in certain stages of the ascending auditory pathway, might be a direct linear reflection of perceived loudness rather than of sound pressure level. The results indicate distinct functional differences between midbrain and cortical areas as well as between specific regions within auditory cortex, suggesting a systematic hierarchy in terms of lateralization and the representation of level and
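The comparison at the heart of this study (is BOLD amplitude better explained by sound level or by judged loudness?) can be illustrated with a toy simulation: if the response tracks loudness, and loudness is a nonlinear function of level, a linear fit against loudness judgments should explain more variance than a linear fit against level. The loudness function, its parameters, and the simulated responses below are illustrative assumptions, not values from the study.

```python
import numpy as np

def loudness_cu(level_db, threshold_db=37.0, max_cu=50.0):
    """Toy categorical-loudness function (0-50 categorical units, CU).
    The quadratic shape and parameters are illustrative assumptions."""
    x = np.clip((level_db - threshold_db) / 60.0, 0.0, 1.0)
    return max_cu * x ** 2

def r_squared(x, y):
    """Variance in y explained by an ordinary linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
levels = np.arange(37.0, 98.0, 6.0)       # 37..97 dB SPL, as in the study
loudness = loudness_cu(levels)            # simulated loudness judgments
bold = 0.1 * loudness + rng.normal(0.0, 0.2, levels.size)  # simulated BOLD

r2_level = r_squared(levels, bold)        # sound level as predictor
r2_loudness = r_squared(loudness, bold)   # loudness judgment as predictor
```

With these assumptions the loudness fit explains more variance than the level fit; if loudness were a linear function of level, the two fits would be indistinguishable, which is why the study's distinction is informative.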
Mingote, Susana; de Bruin, Jan P. C.; Feenstra, Matthijs G. P.
We trained rats to learn that an auditory stimulus predicted delivery of reward pellets in the Skinner box. After 2 d of training, we measured changes in efflux of noradrenaline (NA) and dopamine (DA) in the medial prefrontal cortex using microdialysis on the third day. Animals were subjected to a
Azmitia, E. C.; Saccomano, Z. T.; Alzoobaee, M. F.; Boldrini, M.; Whitaker-Azmitia, P. M.
In the current work, we conducted an immunocytochemical search for markers of ongoing neurogenesis (e.g. nestin) in auditory cortex from postmortem sections of autism spectrum disorder (ASD) and age-matched control donors. We found nestin labeling in cells of the vascular system, indicating blood vessel plasticity. Evidence of angiogenesis was…
Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians had been superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant
Diedler, Jennifer; Pietz, Joachim; Brunner, Monika; Hornberger, Cornelia; Bast, Thomas; Rupp, André
We examined basic auditory temporal processing in children with language-based learning problems (LPs) using magnetoencephalography. Auditory-evoked fields of 43 children (27 LP, 16 controls) were recorded while they passively listened to 100-ms white noise bursts with temporal gaps of 3, 6, 10 and 30 ms inserted after 5 or 50 ms. The P1m was evaluated by spatio-temporal source analysis. Psychophysical gap-detection thresholds were obtained for the same participants. Thirty-two percent of the LP children were not able to perform the early gap psychoacoustic task. In addition, LP children displayed a significant delay of the P1m during the early gap task. These findings provide evidence for a diminished neuronal representation of short auditory stimuli in the primary auditory cortex of LP children.
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
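The shift described here (intensity threshold unchanged below a critical noise level, then rising linearly with noise level) amounts to a simple piecewise-linear rule. A minimal sketch follows, in which the quiet threshold, critical level, and unity slope are illustrative assumptions rather than fitted values from the recordings:

```python
def tone_threshold_db(noise_db, quiet_threshold_db=20.0,
                      critical_db=30.0, slope_db_per_db=1.0):
    """Minimum tone level (dB SPL) evoking a response under continuous
    wideband background noise. Below the critical noise level the
    threshold equals the quiet threshold; above it, the threshold rises
    linearly with noise level, translating the tonal receptive field
    upward along the intensity axis without changing its shape or CF."""
    return quiet_threshold_db + slope_db_per_db * max(0.0, noise_db - critical_db)
```

Under these assumed parameters, noise at or below 30 dB leaves the threshold at the quiet value of 20 dB, while 50 dB noise raises it to 40 dB, equivalent to reducing stimulus intensity by 20 dB, as the abstract's final sentence suggests.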
Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony
It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…
Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.
Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (e.g., through personal music players). Our understanding of molecular
The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…
Bottari, Davide; Kekunnaya, Ramesh; Hense, Marlene; Troje, Nikolaus F; Sourav, Suddha; Röder, Brigitte
The present study tested whether functional adaptations following congenital blindness are maintained in humans after sight restoration and whether they interfere with visual recovery. In permanently congenitally blind individuals, both intramodal plasticity (e.g. changes in auditory cortex) and crossmodal plasticity (e.g. an activation of visual cortex by auditory stimuli) have been observed. Both phenomena were hypothesized to contribute to improved auditory functions. For example, it has been shown that early permanently blind individuals outperform sighted controls in auditory motion processing and that auditory motion stimuli elicit activity in typical visual motion areas. Yet it is unknown what happens to these behavioral adaptations and cortical reorganizations when sight is restored, that is, whether compensatory auditory changes are lost and to what degree visual motion processing is reinstated. Here we employed a combined behavioral-electrophysiological approach in a group of sight-recovery individuals with a history of a transient phase of congenital blindness lasting from several months to several years. They, as well as two control groups, one with visual impairments and one normally sighted, were tested in a visual and an auditory motion discrimination experiment. Task difficulty was manipulated by varying the visual motion coherence and the signal-to-noise ratio, respectively. The congenital cataract-reversal individuals showed lower performance in the visual global motion task than both control groups. At the same time, they outperformed both control groups in auditory motion processing, suggesting that at least some compensatory behavioral adaptation as a consequence of a complete blindness from birth was maintained. Alpha oscillatory activity during the visual task was significantly lower in congenital cataract-reversal individuals, and they did not show ERPs modulated by visual motion coherence as observed in both control groups. In
Full Text Available In this article we present a review of current literature on adaptations to altered head-related auditory localization cues. Localization cues can be altered through ear blocks, ear molds, electronic hearing devices and altered head-related transfer functions. Three main methods have been used to induce auditory space adaptation: sound exposure, training with feedback, and explicit training. Adaptations induced by training, rather than exposure, are consistently faster. Studies on localization with altered head-related cues have reported poor initial localization, but improved accuracy and discriminability with training. Also, studies that displaced the auditory space by altering cue values reported adaptations in perceived source position to compensate for such displacements. Auditory space adaptations can last for a few months even without further contact with the learned cues. In most studies, localization with the subject’s own unaltered cues remained intact despite the adaptation to a second set of cues. Generalization is observed from trained to untrained sound source positions, but there is mixed evidence regarding cross-frequency generalization. Multiple brain areas might be involved in auditory space adaptation processes, but the auditory cortex may play a critical role. Auditory space plasticity may involve context-dependent cue reweighting.
Full Text Available BACKGROUND: Tinnitus refers to auditory phantom sensation. It is estimated that for 2% of the population this auditory phantom percept severely affects the quality of life, due to tinnitus-related distress. Although overall distress levels do not differ between the sexes in tinnitus, females are more influenced by distress than males. Typically, pain, sleep, and depression are perceived as significantly more severe by female tinnitus patients. Studies on gender differences in emotional regulation indicate that females with high depressive symptoms show greater attention to emotion, and use less anti-rumination emotional repair strategies than males. METHODOLOGY: The objective of this study was to verify whether the activity and connectivity of the resting brain is different for male and female tinnitus patients using resting-state EEG. CONCLUSIONS: Females had a higher mean score than male tinnitus patients on the BDI-II. Female tinnitus patients differ from male tinnitus patients in the orbitofrontal cortex (OFC), extending to the frontopolar cortex, in beta1 and beta2. The OFC is important for emotional processing of sounds. Increased functional alpha connectivity is found between the OFC, insula, subgenual anterior cingulate (sgACC), parahippocampal (PHC) areas and the auditory cortex in females. Our data suggest increased functional connectivity that binds tinnitus-related auditory cortex activity to auditory emotion-related areas via the PHC-sgACC connections, resulting in a more depressive state even though the tinnitus intensity and tinnitus-related distress are not different from men. Comparing male tinnitus patients to a control group of males, significant differences could be found for beta3 in the posterior cingulate cortex (PCC). The PCC might be related to cognitive and memory-related aspects of the tinnitus percept. Our results propose that sex influences in tinnitus research cannot be ignored and should be taken into account in functional
J Gordon Millichap
Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.
Lévêque, Yohana; Schön, Daniele
Several studies on action observation have shown that the biological dimension of movement modulates sensorimotor interactions in perception. In the present fMRI study, we tested the hypothesis that the biological dimension of sound modulates the involvement of the motor system in human auditory perception, using musical tasks. We first localized the vocal motor cortex in each participant. Then we compared the BOLD response to vocal, semi-vocal and non-vocal melody perception, and found greater activity for voice perception in the right sensorimotor cortex. We additionally ran a psychophysiological interaction analysis with the right sensorimotor cortex as the seed, showing that the vocal dimension of the stimuli enhanced the connectivity between the seed region and other important nodes of the auditory dorsal stream. Finally, the participants' vocal ability was negatively correlated with the voice effect in the inferior parietal lobule. These results suggest that the biological dimension of the singing voice impacts activity within the auditory dorsal stream, probably via a facilitated matching between the perceived sound and the participant's motor representations. Copyright © 2015 Elsevier Ltd. All rights reserved.
FitzGerald, Thomas H B; Friston, Karl J; Dolan, Raymond J
Reward outcome signalling in the sensory cortex is held to be important for linking stimuli to their consequences and for modulating perceptual learning in response to incentives. Evidence for reward outcome signalling has been found in sensory regions including the visual, auditory and somatosensory cortices across a range of different paradigms, but it is unknown whether the population of neurons signalling rewarding outcomes is the same as that processing predictive stimuli. We addressed this question using a multivariate analysis of high-resolution functional magnetic resonance imaging (fMRI), in a task where subjects were engaged in instrumental learning with visual predictive cues and auditory-signalled reward feedback. We found evidence that outcome signals in sensory regions localise to the same areas involved in stimulus processing. These outcome signals are non-specific, and we show that the neuronal populations involved in stimulus representation are not their exclusive target, in keeping with theoretical models of value learning. Thus, our results reveal one likely mechanism through which rewarding outcomes are linked to predictive sensory stimuli, a link that may be key for both reward and perceptual learning. © 2013.
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control, as well as in entertainment roles, is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
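For the single dimension of azimuth, the stability computation described in this review can be sketched as bookkeeping between head-centered cues and a world-centered percept: the system combines the head-relative direction given by binaural cues with head orientation, and attributes motion to the source only when the world-frame direction changes. The yaw-only simplification, function names, and tolerance below are assumptions for illustration, not a model from the review.

```python
def world_azimuth_deg(head_relative_deg, head_yaw_deg):
    """Recover the world-frame source azimuth from the head-relative
    azimuth (what binaural cues report) and the current head yaw.
    Angles wrap into (-180, 180]."""
    a = head_relative_deg + head_yaw_deg
    return (a + 180.0) % 360.0 - 180.0

def source_moved(rel_before, yaw_before, rel_after, yaw_after, tol=1.0):
    """A stable world percept: the source is judged to move only if its
    world-frame azimuth changes beyond tolerance, even though the
    head-relative azimuth changes with every head turn."""
    before = world_azimuth_deg(rel_before, yaw_before)
    after = world_azimuth_deg(rel_after, yaw_after)
    return abs(after - before) > tol
```

For a stationary source at 30° while the head turns 20° to the left, the head-relative azimuth shifts from 30° to 10°, yet the world-frame azimuth, and hence the percept, stays put; only a genuine change in world-frame azimuth is reported as source motion.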
Full Text Available Species-specific vocalizations in mice have frequency-modulated (FM) components slower than the lower limit of FM direction selectivity in the core region of the mouse auditory cortex. To identify cortical areas selective to slow frequency modulation, we investigated tonal responses in the mouse auditory cortex using transcranial flavoprotein fluorescence imaging. To differentiate responses to frequency modulation from those to stimuli at constant frequencies, we focused on transient fluorescence changes after direction reversal of temporally repeated and superimposed FM sweeps. We found that the ultrasonic field (UF) in the belt cortical region selectively responded to the direction reversal. The dorsoposterior field (DP) also responded weakly to the reversal. Regarding the responses in UF, no apparent tonotopic map was found, and the right UF responses were significantly larger in amplitude than the left UF responses. The half-max latency in responses to FM sweeps was shorter in UF compared with that in the primary auditory cortex (A1) or the anterior auditory field (AAF). Tracer injection experiments in the functionally identified UF and DP confirmed that these two areas receive afferent inputs from the dorsal part of the medial geniculate nucleus (MG). Calcium imaging of UF neurons stained with fura-2 was performed using a two-photon microscope, and the presence of UF neurons selective to both the direction and the direction reversal of slow frequency modulation was demonstrated. These results strongly suggest a role for UF, and possibly DP, as cortical areas specialized for processing slow frequency modulation in mice.
Hall, J; Hubbard, A; Neely, S; Tubis, A
How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...
Full Text Available For humans and animals, the ability to discriminate speech and conspecific vocalizations is an important physiological assignment of the auditory system. To reveal the underlying neural mechanism, many electrophysiological studies have investigated the neural responses of the auditory cortex to conspecific vocalizations in monkeys. The data suggest that vocalizations may be hierarchically processed along an anterior/ventral stream from the primary auditory cortex (A1) to the ventral prefrontal cortex. To date, the organization of vocalization processing has not been well investigated in the auditory cortex of other mammals. In this study, we examined the spike activities of single neurons in two early auditory cortical regions with different anteroposterior locations: the anterior auditory field (AAF) and the posterior auditory field (PAF) in awake cats, as the animals passively listened to forward and backward conspecific calls (meows) and human vowels. We found that the neural response patterns in PAF were more complex and had longer latency than those in AAF. The selectivity for different vocalizations based on mean firing rate was low in both AAF and PAF, and not significantly different between them; however, more vocalization information was transmitted when the temporal response profiles were considered, and the maximum information transmitted by PAF neurons was higher than that by AAF neurons. Discrimination accuracy based on the activities of an ensemble of PAF neurons was also better than that of AAF neurons. Our results suggest that AAF and PAF are similar with regard to which vocalizations they represent but differ in the way they represent these vocalizations, and there may be a complex processing stream between them.
Fujiwara, Katsuo; Kunita, Kenji; Kiyota, Naoe; Mammadova, Aida; Irei, Mariko
A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in the related sensory cortices. Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, Cz in accordance with the international 10-20 system, and 2 cm posterior from C4, during visual, auditory and somatosensory stimulations. The oxidative-hemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased when the neck was in a flexed position. Oxidative-hemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position. The left visual cortex is responsible for receiving the visual information. In addition, oxidative-hemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities in sensory projection to the cerebral cortex and inter-hemispheric connections.
Patel, Aniruddh D; Iversen, John R
Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This "action simulation for auditory prediction" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.
Dana L. Strait
Full Text Available Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not.
Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina
Coullon, Gaelle S L; Emir, Uzay E; Fine, Ione; Watkins, Kate E; Bridge, Holly
Congenital blindness leads to large-scale functional and structural reorganization in the occipital cortex, but relatively little is known about the neurochemical changes underlying this cross-modal plasticity. To investigate the effect of complete and early visual deafferentation on the concentration of metabolites in the pericalcarine cortex, (1)H magnetic resonance spectroscopy was performed in 14 sighted subjects and 5 subjects with bilateral anophthalmia, a condition in which both eyes fail to develop. In the pericalcarine cortex, where primary visual cortex is normally located, the proportion of gray matter was significantly greater, and levels of choline, glutamate, glutamine, myo-inositol, and total creatine were elevated in anophthalmic relative to sighted subjects. Anophthalmia had no effect on the structure or neurochemistry of a sensorimotor cortex control region. More gray matter, combined with high levels of choline and myo-inositol, resembles the profile of the cortex at birth and suggests that the lack of visual input from the eyes might have delayed or arrested the maturation of this cortical region. High levels of choline and glutamate/glutamine are consistent with enhanced excitatory circuits in the anophthalmic occipital cortex, which could reflect a shift toward enhanced plasticity or sensitivity that could in turn mediate or unmask cross-modal responses. Finally, it is possible that the change in function of the occipital cortex results in biochemical profiles that resemble those of auditory, language, or somatosensory cortex.
A young man with chronic auditory hallucinations was treated according to the principle that increasing external auditory stimulation decreases the likelihood of auditory hallucinations. Listening to a radio through stereo headphones in conditions of low auditory stimulation eliminated the patient's hallucinations.
Gröschel, Moritz; Hubert, Nikolai; Müller, Susanne; Ernst, Arne; Basta, Dietmar
Age-related hearing loss (ARHL) represents one of the most common chronic health problems facing an aging population. In the peripheral auditory system, aging is accompanied by functional loss or degeneration of sensory as well as non-sensory tissue. It has recently been described that besides the degeneration of cochlear structures, the central auditory system is also involved in ARHL. Although mechanisms of central presbycusis are not well understood, previous animal studies have reported some signs of central neurodegeneration in the lower auditory pathway. Moreover, changes in neurophysiology are indicated by alterations in synaptic transmission. In particular, neurotransmission and spontaneous neuronal activity appear to be affected in aging animals. Therefore, it was the aim of the present study to determine the neuronal activity within the central auditory pathway in aging mice over their whole lifespan compared to a control group (young adult animals, ~3 months of age) using the non-invasive manganese-enhanced MRI technique. MRI signal strength showed a comparable pattern in most investigated auditory brain areas. An increase in activity was particularly pronounced in the middle-aged groups (13 or 18 months), with the largest effect in the dorsal and ventral cochlear nucleus. In higher auditory structures, namely the inferior colliculus, medial geniculate body and auditory cortex, the enhancement was much less pronounced, while a decrease was detected in the superior olivary complex. Interestingly, calcium-dependent activity returned to control levels in the oldest animals (22 months) in the cochlear nucleus and was significantly reduced in higher auditory structures. A similar pattern was observed in the hippocampus. The observed changes might be related to central neuroplasticity (including hyperactivity) as well as neurodegenerative mechanisms, and represent central nervous correlates of the age-related decline in auditory processing and perception.
Zhang, Jinsheng; Luo, Hao; Pace, Edward; Li, Liang; Liu, Bin
Tinnitus, a ringing in the ear or head without an external sound source, is a prevalent health problem. It is often associated with a number of limbic-associated disorders such as anxiety, sleep disturbance, and emotional distress. Thus, to investigate tinnitus, it is important to consider both auditory and non-auditory brain structures. This paper summarizes the psychophysical, immunocytochemical and electrophysiological evidence found in rats or hamsters with behavioral evidence of tinnitus. Behaviorally, we tested for tinnitus using a conditioned suppression/avoidance paradigm, gap detection acoustic reflex behavioral paradigm, and our newly developed conditioned licking suppression paradigm. Our new tinnitus behavioral paradigm requires relatively short baseline training, examines frequency specification of tinnitus perception, and achieves sensitive tinnitus testing at an individual level. To test for tinnitus-related anxiety and cognitive impairment, we used the elevated plus maze and Morris water maze. Our results showed that not all animals with tinnitus demonstrate anxiety and cognitive impairment. Immunocytochemically, we found that animals with tinnitus manifested increased Fos-like immunoreactivity (FLI) in both auditory and non-auditory structures. The manner in which FLI appeared suggests that lower brainstem structures may be involved in acute tinnitus whereas the midbrain and cortex are involved in more chronic tinnitus. Meanwhile, animals with tinnitus also manifested increased FLI in non-auditory brain structures that are involved in autonomic reactions, stress, arousal and attention. Electrophysiologically, we found that rats with tinnitus developed increased spontaneous firing in the auditory cortex (AC) and amygdala (AMG), as well as intra- and inter-AC and AMG neurosynchrony, which demonstrate that tinnitus may be actively produced and maintained by the interactions between the AC and AMG.
Aznar, Susana; Klein, Anders Bue
is highly expressed in the prefrontal cortex areas, playing an important role in modulating cortical activity and neural oscillations (brain waves). This makes it an interesting potential pharmacological target for the treatment of neuropsychiatric disorders characterized by a lack of inhibitory control...
Abstract Background About 25% of schizophrenia patients with auditory hallucinations are refractory to pharmacotherapy and electroconvulsive therapy. We conducted a deep transcranial magnetic stimulation (TMS) pilot study in order to evaluate the potential clinical benefit of repeated left temporoparietal cortex stimulation in these patients. The results were encouraging, but a sham-controlled study was needed to rule out a placebo effect. Methods A total of 18 schizophrenic patients with refractory auditory hallucinations were recruited from Beer Yaakov MHC and other hospitals' outpatient populations. Patients received 10 daily treatment sessions with low-frequency (1 Hz for 10 min) deep TMS applied over the left temporoparietal cortex, using the H1 coil at the intensity of 110% of the motor threshold. The procedure was either real or sham according to patient randomization. Patients were evaluated via the Auditory Hallucinations Rating Scale, the Scales for the Assessment of Positive and Negative Symptoms, Clinical Global Impressions, and the Quality of Life Questionnaire. Results In all, 10 patients completed the treatment (10 TMS sessions). Auditory hallucination scores of both groups improved; however, there was no statistical difference in any of the scales between the active and the sham treated groups. Conclusions Low-frequency deep TMS to the left temporoparietal cortex using the protocol mentioned above has no statistically significant effect on auditory hallucinations or the other clinical scales measured in schizophrenic patients. Trial Registration Clinicaltrials.gov identifier: NCT00564096.
Jackson, Thomas E; Sandramouli, Soupramanien
Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.
Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr
The aim of this case report was to promote reflection on the importance of speech therapy for stimulating a person with a learning disability associated with language and auditory processing disorders. Data analysis compared the auditory-ability deficits identified in the first auditory processing test, held on April 30, 2002, with a new auditory processing test done on May 13, 2003, after one year of therapy directed at acoustic stimulation of the impaired auditory abilities, in acco...
Ghazaleh, Naghmeh; Van der Zwaag, W.; Clarke, Stephanie; Ville, Dimitri Van De; Maire, Raphael; Saenz, Melissa
Animal models of hearing loss and tinnitus observe pathological neural activity in the tonotopic frequency maps of the primary auditory cortex. Here, we applied ultra high-field fMRI at 7 T to test whether human patients with unilateral hearing loss and tinnitus also show altered functional activity
Meyer, Martin; Elmer, Stefan; Baumann, Simon; Jancke, Lutz
In this EEG study we sought to examine the neuronal underpinnings of short-term plasticity as a top-down guided auditory learning process. We hypothesized that (i) auditory imagery should elicit proper auditory evoked effects (N1/P2 complex) and a late positive component (LPC). Generally, based on recent human brain mapping studies we expected (ii) to observe the involvement of different temporal and parietal lobe areas in imagery and in perception of acoustic stimuli. Furthermore, we predicted (iii) that temporal regions show an asymmetric trend due to the different specialization of the temporal lobes in processing speech and non-speech sounds. Finally, we sought evidence supporting the notion that short-term training is sufficient to drive top-down activity in brain regions that are not normally recruited by sensory-induced bottom-up processing. Eighteen non-musicians took part in a 30-channel EEG session that investigated the spatio-temporal dynamics of auditory imagery of "consonant-vowel" (CV) syllables and piano triads. To control for conditioning effects, we split the volunteers into two matched groups comprising the same conditions (visual, auditory or bimodal stimulation) presented in a slightly different serial order. Furthermore, the study presents electromagnetic source localization (LORETA) of perception and imagery of CV and piano stimuli. Our results imply that auditory imagery elicited similar electrophysiological effects at an early stage (N1/P2) as auditory stimulation. However, we found an additional LPC following the N1/P2 for auditory imagery only. Source estimation evinced bilateral engagement of anterior temporal cortex, which was generally stronger for imagery of music relative to imagery of speech. While we did not observe lateralized activity for the imagery of syllables, we noted significantly increased rightward activation over the anterior supratemporal plane for musical imagery. Thus, we conclude that short-term top-down training based
Kaminska, A; Delattre, V; Laschet, J; Dubois, J; Labidurie, M; Duval, A; Manresa, A; Magny, J-F; Hovhannisyan, S; Mokhtari, M; Ouss, L; Boissel, A; Hertz-Pannier, L; Sintsov, M; Minlebaev, M; Khazipov, R; Chiron, C
Characteristic preterm EEG patterns of "Delta-brushes" (DBs) have been reported in the temporal cortex following auditory stimuli, but their spatio-temporal dynamics remain elusive. Using 32-electrode EEG recordings and co-registration of electrode positions to 3D-MRI of age-matched neonates, we explored the cortical auditory-evoked responses (AERs) after 'click' stimuli in 30 healthy neonates aged 30-38 post-menstrual weeks (PMW). (1) We visually identified auditory-evoked DBs within AERs in all the babies between 30 and 33 PMW and a decreasing response rate afterwards. (2) The AERs showed an increase in EEG power from delta to gamma frequency bands over the middle and posterior temporal regions, with higher values in quiet sleep and on the right. (3) Time-frequency and averaging analyses showed that the delta component of DBs, which negatively peaked around 550 and 750 ms over the middle and posterior temporal regions, respectively, was superimposed with fast (alpha-gamma) oscillations and corresponded to the late part of the cortical auditory-evoked potential (CAEP), a feature missed when using classical CAEP processing. As the evoked-DB rate and AER delta-to-alpha frequency power decreased until full term, auditory-evoked DBs are thus associated with the prenatal development of auditory processing and may suggest an early emerging hemispheric specialization.
Suppiej, Agnese; Cai