WorldWideScience

Sample records for auditory neural maturation

  1. Maturation of auditory neural processes in autism spectrum disorder — A longitudinal MEG study

    Directory of Open Access Journals (Sweden)

    Russell G. Port

    2016-01-01

    Conclusions: Children with ASD showed perturbed auditory cortex neural activity, as evidenced by M100 latency delays as well as reduced transient gamma-band activity. Despite evidence for maturation of these responses in ASD, the neural abnormalities in ASD persisted across time. Of note, data from the five children who demonstrated “optimal outcome” qualitatively suggest that such clinical improvements may be associated with auditory brain responses intermediate between TD and ASD. These “optimal outcome” findings did not reach statistical significance, however, likely because of the small size of this cohort, which is expected given the relatively low proportion of “optimal outcome” cases in the ASD population. Thus, further investigations with larger cohorts are needed to determine whether the above auditory response phenotypes have prognostic utility in predicting clinical outcome.

  2. Latent iron deficiency at birth influences auditory neural maturation in late preterm and term infants.

    Science.gov (United States)

    Choudhury, Vivek; Amin, Sanjiv B; Agarwal, Asha; Srivastava, L M; Soni, Arun; Saluja, Satish

    2015-11-01

    In utero latent iron deficiency has been associated with abnormal neurodevelopmental outcomes during childhood. Its concomitant effect on auditory neural maturation has not been well studied in late preterm and term infants. The objective was to determine whether in utero iron status is associated with auditory neural maturation in late preterm and term infants. This prospective cohort study was performed at Sir Ganga Ram Hospital, New Delhi, India. Infants with a gestational age ≥34 wk were eligible unless they met the exclusion criteria: craniofacial anomalies, chromosomal disorders, hemolytic disease, multiple gestation, third-trimester maternal infection, chorioamnionitis, toxoplasmosis, other infections, rubella, cytomegalovirus, and herpes simplex virus infections (TORCH), or a low Apgar score. Infants were classified by cord serum ferritin concentration (cutoff: 75 ng/mL) as having latent iron deficiency or normal iron status at birth. Twenty-three infants had latent iron deficiency. Infants with latent iron deficiency had significantly prolonged wave V latencies (7.10 ± 0.68 compared with 6.60 ± 0.66 ms), III-V interpeak latencies (2.37 ± 0.64 compared with 2.07 ± 0.33 ms), and I-V interpeak latencies (5.10 ± 0.57 compared with 4.72 ± 0.56 ms) compared with infants with normal iron status. These findings suggest that in utero latent iron deficiency adversely affects auditory neural maturation in infants born at ≥34 wk gestational age. This trial was registered at clinicaltrials.gov as NCT02503397. © 2015 American Society for Nutrition.
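    The group comparison above amounts to a two-sample test on wave V latencies. Below is a minimal sketch of such a test from the quoted summary statistics; Welch's t-test and the control-group size are assumptions (only the 23 infants with latent iron deficiency are stated in this record).

```python
# Minimal sketch: two-sample t-test on ABR wave V latencies from summary
# statistics (mean +/- SD). Group sizes are assumptions for illustration:
# 23 latent-iron-deficiency infants (stated) and a hypothetical 40 controls.
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=7.10, std1=0.68, nobs1=23,   # latent iron deficiency group
    mean2=6.60, std2=0.66, nobs2=40,   # normal iron status group (size assumed)
    equal_var=False,                   # Welch's t-test
)
print(f"wave V latency: t = {t:.2f}, p = {p:.4f}")
```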

  3. Maturation of auditory neural processes in autism spectrum disorder - A longitudinal MEG study.

    Science.gov (United States)

    Port, Russell G; Edgar, J Christopher; Ku, Matthew; Bloy, Luke; Murray, Rebecca; Blaskey, Lisa; Levy, Susan E; Roberts, Timothy P L

    2016-01-01

    Individuals with autism spectrum disorder (ASD) show atypical brain activity, perhaps due to delayed maturation. Previous studies examining the maturation of auditory electrophysiological activity have been limited by their use of cross-sectional designs. The present study took a first step in examining magnetoencephalography (MEG) evidence of abnormal auditory response maturation in ASD via the use of a longitudinal design. Initially recruited for a previous study, 27 children with ASD and nine typically developing (TD) children, aged 6 to 11 years old, were re-recruited two to five years later. At both timepoints, MEG data were obtained while participants passively listened to sinusoidal pure tones. Bilateral primary/secondary auditory cortex time-domain measures (the 100 ms evoked response latency, M100) and spectrotemporal measures (gamma-band power and inter-trial coherence, ITC) were examined. MEG measures were also qualitatively examined for five children who exhibited "optimal outcome": participants who were initially on the spectrum but no longer met diagnostic criteria at follow-up. M100 latencies were delayed in ASD versus TD at the initial exam (~ 19 ms) and at follow-up (~ 18 ms). At both exams, M100 latencies were associated with clinical ASD severity. In addition, gamma-band evoked power and ITC were reduced in ASD versus TD. M100 latency and gamma-band maturation rates did not differ between ASD and TD. Of note, the cohort of five children who demonstrated "optimal outcome" additionally exhibited M100 latency and gamma-band activity mean values in between TD and ASD at both timepoints. Though justifying only qualitative interpretation, these "optimal outcome" related data are presented here to motivate future studies. Children with ASD showed perturbed auditory cortex neural activity, as evidenced by M100 latency delays as well as reduced transient gamma-band activity. Despite evidence for maturation of these responses in ASD, the neural abnormalities in ASD persisted across time.
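    Inter-trial coherence (ITC), one of the spectrotemporal measures above, is the magnitude of the average unit-length phase vector across trials. The sketch below computes it at 40 Hz from simulated single-trial data; the sampling rate, trial counts, and FFT-based phase estimate are illustrative assumptions, not the authors' MEG pipeline.

```python
# Minimal sketch: inter-trial coherence (ITC) at one frequency, defined as the
# magnitude of the mean of unit-length phase vectors across trials.
import numpy as np

fs = 600.0                         # sampling rate (Hz), assumed
n_trials, n_times = 100, 600       # 1 s epochs, assumed
rng = np.random.default_rng(0)
t = np.arange(n_times) / fs
# toy trials: a 40 Hz response with small phase jitter, plus noise
trials = np.sin(2 * np.pi * 40 * t + rng.normal(0, 0.4, (n_trials, 1))) \
         + rng.normal(0, 1.0, (n_trials, n_times))

spectra = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(n_times, d=1 / fs)
bin40 = np.argmin(np.abs(freqs - 40.0))

phases = np.angle(spectra[:, bin40])
itc = np.abs(np.mean(np.exp(1j * phases)))   # 0 = random phase, 1 = perfectly locked
print(f"ITC at 40 Hz: {itc:.2f}")
```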

  4. Transcriptional maturation of the mouse auditory forebrain.

    Science.gov (United States)

    Hackett, Troy A; Guo, Yan; Clause, Amanda; Hackett, Nicholas J; Garbett, Krassimira; Zhang, Pan; Polley, Daniel B; Mirnics, Karoly

    2015-08-14

    The main findings were as follows: (1) gene expression patterns were tightly clustered by postnatal age and brain region; (2) comparing A1 and MG, the total numbers of differentially expressed genes were comparable from P7 to P21, then dropped to nearly half by adulthood; (3) comparing successive age groups, the greatest numbers of differentially expressed genes were found between P7 and P14 in both regions, followed by a steady decline in numbers with age; (4) maturational trajectories in expression levels varied at the single-gene level (increasing, decreasing, static, other); (5) between regions, the profiles of single genes were often asymmetric; (6) GSEA revealed that gene sets related to neural activity and plasticity were typically upregulated from P7 to adulthood, while those related to structure tended to be downregulated; (7) GSEA and pathway analyses of selected functional networks were not predictive of expression patterns in the auditory forebrain for all genes, reflecting regional specificity at the single-gene level. Gene expression in the auditory forebrain during postnatal development is in constant flux and becomes increasingly stable with age. Maturational changes are evident from the global level down to single genes. Transcriptome profiles in A1 and MG are distinct at all ages and differ from those of other brain regions. The database generated by this study provides a rich foundation for the identification of novel developmental biomarkers, functional gene pathways, and targeted studies of postnatal maturation in the auditory forebrain.
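    A minimal sketch of how differentially expressed genes between two age groups might be counted, assuming a per-gene Welch t-test with Benjamini-Hochberg correction on simulated data; this is not the study's actual transcriptomics pipeline.

```python
# Minimal sketch: counting differentially expressed genes between two age
# groups (e.g., P7 vs P14) on a simulated expression matrix. Sample sizes,
# effect sizes, and the FDR threshold are illustrative assumptions.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes, n_samples = 2000, 6                 # 6 animals per age group, assumed
p7  = rng.normal(5.0, 1.0, (n_genes, n_samples))
p14 = rng.normal(5.0, 1.0, (n_genes, n_samples))
p14[:150] += 1.5                             # plant 150 "maturing" genes

t, p = stats.ttest_ind(p7, p14, axis=1, equal_var=False)
reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
print(f"differentially expressed genes (FDR < 0.05): {reject.sum()}")
```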

  5. Neural Correlates of Automatic and Controlled Auditory Processing in Schizophrenia

    Science.gov (United States)

    Morey, Rajendra A.; Mitchell, Teresa V.; Inan, Seniha; Lieberman, Jeffrey A.; Belger, Aysenil

    2009-01-01

    Individuals with schizophrenia demonstrate impairments in selective attention and sensory processing. The authors assessed differences in brain function between 26 participants with schizophrenia and 17 comparison subjects engaged in automatic (unattended) and controlled (attended) auditory information processing using event-related functional MRI. Lower regional neural activation during automatic auditory processing in the schizophrenia group was not confined to just the temporal lobe, but also extended to prefrontal regions. Controlled auditory processing was associated with a distributed frontotemporal and subcortical dysfunction. Differences in activation between these two modes of auditory information processing were more pronounced in the comparison group than in the patient group. PMID:19196926

  6. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive contexts. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years, neurophysiological and lesion studies have indicated a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in processing, integrating, and retaining communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Neural oscillations in auditory working memory

    OpenAIRE

    Wilsch, A.

    2015-01-01

    The present thesis investigated memory load and memory decay in auditory working memory. Alpha power as a marker for memory load served as the primary indicator for load and decay fluctuations hypothetically reflecting functional inhibition of irrelevant information. Memory load was induced by presenting auditory signals (syllables and pure-tone sequences) in noise because speech-in-noise has been shown before to increase memory load. The aim of the thesis was to assess with magnetoencephalog...

  8. What works in auditory working memory? A neural oscillations perspective.

    Science.gov (United States)

    Wilsch, Anna; Obleser, Jonas

    2016-06-01

    Working memory is a limited resource: brains can only maintain small amounts of sensory input (memory load) over a brief period of time (memory decay). The dynamics of slow neural oscillations as recorded using magneto- and electroencephalography (M/EEG) provide a window into the neural mechanics of these limitations. In particular, oscillations in the alpha range (8-13 Hz) are a sensitive marker for memory load. Moreover, according to current models, the resultant working memory load is determined by the relative noise in the neural representation of maintained information. The auditory domain allows memory researchers to apply and test the concept of noise quite literally: employing degraded stimulus acoustics increases memory load and, at the same time, allows assessing the cognitive resources required to process speech in noise in an ecologically valid and clinically relevant way. The present review first summarizes recent findings on neural oscillations, especially alpha power, and how they reflect memory load and memory decay in auditory working memory. The focus is specifically on memory load resulting from acoustic degradation. These findings are then contrasted with contextual factors that benefit neural as well as behavioral markers of memory performance by reducing representational noise. We end by discussing the functional role of alpha power in auditory working memory and suggest extensions of the current methodological toolkit. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
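    A minimal sketch of the kind of alpha-band (8-13 Hz) power estimate such M/EEG studies rely on, using Welch's method on a simulated single-channel signal; the sampling rate, signal, and window length are assumptions.

```python
# Minimal sketch: alpha-band (8-13 Hz) power of one channel via Welch's method.
# The simulated signal and parameters are illustrative, not an analysis pipeline.
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / fs)                 # 10 s of data
signal = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)  # 10 Hz rhythm + noise

freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))   # 2 s windows
alpha = (freqs >= 8) & (freqs <= 13)
alpha_power = psd[alpha].mean()              # mean PSD within the alpha band
print(f"alpha-band power: {alpha_power:.3f}")
```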

  9. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists across 20 varied stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and in the left dorsomedial prefrontal cortex; the latter is consistent with a frontal decision-making process common in identification tasks. The negatively valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids auditory scene analysis.

  10. Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.

    Science.gov (United States)

    Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas

    2015-12-09

    Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. Introducing a new auditory memory
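    The "steeper slopes of a psychometric curve" above come from fitting a psychometric function to choice proportions. Below is a minimal sketch of such a fit, with fabricated example data and an assumed logistic form rather than the authors' model.

```python
# Minimal sketch: fit a logistic psychometric function to proportion-"higher"
# responses and read off its slope (the precision measure). Data are fabricated.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, mu, slope):
    """Logistic function: probability of responding 'higher' vs pitch difference."""
    return 1.0 / (1.0 + np.exp(-slope * (x - mu)))

pitch_diff = np.array([-4, -2, -1, 0, 1, 2, 4], dtype=float)     # semitones, assumed
p_higher   = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.97])

(mu, slope), _ = curve_fit(psychometric, pitch_diff, p_higher, p0=[0.0, 1.0])
print(f"point of subjective equality: {mu:.2f} semitones, slope: {slope:.2f}")
```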

  11. Maturation of the auditory t-complex brain response across adolescence.

    Science.gov (United States)

    Mahajan, Yatin; McArthur, Genevieve

    2013-02-01

    Adolescence is a time of great change in the brain in terms of structure and function. It is possible to track the development of neural function across adolescence using auditory event-related potentials (ERPs). This study tested whether the brain's functional processing of sound changed across adolescence. We measured passive auditory t-complex peaks to pure tones and consonant-vowel (CV) syllables in 90 children and adolescents aged 10-18 years, as well as 10 adults. Across adolescence, Na amplitude increased to tones and speech at the right, but not left, temporal site. Ta amplitude decreased at the right temporal site for tones, and at both sites for speech. The Tb remained constant at both sites. The Na and Ta appeared to mature later in the right than the left hemisphere. The t-complex peaks Na and Tb exhibited left lateralization, and Ta showed right lateralization. Thus, the functional processing of sound continued to develop across adolescence and into adulthood. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  12. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. The resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears, separated into the higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed has not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced the scale illusion (ILL) and tones that mimicked the percept of the scale illusion (PCP), and we compared the activation responses evoked by those stimuli using region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation produced by the illusion process. The activation differences between the two stimuli in the superior temporal auditory cortex, measured at varied tempi of tone presentation, were not explained by adaptation. Instead, excess activation for the ILL stimulus relative to the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. A Neural Circuit for Auditory Dominance over Visual Perception.

    Science.gov (United States)

    Song, You-Hyang; Kim, Jae-Hyun; Jeong, Hye-Won; Choi, Ilsong; Jeong, Daun; Kim, Kwansoo; Lee, Seung-Hee

    2017-02-22

    When conflicts occur during integration of visual and auditory information, one modality often dominates the other, but the underlying neural circuit mechanism remains unclear. Using auditory-visual discrimination tasks for head-fixed mice, we found that audition dominates vision in a process mediated by interaction between inputs from the primary visual (VC) and auditory (AC) cortices in the posterior parietal cortex (PTLp). Co-activation of the VC and AC suppresses VC-induced PTLp responses, leaving AC-induced responses. Furthermore, parvalbumin-positive (PV+) interneurons in the PTLp mainly receive AC inputs, and muscimol inactivation of the PTLp or optogenetic inhibition of its PV+ neurons abolishes auditory dominance in the resolution of cross-modal sensory conflicts without affecting perception in either modality. Conversely, optogenetic activation of PV+ neurons in the PTLp enhances the auditory dominance. Thus, our results demonstrate that AC input-specific feedforward inhibition of VC inputs in the PTLp is responsible for the auditory dominance during cross-modal integration. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Music training relates to the development of neural mechanisms of selective auditory attention.

    Science.gov (United States)

    Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina

    2015-04-01

    Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.

    Science.gov (United States)

    Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C

    2015-11-04

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition
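    The synthetic FM tones described above carry only the prosodic contour of speech, parameterized by mean pitch (F0M) and pitch variability (F0SD). A minimal sketch of how such a tone could be synthesized is shown below; the contour shape and parameter values are assumptions, not the study's stimuli.

```python
# Minimal sketch: synthesize a frequency-modulated (FM) tone whose mean pitch
# (F0M) and pitch variability (F0SD) mimic a slow prosodic contour.
import numpy as np

fs = 16000.0                      # sampling rate (Hz), assumed
dur = 1.0                         # seconds
t = np.arange(0, dur, 1 / fs)

f0_mean, f0_sd = 200.0, 30.0      # F0M and F0SD in Hz, assumed
contour = f0_mean + f0_sd * np.sin(2 * np.pi * 3 * t)   # 3 Hz pitch modulation

phase = 2 * np.pi * np.cumsum(contour) / fs             # integrate frequency -> phase
fm_tone = 0.5 * np.sin(phase)                           # the stimulus waveform
print("peak instantaneous frequency (Hz):", contour.max())
```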

  16. Location coding by opponent neural populations in the auditory cortex.

    Directory of Open Access Journals (Sweden)

    G Christopher Stecker

    2005-03-01

    Although the auditory cortex plays a necessary role in sound localization, physiological investigations in the cortex reveal inhomogeneous sampling of auditory space that is difficult to reconcile with localization behavior under the assumption of local spatial coding. Most neurons respond maximally to sounds located far to the left or right side, with few neurons tuned to the frontal midline. Paradoxically, psychophysical studies show optimal spatial acuity across the frontal midline. In this paper, we revisit the problem of inhomogeneous spatial sampling in three fields of cat auditory cortex. In each field, we confirm that neural responses tend to be greatest for lateral positions, but show the greatest modulation for near-midline source locations. Moreover, identification of source locations based on cortical responses shows sharp discrimination of left from right but relatively inaccurate discrimination of locations within each half of space. Motivated by these findings, we explore an opponent-process theory in which sound-source locations are represented by differences in the activity of two broadly tuned channels formed by contra- and ipsilaterally preferring neurons. Finally, we demonstrate a simple model, based on spike-count differences across cortical populations, that provides bias-free, level-invariant localization, and thus also a solution to the "binding problem" of associating spatial information with other nonspatial attributes of sounds.
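    A minimal sketch of the opponent-channel readout described above, in which location is decoded from the difference between two broadly tuned populations. The tuning curves and Poisson spike counts are simulated assumptions, not the recorded cat data.

```python
# Minimal sketch of the opponent-channel idea: azimuth is read out from the
# difference in summed spike counts of two broadly tuned populations
# (left- vs right-preferring). All tuning parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
azimuths = np.linspace(-90, 90, 37)               # degrees, left negative

def population_rate(az, preferred_side):
    """Broad sigmoidal tuning favoring one hemifield (+1 = right, -1 = left)."""
    return 20.0 / (1.0 + np.exp(-preferred_side * az / 20.0)) + 2.0

left_channel  = population_rate(azimuths, -1)     # prefers left hemifield
right_channel = population_rate(azimuths, +1)     # prefers right hemifield

# one trial of Poisson spike counts per azimuth; decode with the normalized
# channel difference, which changes most steeply around the midline
left_counts  = rng.poisson(left_channel)
right_counts = rng.poisson(right_channel)
opponent = (right_counts - left_counts) / (right_counts + left_counts)

for az, code in zip(azimuths[::9], opponent[::9]):
    print(f"azimuth {az:+.0f} deg -> opponent code {code:+.2f}")
```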

  17. Auditory Neural Prostheses – A Window to the Future

    Directory of Open Access Journals (Sweden)

    Mohan Kameshwaran

    2015-06-01

    Hearing loss is one of the commonest congenital anomalies affecting children worldwide. The incidence of congenital hearing loss is more pronounced in developing countries like the Indian sub-continent, especially with the problems of consanguinity. Hearing loss is a double tragedy, as it leads not only to deafness but also to language deprivation. However, hearing loss is the only truly remediable handicap, due to remarkable advances in biomedical engineering and surgical techniques. Auditory neural prostheses help to augment or restore hearing by integration of an external circuitry with the peripheral hearing apparatus and the central circuitry of the brain. A cochlear implant (CI) is a surgically implantable device that helps restore hearing in patients with severe-profound hearing loss unresponsive to amplification by conventional hearing aids. CIs are electronic devices designed to detect mechanical sound energy and convert it into electrical signals that can be delivered to the cochlear nerve, bypassing the damaged hair cells of the cochlea. The only true prerequisite is an intact auditory nerve. The emphasis is on implantation as early as possible to maximize speech understanding and perception. Bilateral CI has significant benefits, which include improved speech perception in noisy environments and improved sound localization. Presently, the indications for CI have widened, and these expanded indications for implantation are related to age, additional handicaps, residual hearing, and special etiologies of deafness. Combined electric and acoustic stimulation (EAS)/hybrid devices are designed for individuals with binaural low-frequency hearing and severe-to-profound high-frequency hearing loss. Auditory brainstem implantation (ABI) is a safe and effective means of hearing rehabilitation in patients with retrocochlear disorders, such as neurofibromatosis type 2 (NF2) or congenital cochlear nerve aplasia, wherein the cochlear nerve is damaged.

  18. Neural effects of cognitive control load on auditory selective attention.

    Science.gov (United States)

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R; Mangalathu, Jain; Desai, Anjali

    2014-08-01

    Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 ms, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine Pecenka

    2013-08-01

    Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper, and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
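    A minimal sketch of the cross-correlation index of temporal prediction mentioned above, comparing lag-0 and lag-1 correlations between inter-tap intervals and pacing inter-onset intervals on simulated data; the exact index used by the authors may differ.

```python
# Minimal sketch: indexing temporal prediction by cross-correlating inter-tap
# intervals (ITIs) with the pacing sequence's inter-onset intervals (IOIs).
# A lag-0 correlation dominating the lag-1 correlation suggests prediction of
# tempo changes rather than reactive tracking. Data here are simulated.
import numpy as np

rng = np.random.default_rng(3)
iois = np.linspace(600, 450, 40) + rng.normal(0, 5, 40)   # accelerating sequence (ms)
itis = iois + rng.normal(0, 10, 40)                       # a "predictive" tapper + motor noise

def corr_at_lag(x, y, lag):
    """Pearson correlation between x[t] and y[t - lag]."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[lag:], y[:-lag])[0, 1]

lag0 = corr_at_lag(itis, iois, 0)    # prediction: tap interval matches current IOI
lag1 = corr_at_lag(itis, iois, 1)    # tracking: tap interval follows previous IOI
print(f"lag-0 r = {lag0:.2f}, lag-1 r = {lag1:.2f}, prediction/tracking = {lag0 / lag1:.2f}")
```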

  20. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    Science.gov (United States)

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successfully reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self

  1. Fluoxetine pretreatment promotes neuronal survival and maturation after auditory fear conditioning in the rat amygdala.

    Directory of Open Access Journals (Sweden)

    Lizhu Jiang

    The amygdala is a critical brain region for auditory fear conditioning, which is a stressful condition for experimental rats. Adult neurogenesis in the dentate gyrus (DG) of the hippocampus, known to be sensitive to behavioral stress and to treatment with the antidepressant fluoxetine (FLX), is involved in the formation of hippocampus-dependent memories. Here, we investigated whether neurogenesis also occurs in the amygdala and contributes to auditory fear memory. In rats showing persistent auditory fear memory following fear conditioning, we found that the survival of new-born cells and the number of new-born cells that differentiated into mature neurons (labeled by BrdU and NeuN) decreased in the amygdala, but the number of cells that developed into astrocytes (labeled by BrdU and GFAP) increased. Chronic pretreatment with FLX partially rescued the reduction in neurogenesis in the amygdala and slightly suppressed the maintenance of the long-lasting auditory fear memory 30 days after the fear conditioning. The present results suggest that adult neurogenesis in the amygdala is sensitive to antidepressant treatment and may weaken long-lasting auditory fear memory.

  2. The pattern of auditory brainstem response wave V maturation in cochlear-implanted children.

    Science.gov (United States)

    Thai-Van, Hung; Cozma, Sebastian; Boutitie, Florent; Disant, François; Truy, Eric; Collet, Lionel

    2007-03-01

    Maturation of acoustically evoked brainstem responses (ABR) in hearing children is not complete at birth but rather continues over the first two years of life. In particular, it has been established that the decrease in ABR wave V latency can be modeled as the sum of two decaying exponential functions with respective time constants of 4 and 50 weeks [Eggermont, J.J., Salamy, A., 1988a. Maturational time-course for the ABR in preterm and full term infants. Hear Res 33, 35-47; Eggermont, J.J., Salamy, A., 1988b. Development of ABR parameters in a preterm and a term born population. Ear Hear 9, 283-9]. Here, we investigated the maturation of electrically evoked auditory brainstem responses (EABR) in 55 deaf children who recovered hearing after cochlear implantation, and proposed a predictive model of EABR maturation depending on the onset of deafness. The pattern of EABR maturation over the first 2 years of cochlear implant use was compared with the normal pattern of ABR maturation in hearing children. Changes in EABR wave V latency over the 2 years following cochlear implant connection were analyzed in two groups of children. The first group (n=41) consisted of children with early-onset deafness (mostly congenital), and the second (n=14) of children who had become profoundly deaf after 1 year of age. The modeling of changes in EABR wave V latency with time was based on the mean values from each of the two groups, allowing comparison of the rates of EABR maturation between groups. Differences between EABRs elicited at the basal and apical ends of the implant electrode array were also tested. There was no influence of age at implantation on the rate of wave V latency change. The main factor for EABR changes was the time in sound. Indeed, significant maturation was observed over the first 2 years of implant use only in the group with early-onset deafness. In this group, maturation of wave V progressed as in the ABR model of [Eggermont, J.J., Salamy, A., 1988a
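    A minimal sketch of the two-exponential wave V maturation model cited above (time constants of roughly 4 and 50 weeks); the adult asymptote and component amplitudes are assumed values for illustration, not parameters from the cited studies.

```python
# Minimal sketch: wave V latency decaying toward an adult value as the sum of
# two exponentials with time constants of ~4 and ~50 weeks. Amplitudes and the
# adult asymptote below are assumptions, not fitted values.
import numpy as np

def wave_v_latency(age_weeks, adult=5.6, a_fast=1.2, a_slow=1.0,
                   tau_fast=4.0, tau_slow=50.0):
    """Latency (ms) = adult asymptote + fast and slow decaying components."""
    return (adult
            + a_fast * np.exp(-age_weeks / tau_fast)
            + a_slow * np.exp(-age_weeks / tau_slow))

for age in (0, 8, 26, 52, 104):
    print(f"age {age:3d} wk: predicted wave V latency {wave_v_latency(age):.2f} ms")
```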

  3. From sensation to percept: the neural signature of auditory event-related potentials.

    Science.gov (United States)

    Joos, Kathleen; Gilles, Annick; Van de Heyning, Paul; De Ridder, Dirk; Vanneste, Sven

    2014-05-01

    An external auditory stimulus induces an auditory sensation which may lead to a conscious auditory perception. Although the sensory aspect is well known, it remains unclear how an auditory stimulus results in an individual's conscious percept. To unravel the uncertainties concerning the neural correlates of a conscious auditory percept, event-related potentials may serve as a useful tool. In the current review we mainly wanted to shed light on the perceptual aspects of auditory processing, and therefore we mainly focused on the auditory late-latency responses. Moreover, there is increasing evidence that perception is an active process in which the brain searches for the information it expects to be present, suggesting that auditory perception requires both bottom-up (i.e., sensory) and top-down (i.e., prediction-driven) processing. Therefore, the auditory evoked potentials will be interpreted in the context of the Bayesian brain model, in which the brain predicts which information it expects and when this will happen. The internal representation of the auditory environment will be verified by sensation samples of the environment (P50, N100). When this incoming information violates the expectation, it will induce the emission of a prediction error signal (Mismatch Negativity), activating higher-order neural networks and inducing the update of prior internal representations of the environment (P300). Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Noise-driven manifestation of learning in mature neural networks

    International Nuclear Information System (INIS)

    Monterola, Christopher; Saloma, Caesar

    2002-01-01

    We show that the generalization capability of a mature thresholding neural network to process above-threshold disturbances in a noise-free environment is extended to subthreshold disturbances by ambient noise, without retraining. The ability to benefit from noise is intrinsic and does not have to be learned separately. The nonlinear dependence of sensitivity on noise strength is significantly narrower than in individual threshold systems. Noise has a minimal effect on network performance for above-threshold signals. We resolve two seemingly contradictory responses of trained networks to noise: their ability to benefit from its presence and their robustness against strong, noisy disturbances.

  5. The maturational process of the auditory system in the first year of life characterized by brainstem auditory evoked potentials

    Directory of Open Access Journals (Sweden)

    Raquel Beltrão Amorim

    2009-01-01

    The study of brainstem auditory evoked potentials (BAEP) allows obtaining the electrophysiological activity generated from the cochlear nerve to the inferior colliculus. In the first months of life, a period of greater neuronal plasticity, important changes are observed in the absolute latencies and inter-peak intervals of the BAEP, which occur up to the completion of the maturational process, around 18 months of life in full-term newborns, when the response is similar to that of adults. OBJECTIVE: The goal of this study was to establish normal values of absolute latencies for waves I, III and V and inter-peak intervals I-III, III-V and I-V of the BAEP performed in full-term infants attending the Infant Hearing Health Program of the Speech-Language Pathology and Audiology Course at Bauru School of Dentistry, Brazil, with no risk history for hearing impairment. MATERIAL AND METHODS: The stimulation parameters were: rarefaction click stimuli presented through a 3A insert earphone, an intensity of 80 dB nHL, a rate of 21.1 clicks/s, a band-pass filter of 30-3,000 Hz, and an average of 2,000 stimuli. A sample of 86 infants was first divided according to gestational age into preterm (n=12) and full-term (n=74) groups, and then according to chronological age into three periods: P1: 0 to 29 days (n=46), P2: 30 days to 5 months 29 days (n=28), and P3: above 6 months (n=12). RESULTS: The absolute latency of wave I was similar to that of adults, generally by the 1st month of life, demonstrating complete maturation at the level of the auditory nerve. For waves III and V, there was a gradual decrease of absolute latencies with age, characterizing the maturation of axons and synaptic mechanisms at the brainstem level. CONCLUSION: Age proved to be a determining factor for the absolute latencies of the BAEP components, especially those generated in the brainstem, in the first year of life.

  6. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates.

    Science.gov (United States)

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-07-20

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys.

  7. Effects of prematurity on language acquisition and auditory maturation: a systematic review.

    Science.gov (United States)

    Rechia, Inaê Costa; Oliveira, Luciéle Dias; Crestani, Anelise Henrich; Biaggio, Eliara Pinto Vieira; Souza, Ana Paula Ramos de

    2016-01-01

    To verify what damage prematurity causes to hearing and language. We used the descriptors language/linguagem, hearing/audição, and prematurity/prematuridade in the LILACS, MEDLINE, Cochrane Library, and SciELO databases. Eligible study designs were randomized controlled trials, non-randomized intervention studies, and descriptive studies (cross-sectional, cohort, and case-control projects). The articles were assessed independently by two authors according to the selection criteria. Twenty-six studies were selected, of which seven were published in Brazil and 19 in the international literature. Nineteen studies compared full-term and preterm infants. Two of the studies made comparisons between premature infants who were small for gestational age and those appropriate for gestational age. In four studies, the sample consisted of children with extreme prematurity, while the other studies were conducted in children with severe and moderate prematurity. To assess hearing, these studies used otoacoustic emissions, brainstem evoked potentials, tympanometry, auditory steady-state responses, and visual reinforcement audiometry. For language assessment, most of the articles used the Bayley Scales of Infant and Toddler Development. Most of the studies reviewed observed that prematurity is directly or indirectly related to the acquisition of auditory and language abilities early in life. Thus, prematurity, as well as aspects related to it (gestational age, low birth weight, and complications at birth), affects maturation of the central auditory pathway and may have negative effects on language acquisition.

  8. Maturation of long latency auditory evoked potentials in hearing children: systematic review.

    Science.gov (United States)

    Silva, Liliane Aparecida Fagundes; Magliaro, Fernanda Cristina Leite; Carvalho, Ana Claudia Martinho de; Matas, Carla Gentile

    2017-05-15

    To analyze how Auditory Long Latency Evoked Potentials (LLAEP) change with age in the pediatric population through a systematic literature review. After formulation of the research question, a bibliographic survey was conducted in five databases with the following descriptors: Electrophysiology (Eletrofisiologia), Auditory Evoked Potentials (Potenciais Evocados Auditivos), Child (Criança), Neuronal Plasticity (Plasticidade Neuronal), and Audiology (Audiologia). Inclusion was restricted to level 1 evidence articles published between 1995 and 2015 in Brazilian Portuguese or English. Aspects related to the emergence, morphology, and latency of the P1, N1, P2, and N2 components were analyzed. A total of 388 studies were found; however, only 21 studies met the established criteria. The P1 component is the most frequently observed component in young children, occurring around 100-150 ms, with a latency that tends to decrease as chronological age increases. The N2 component was shown to be the second most commonly observed component in children, occurring around 200-250 ms. The N1 and P2 components are less frequent and begin to be recorded over the course of the maturational process. The maturation of LLAEP occurs gradually, and the emergence of the P1, N1, P2, and N2 components as well as their latency values are variable in childhood. The P1 and N2 components are the most often observed and described in the pediatric population. The diversity of protocols makes comparison between studies difficult.

  9. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    Science.gov (United States)

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
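    A minimal sketch of cross-validated pattern decoding in the spirit of the analysis above, using simulated voxel patterns and an assumed logistic-regression classifier rather than the authors' pipeline.

```python
# Minimal sketch: cross-validated decoding of stimulus location from simulated
# voxel patterns. Trial counts, voxel counts, and effect sizes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials_per_loc, n_voxels = 40, 120
locations = [0, 1, 2, 3]                      # e.g., four visual-field positions

X, y = [], []
for loc in locations:
    pattern = rng.normal(0, 1, n_voxels)      # weak location-specific pattern
    X.append(0.3 * pattern + rng.normal(0, 1, (n_trials_per_loc, n_voxels)))
    y.append(np.full(n_trials_per_loc, loc))
X, y = np.vstack(X), np.concatenate(y)

scores = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / len(locations):.2f})")
```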

  10. Neural plasticity expressed in central auditory structures with and without tinnitus

    Directory of Open Access Journals (Sweden)

    Larry E Roberts

    2012-05-01

    Sensory training therapies for tinnitus are based on the assumption that, notwithstanding neural changes related to tinnitus, auditory training can alter the response properties of neurons in auditory pathways. To address this question, we investigated whether brain changes induced by sensory training in tinnitus sufferers and measured by EEG are similar to those induced in age- and hearing-loss-matched individuals without tinnitus trained on the same auditory task. Auditory training was given using a 5 kHz, 40-Hz amplitude-modulated sound that was in the tinnitus frequency region of the tinnitus subjects and enabled extraction of the 40-Hz auditory steady-state response (ASSR) and the P2 transient response, known to localize to primary and nonprimary auditory cortex, respectively. P2 amplitude increased with training equally in participants with tinnitus and in control subjects, suggesting normal remodeling of nonprimary auditory regions in tinnitus. However, training-induced changes in the ASSR differed between the tinnitus and control groups. In controls, ASSR phase advanced toward the stimulus waveform by about ten degrees over training, in agreement with previous results obtained in young normal-hearing individuals. However, ASSR phase did not change significantly with training in the tinnitus group, although some participants showed phase shifts resembling controls. On the other hand, ASSR amplitude increased with training in the tinnitus group, whereas in controls this response (which is difficult to remodel in young normal-hearing subjects) did not change with training. These results suggest that neural changes related to tinnitus altered how neural plasticity was expressed in the region of primary but not nonprimary auditory cortex. Auditory training did not reduce tinnitus loudness, although a small effect on the tinnitus spectrum was detected.

  11. A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance.

    Science.gov (United States)

    von Trapp, Gardiner; Buran, Bradley N; Sen, Kamal; Semple, Malcolm N; Sanes, Dan H

    2016-10-26

    The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability
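    A minimal sketch of the signal detection theory framework above: neural sensitivity (d') computed from trial-by-trial spike counts, showing how lower trial-to-trial variability alone raises d' at constant mean rates. All numbers are illustrative assumptions.

```python
# Minimal sketch: d' from spike counts for "modulation present" vs "absent"
# trials. Mean rates are held fixed; only the trial-to-trial SD changes.
import numpy as np

def dprime(signal_counts, noise_counts):
    """d' = difference of means divided by the RMS of the two SDs."""
    m_s, m_n = signal_counts.mean(), noise_counts.mean()
    sd = np.sqrt(0.5 * (signal_counts.var(ddof=1) + noise_counts.var(ddof=1)))
    return (m_s - m_n) / sd

rng = np.random.default_rng(5)
n = 200
for label, sd in (("passive listening", 4.0), ("task engaged", 2.0)):
    noise  = rng.normal(10, sd, n)   # modulation absent
    signal = rng.normal(14, sd, n)   # modulation present
    print(f"{label:17s}: d' = {dprime(signal, noise):.2f}")
```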

  12. Speaking Two Languages Enhances an Auditory but Not a Visual Neural Marker of Cognitive Inhibition

    Directory of Open Access Journals (Sweden)

    Mercedes Fernandez

    2014-09-01

    The purpose of the present study was to replicate and extend our original findings of enhanced neural inhibitory control in bilinguals. We compared English monolinguals to Spanish/English bilinguals on a non-linguistic, auditory Go/NoGo task while recording event-related brain potentials. New to this study was the visual Go/NoGo task, which we included to investigate whether enhanced neural inhibition in bilinguals extends from the auditory to the visual modality. Results confirmed our original findings and revealed greater inhibition in bilinguals compared to monolinguals. As predicted, compared to monolinguals, bilinguals showed increased N2 amplitude during the auditory NoGo trials, which required inhibitory control, but no differences during the Go trials, which required a behavioral response and no inhibition. Interestingly, during the visual Go/NoGo task, event-related brain potentials did not distinguish the two groups, and behavioral responses were similar between the groups regardless of task modality. Thus, only auditory trials that required inhibitory control revealed between-group differences indicative of greater neural inhibition in bilinguals. These results show that experience-dependent neural changes associated with bilingualism are specific to the auditory modality and that the N2 event-related brain potential is a sensitive marker of this plasticity.

  13. Neural Correlates of Realistic and Unrealistic Auditory Space Perception

    Directory of Open Access Journals (Sweden)

    Akiko Callan

    2011-10-01

    Binaural recordings can simulate externalized auditory space perception over headphones. However, if the orientation of the recorder's head and the orientation of the listener's head are incongruent, the simulated auditory space is not realistic. For example, if a person lying flat on a bed listens to an environmental sound that was recorded by microphones inserted in the ears of a person who was in an upright position, the sound simulates an auditory space rotated 90 degrees relative to the real-world horizontal axis. Our question is whether brain activation patterns differ between the unrealistic auditory space (i.e., the orientation of the listener's head and the orientation of the recorder's head are incongruent) and the realistic auditory space (i.e., the orientations are congruent). River sounds that were binaurally recorded either in a supine position or in an upright body position served as auditory stimuli. During fMRI experiments, participants listened to the stimuli and pressed one of two buttons indicating the direction of the water flow (horizontal/vertical). Behavioral results indicated that participants could not differentiate between the congruent and the incongruent conditions. However, neuroimaging results showed that the congruent condition activated the planum temporale significantly more than the incongruent condition.

  14. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization Cortico-Striatal Activation.

    Science.gov (United States)

    Sameiro-Barbosa, Catia M; Geiser, Eveline

    2016-01-01

    The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system.

  16. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    Science.gov (United States)

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors 0270-6474/16/361620-11$15.00/0.

  17. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi eToida

    2016-01-01

    Full Text Available Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant's action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.

  19. Eps8 regulates hair bundle length and functional maturation of mammalian auditory hair cells.

    Science.gov (United States)

    Zampini, Valeria; Rüttiger, Lukas; Johnson, Stuart L; Franz, Christoph; Furness, David N; Waldhaus, Jörg; Xiong, Hao; Hackney, Carole M; Holley, Matthew C; Offenhauser, Nina; Di Fiore, Pier Paolo; Knipper, Marlies; Masetto, Sergio; Marcotti, Walter

    2011-04-01

    Hair cells of the mammalian cochlea are specialized for the dynamic coding of sound stimuli. The transduction of sound waves into electrical signals depends upon mechanosensitive hair bundles that project from the cell's apical surface. Each stereocilium within a hair bundle is composed of uniformly polarized and tightly packed actin filaments. Several stereociliary proteins have been shown to be associated with hair bundle development and function and are known to cause deafness in mice and humans when mutated. The growth of the stereociliar actin core is dynamically regulated at the actin filament barbed ends in the stereociliary tip. We show that Eps8, a protein with actin binding, bundling, and barbed-end capping activities in other systems, is a novel component of the hair bundle. Eps8 is localized predominantly at the tip of the stereocilia and is essential for their normal elongation and function. Moreover, we have found that Eps8 knockout mice are profoundly deaf and that IHCs, but not OHCs, fail to mature into fully functional sensory receptors. We propose that Eps8 directly regulates stereocilia growth in hair cells and also plays a crucial role in the physiological maturation of mammalian cochlear IHCs. Together, our results indicate that Eps8 is critical in coordinating the development and functionality of mammalian auditory hair cells.

  20. Dissociation of the Neural Correlates of Visual and Auditory Contextual Encoding

    Science.gov (United States)

    Gottlieb, Lauren J.; Uncapher, Melina R.; Rugg, Michael D.

    2010-01-01

    The present study contrasted the neural correlates of encoding item-context associations according to whether the contextual information was visual or auditory. Subjects (N = 20) underwent fMRI scanning while studying a series of visually presented pictures, each of which co-occurred with either a visually or an auditorily presented name. The task…

  1. Bird brains and songs : Neural mechanisms of auditory memory and perception in zebra finches

    NARCIS (Netherlands)

    Gobes, S.M.H.|info:eu-repo/dai/nl/304832669

    2009-01-01

    Songbirds, such as zebra finches, learn their songs from a ‘tutor’ (usually the father), early in life. There are strong parallels between the behavioural, cognitive and neural processes that underlie vocal learning in humans and songbirds. In both cases there is a sensitive period for auditory

  2. Neural entrainment to rhythmically-presented auditory, visual and audio-visual speech in children

    Directory of Open Access Journals (Sweden)

    Alan James Power

    2012-07-01

    Full Text Available Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal ‘samples’ of information from the speech stream at different rates, phase-resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (‘phase locking’). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase-locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically-developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable ba, presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a talking head). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the ba stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a ba in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling
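
    To make the entrainment measure concrete, the sketch below computes inter-trial phase coherence (ITC) at the stimulation (delta) rate and at a theta-band frequency from epoched EEG via the FFT. The epoch shape, sampling rate, and frequency bins are illustrative assumptions rather than the paradigm's exact analysis.

```python
# Hedged sketch: inter-trial phase coherence (ITC) at delta/theta rates from
# epoched EEG. Epoch dimensions and sampling rate are illustrative assumptions.
import numpy as np

fs = 250                          # sampling rate (Hz), assumed
n_trials, n_samples = 60, fs * 2  # 2-s epochs, assumed
rng = np.random.default_rng(2)

# Simulated single-channel epochs with a weak 2 Hz component plus noise.
t = np.arange(n_samples) / fs
epochs = 0.5 * np.sin(2 * np.pi * 2 * t) + rng.normal(0, 1, (n_trials, n_samples))

def itc(epochs, fs, freq):
    """Phase-locking across trials at one frequency: |mean of unit phasors|."""
    spectrum = np.fft.rfft(epochs, axis=1)
    freqs = np.fft.rfftfreq(epochs.shape[1], 1 / fs)
    bin_idx = np.argmin(np.abs(freqs - freq))
    phasors = spectrum[:, bin_idx] / np.abs(spectrum[:, bin_idx])
    return np.abs(phasors.mean())

print(f"ITC at 2 Hz (delta): {itc(epochs, fs, 2.0):.2f}")
print(f"ITC at 6 Hz (theta): {itc(epochs, fs, 6.0):.2f}")
```

    ITC near 1 indicates consistent phase alignment to the 2 Hz stream across trials, while values near 0 indicate no entrainment.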

  3. Neural dynamics underlying attentional orienting to auditory representations in short-term memory.

    Science.gov (United States)

    Backer, Kristina C; Binns, Malcolm A; Alain, Claude

    2015-01-21

    Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors 0270-6474/15/351307-12$15.00/0.
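
    A common way to quantify the alpha/low-beta desynchronization mentioned above is event-related desynchronization (ERD): band power after the retro-cue expressed as a percentage change from a pre-cue baseline. The band limits, time windows, and data in the sketch below are assumptions for illustration only, not the study's analysis.

```python
# Hedged sketch: alpha/low-beta event-related desynchronization (ERD), i.e.
# percent power change from a pre-cue baseline. Bands, windows and data are
# illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250
t = np.arange(-1.0, 2.0, 1 / fs)          # epoch from -1 s to +2 s around the cue
rng = np.random.default_rng(3)
epochs = rng.normal(0, 1, (40, t.size))   # simulated single-channel trials

def band_power(x, fs, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, x, axis=-1), axis=-1)
    return np.abs(analytic) ** 2

power = band_power(epochs, fs, 8, 20).mean(axis=0)   # trial-averaged alpha/low-beta power
baseline = power[(t >= -0.8) & (t <= -0.2)].mean()
erd = 100 * (power - baseline) / baseline            # % change from baseline
print(f"Mean ERD 0.3-1.0 s after the cue: {erd[(t >= 0.3) & (t <= 1.0)].mean():.1f} %")
```

    Negative ERD values correspond to the power suppression that the abstract reports as correlating with orienting performance.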

  4. Neural responses to complex auditory rhythms: the role of attending

    Directory of Open Access Journals (Sweden)

    Heather L Chapin

    2010-12-01

    Full Text Available The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus.

  5. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  6. Maturation of Rapid Auditory Temporal Processing and Subsequent Nonword Repetition Performance in Children

    Science.gov (United States)

    Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.

    2012-01-01

    According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…

  7. Auditory maturation in premature infants: a potential pitfall for early cochlear implantation.

    Science.gov (United States)

    Hof, Janny R; Stokroos, Robert J; Wix, Eduard; Chenault, Mickey; Gelders, Els; Brokx, Jan

    2013-08-01

    To describe spontaneous hearing improvement in the first years of life of a number of preterm neonates relative to cochlear implant candidacy. Retrospective case study. Hearing levels of 14 preterm neonates (mean gestational age at birth = 29 weeks) referred after newborn hearing screening were evaluated. Initial hearing thresholds ranged from 40 to 105 dBHL (mean = 85 dBHL). Hearing level improved to normal levels for four neonates and to moderate levels for five, whereas for five neonates, no improvement in hearing thresholds was observed and cochlear implantation was recommended. Three of the four neonates in whom the hearing improved to normal levels were born prior to 28 weeks gestational age. Hearing improvement was mainly observed prior to a gestational age of 80 weeks. Delayed maturation of an immature auditory pathway might be an important reason for referral after newborn hearing screening in premature infants. Caution is advised regarding early cochlear implantation in preterm born infants. Audiological follow-ups until at least 80 weeks gestational age are therefore recommended. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  8. Neural correlates of auditory short-term memory in rostral superior temporal cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo

    2014-12-01

    Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or during both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing and in their resistance to sounds intervening between the sample and match. Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Neural Activity During The Formation Of A Giant Auditory Synapse

    NARCIS (Netherlands)

    M.C. Sierksma (Martijn)

    2018-01-01

    The formation of synapses is a critical step in the development of the brain. During this developmental stage neural activity propagates across the brain from synapse to synapse. This activity is thought to instruct the precise, topological connectivity found in the sensory central

  10. A novel method for extraction of neural response from single channel cochlear implant auditory evoked potentials.

    Science.gov (United States)

    Sinkiewicz, Daniel; Friesen, Lendra; Ghoraani, Behnaz

    2017-02-01

    Cortical auditory evoked potentials (CAEP) are used to evaluate cochlear implant (CI) patient auditory pathways, but the CI device produces an electrical artifact, which obscures the relevant information in the neural response. Currently there are multiple methods that attempt to recover the neural response from the contaminated CAEP, but there is no gold standard that can quantitatively confirm the effectiveness of these methods. To address this crucial shortcoming, we develop a wavelet-based method to quantify the amount of artifact energy in the neural response. In addition, a novel technique for extracting the neural response from single channel CAEPs is proposed. The new method uses matching pursuit (MP) based feature extraction to represent the contaminated CAEP in a feature space, and support vector machines (SVM) to classify the components as normal hearing (NH) or artifact. The NH components are combined to recover the neural response without artifact energy, as verified using the evaluation tool. Although it needs some further evaluation, this approach is a promising method of electrical artifact removal from CAEPs. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
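
    The sketch below is a loose analogue of the MP + SVM pipeline described above: each waveform is sparsely decomposed onto a fixed random Gabor-like dictionary with orthogonal matching pursuit, and an SVM then classifies the resulting coefficient vectors as "neural" or "artifact". The dictionary size, sparsity level, labels, and data are simulated placeholders, not the paper's method.

```python
# Hedged sketch: sparse decomposition + SVM classification of simulated CAEPs.
# Dictionary, sparsity and labels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_samples, n_atoms, n_waveforms = 256, 64, 120

# Random Gabor-like dictionary (unit-norm atoms).
t = np.linspace(-1, 1, n_samples)
dictionary = np.stack([
    np.exp(-0.5 * ((t - rng.uniform(-1, 1)) / 0.2) ** 2)
    * np.cos(2 * np.pi * rng.uniform(2, 20) * t)
    for _ in range(n_atoms)
])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

# Simulated CAEPs: half contaminated by a large artifact-like transient.
labels = np.repeat([0, 1], n_waveforms // 2)            # 0 = neural, 1 = artifact
waveforms = rng.normal(0, 1, (n_waveforms, n_samples))
waveforms[labels == 1] += 5 * np.exp(-0.5 * ((t - 0.1) / 0.05) ** 2)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8)
features = np.stack([omp.fit(dictionary.T, w).coef_ for w in waveforms])

print(cross_val_score(SVC(kernel="linear"), features, labels, cv=5).mean())
```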

  11. Exploring the relationship between cortical GABA concentrations, auditory gamma-band responses and development in ASD: Evidence for an altered maturational trajectory in ASD.

    Science.gov (United States)

    Port, Russell G; Gaetz, William; Bloy, Luke; Wang, Dah-Jyuu; Blaskey, Lisa; Kuschner, Emily S; Levy, Susan E; Brodkin, Edward S; Roberts, Timothy P L

    2017-04-01

    Autism spectrum disorder (ASD) is hypothesized to arise from imbalances between excitatory and inhibitory neurotransmission (E/I imbalance). Studies have demonstrated E/I imbalance in individuals with ASD and also in corresponding rodent models. One neural process thought to be reliant on E/I balance is gamma-band activity (Gamma), with support arising from observed correlations between motor, as well as visual, Gamma and underlying GABA concentrations in healthy adults. Additionally, decreased Gamma has been observed in ASD individuals and relevant animal models, though the direct relationship between Gamma and GABA concentrations in ASD remains unexplored. This study combined magnetoencephalography (MEG) and edited magnetic resonance spectroscopy (MRS) in 27 typically developing individuals (TD) and 30 individuals with ASD. Auditory cortex localized phase-locked Gamma was compared to resting Superior Temporal Gyrus relative cortical GABA concentrations for both children/adolescents and adults. Children/adolescents with ASD exhibited significantly decreased GABA+/Creatine (Cr) levels, though typical Gamma. Additionally, these children/adolescents lacked the typical maturation of GABA+/Cr concentrations and gamma-band coherence. Furthermore, children/adolescents with ASD failed to exhibit the typical GABA+/Cr to gamma-band coherence association. This altered coupling during childhood/adolescence may result in the Gamma decreases observed in adults with ASD. Therefore, individuals with ASD exhibit improper local neuronal circuitry maturation during a childhood/adolescence critical period, when GABA is involved in configuring such circuit functioning. Provocatively, a novel line of treatment is suggested (with a critical time window): by increasing neural GABA levels in children/adolescents with ASD, proper local circuitry maturation may be restored, resulting in typical Gamma in adulthood. Autism Res 2017, 10: 593-607. © 2016 International Society for Autism Research.

  12. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence.

    Science.gov (United States)

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D; Chait, Maria

    2016-09-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence (the coincidence of sound elements in and across time) is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from one chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing about 150 ms after figure onset, that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that this area, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis. © The Author 2016. Published by Oxford University Press.
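
    For readers unfamiliar with the stimulus, the sketch below generates a toy stochastic figure-ground (SFG) signal: a sequence of short chords of random pure tones, with a fixed subset of "figure" frequencies repeated across several consecutive chords. Chord duration, tone counts, and frequency range are illustrative assumptions, not the published stimulus parameters.

```python
# Hedged sketch of a toy stochastic figure-ground (SFG) stimulus: random-tone
# chords with a repeating "figure" of fixed frequencies embedded partway
# through. All parameter values are illustrative assumptions.
import numpy as np

fs = 16000
chord_dur = 0.05                  # 50-ms chords (assumed)
n_chords, tones_per_chord = 40, 10
figure_size, figure_start, figure_len = 4, 20, 12
freq_pool = np.linspace(200, 4000, 120)

rng = np.random.default_rng(5)
figure_freqs = rng.choice(freq_pool, figure_size, replace=False)

t = np.arange(int(fs * chord_dur)) / fs
chords = []
for k in range(n_chords):
    freqs = list(rng.choice(freq_pool, tones_per_chord, replace=False))
    if figure_start <= k < figure_start + figure_len:
        freqs[:figure_size] = figure_freqs        # repeated "figure" components
    chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    chords.append(chord / tones_per_chord)

stimulus = np.concatenate(chords)
print(f"{stimulus.size / fs:.1f} s of SFG stimulus, figure at chords "
      f"{figure_start}-{figure_start + figure_len - 1}")
```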

  14. Neural correlates of auditory recognition memory in primate lateral prefrontal cortex.

    Science.gov (United States)

    Plakke, B; Ng, C-W; Poremba, A

    2013-08-06

    The neural underpinnings of working and recognition memory have traditionally been studied in the visual domain and these studies pinpoint the lateral prefrontal cortex (lPFC) as a primary region for visual memory processing (Miller et al., 1996; Ranganath et al., 2004; Kennerley and Wallis, 2009). Herein, we utilize single-unit recordings for the same region in monkeys (Macaca mulatta) but investigate a second modality examining auditory working and recognition memory during delayed matching-to-sample (DMS) performance. A large portion of neurons in the dorsal and ventral banks of the principal sulcus (area 46, 46/9) show DMS event-related activity to one or more of the following task events: auditory cues, memory delay, decision wait time, response, and/or reward portions. Approximately 50% of the neurons show evidence of auditory-evoked activity during the task and population activity demonstrated encoding of recognition memory in the form of match enhancement. However, neither robust nor sustained delay activity was observed. The neuronal responses during the auditory DMS task are similar in many respects to those found within the visual working memory domain, which supports the hypothesis that the lPFC, particularly area 46, functionally represents key pieces of information for recognition memory inclusive of decision-making, but regardless of modality. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Early life stress accelerates behavioral and neural maturation of the hippocampus in male mice.

    Science.gov (United States)

    Bath, K; Manzano-Nieves, G; Goodwill, H

    2016-06-01

    Early life stress (ELS) increases the risk for later cognitive and emotional dysfunction. ELS is known to truncate neural development through effects on suppressing cell birth, increasing cell death, and altering neuronal morphology, effects that have been associated with behavioral profiles indicative of precocious maturation. However, how earlier silencing of growth drives accelerated behavioral maturation has remained puzzling. Here, we test the novel hypothesis that ELS drives a switch from growth to maturation to accelerate neural and behavioral development. To test this, we used a mouse model of ELS, fragmented maternal care, and a cross-sectional dense-sampling approach focusing on the hippocampus, and measured the effects of ELS on the ontogeny of behavioral development and biomarkers of neural maturation. Consistent with previous work, ELS was associated with an earlier developmental decline in the expression of markers of cell proliferation (Ki-67) and differentiation (doublecortin). However, ELS also led to a precocious arrival of parvalbumin-positive cells, an earlier switch in NMDA receptor subunit expression (a marker of synaptic maturity), and an earlier rise in myelin basic protein expression (a key component of the myelin sheath). In addition, in a contextual fear-conditioning task, ELS accelerated the timed developmental suppression of contextual fear. Together, these data provide support for the hypothesis that ELS serves to switch neurodevelopment from processes of growth to maturation and promotes accelerated development of some forms of emotional learning. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. The neural correlates of coloured music: a functional MRI investigation of auditory-visual synaesthesia.

    Science.gov (United States)

    Neufeld, J; Sinke, C; Dillo, W; Emrich, H M; Szycik, G R; Dima, D; Bleich, S; Zedler, M

    2012-01-01

    In auditory-visual synaesthesia, all kinds of sound can induce additional visual experiences. To identify the brain regions mainly involved in this form of synaesthesia, functional magnetic resonance imaging (fMRI) has been used during non-linguistic sound perception (chords and pure tones) in synaesthetes and non-synaesthetes. Synaesthetes showed increased activation in the left inferior parietal cortex (IPC), an area involved in multimodal integration, feature binding and attention guidance. No significant group-differences could be detected in area V4, which is known to be related to colour vision and form processing. The results support the idea of the parietal cortex acting as sensory nexus area in auditory-visual synaesthesia, and as a common neural correlate for different types of synaesthesia. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Influence and timing of arrival of murine neural crest on pancreatic beta cell development and maturation.

    Science.gov (United States)

    Plank, Jennifer L; Mundell, Nathan A; Frist, Audrey Y; LeGrone, Alison W; Kim, Thomas; Musser, Melissa A; Walter, Teagan J; Labosky, Patricia A

    2011-01-15

    Interactions between cells from the ectoderm and mesoderm influence development of the endodermally-derived pancreas. While much is known about how mesoderm regulates pancreatic development, relatively little is understood about how and when the ectodermally-derived neural crest regulates pancreatic development and specifically, beta cell maturation. A previous study demonstrated that signals from the neural crest regulate beta cell proliferation and ultimately, beta cell mass. Here, we expand on that work to describe timing of neural crest arrival at the developing pancreatic bud and extend our knowledge of the non-cell autonomous role for neural crest derivatives in the process of beta cell maturation. We demonstrated that murine neural crest entered the pancreatic mesenchyme between the 26 and 27 somite stages (approximately 10.0 dpc) and became intermingled with pancreatic progenitors as the epithelium branched into the surrounding mesenchyme. Using a neural crest-specific deletion of the Forkhead transcription factor Foxd3, we ablated neural crest cells that migrate to the pancreatic primordium. Consistent with previous data, in the absence of Foxd3, and therefore the absence of neural crest cells, proliferation of insulin-expressing cells and insulin-positive area are increased. Analysis of endocrine cell gene expression in the absence of neural crest demonstrated that, although the number of insulin-expressing cells was increased, beta cell maturation was significantly impaired. Decreased MafA and Pdx1 expression illustrated the defect in beta cell maturation; we discovered that without neural crest, there was a reduction in the percentage of insulin-positive cells that co-expressed Glut2 and Pdx1 compared to controls. In addition, transmission electron microscopy analyses revealed decreased numbers of characteristic insulin granules and the presence of abnormal granules in insulin-expressing cells from mutant embryos. Together, these data demonstrate that

  18. Neural processing of auditory signals and modular neural control for sound tropism of walking machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Pasemann, Frank; Fischer, Joern

    2005-01-01

    and a neural preprocessing system together with a modular neural controller are used to generate a sound tropism of a four-legged walking machine. The neural preprocessing network is acting as a low-pass filter and it is followed by a network which discerns between signals coming from the left or the right....... The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired different walking patterns such that the machine walks straight, then turns towards a switched-on sound source, and then stops near to it....

  19. The Neural Border: Induction, Specification and Maturation of the territory that generates Neural Crest cells.

    Science.gov (United States)

    Pla, Patrick; Monsoro-Burq, Anne H

    2018-05-28

    The neural crest is induced at the edge between the neural plate and the nonneural ectoderm, in an area called the neural (plate) border, during gastrulation and neurulation. In recent years, many studies have explored how this domain is patterned and how the neural crest is induced within this territory, which also contributes to the prospective dorsal neural tube, the dorsalmost nonneural ectoderm, and placode derivatives in the anterior area. This review highlights the tissue interactions, the cell-cell signaling and the molecular mechanisms involved in this dynamic spatiotemporal patterning, resulting in the induction of the premigratory neural crest. Collectively, these studies allow the construction of a complex neural border and early neural crest gene regulatory network, mostly composed of transcriptional regulations but also, more recently, including novel signaling interactions. Copyright © 2018. Published by Elsevier Inc.

  20. Using Dual Process Models to Examine Impulsivity Throughout Neural Maturation.

    Science.gov (United States)

    Leshem, Rotem

    2016-01-01

    The multivariate construct of impulsivity is examined through neural systems and connections that comprise the executive functioning system. It is proposed that cognitive and behavioral components of impulsivity can be divided into two distinct groups, mediated by (1) the cognitive control system: deficits in top-down cognitive control processes referred to as action/cognitive impulsivity and (2) the socioemotional system: related to bottom-up affective/motivational processes referred to as affective impulsivity. Examination of impulsivity from a developmental viewpoint can guide future research, potentially enabling the selection of more effective interventions for impulsive individuals, based on the cognitive components requiring improvement.

  1. Bottom-up driven involuntary auditory evoked field change: constant sound sequencing amplifies but does not sharpen neural activity.

    Science.gov (United States)

    Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo

    2010-01-01

    The capability of involuntarily tracking certain sound signals during the simultaneous presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement was not significantly different between different band-eliminated noise conditions. Thus the present study confirms that by constant sound signal sequencing under nonattentive listening the neural activity in human auditory cortex can be enhanced, but not sharpened. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.

  3. Auditory Cortical Maturation in a Child with Cochlear Implant: Analysis of Electrophysiological and Behavioral Measures

    Science.gov (United States)

    Silva, Liliane Aparecida Fagundes; Couto, Maria Inês Vieira; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; de Carvalho, Ana Claudia Martinho; Matas, Carla Gentile

    2015-01-01

    The purpose of this study was to longitudinally assess the behavioral and electrophysiological hearing changes of a girl enrolled in a CI program, who had bilateral profound sensorineural hearing loss and underwent cochlear implantation surgery with electrode activation at 21 months of age. She was evaluated using the P1 component of the Long Latency Auditory Evoked Potential (LLAEP); speech perception tests of the Glendonald Auditory Screening Procedure (GASP); the Infant Toddler Meaningful Auditory Integration Scale (IT-MAIS); and the Meaningful Use of Speech Scales (MUSS). The study was conducted prior to activation and after three, nine, and 18 months of cochlear implant activation. The results of the LLAEP were compared with data from a hearing child matched by gender and chronological age. The results of the LLAEP of the child with the cochlear implant showed a gradual decrease in the latency of the P1 component after auditory stimulation (172 ms–134 ms). In the GASP, IT-MAIS, and MUSS, gradual development of listening skills and oral language was observed. The values of the LLAEP of the hearing child were as expected for chronological age (132 ms–128 ms). The use of different clinical instruments allows a better understanding of the auditory habilitation and rehabilitation process via CI. PMID:26881163

  4. Neural correlates of accelerated auditory processing in children engaged in music training.

    Science.gov (United States)

    Habibi, Assal; Cahn, B Rael; Damasio, Antonio; Damasio, Hanna

    2016-10-01

    Several studies comparing adult musicians and non-musicians have shown that music training is associated with brain differences. It is unknown, however, whether these differences result from lengthy musical training, from pre-existing biological traits, or from social factors favoring musicality. As part of an ongoing 5-year longitudinal study, we investigated the effects of a music training program on the auditory development of children over the course of two years, beginning at age 6-7. The training was group-based and inspired by El Sistema. We compared the children in the music group with two comparison groups of children of the same socio-economic background, one involved in sports training, another not involved in any systematic training. Prior to participating, children who began training in music did not differ from those in the comparison groups in any of the assessed measures. After two years, we now observe that children in the music group, but not in the two comparison groups, show an enhanced ability to detect changes in tonal environment and an accelerated maturity of auditory processing as measured by cortical auditory evoked potentials to musical notes. Our results suggest that music training may result in stimulus-specific brain changes in school-aged children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. A neural network model of normal and abnormal auditory information processing.

    Science.gov (United States)

    Du, X; Jansen, B H

    2011-08-01

    The ability of the brain to attenuate the response to irrelevant sensory stimulation is referred to as sensory gating. A gating deficiency has been reported in schizophrenia. To study the neural mechanisms underlying sensory gating, a neuroanatomically inspired model of auditory information processing has been developed. The mathematical model consists of lumped parameter modules representing the thalamus (TH), the thalamic reticular nucleus (TRN), auditory cortex (AC), and prefrontal cortex (PC). It was found that the membrane potential of the pyramidal cells in the PC module replicated auditory evoked potentials, recorded from the scalp of healthy individuals, in response to pure tones. Also, the model produced substantial attenuation of the response to the second of a pair of identical stimuli, just as seen in actual human experiments. We also tested the viewpoint that schizophrenia is associated with a deficit in prefrontal dopamine (DA) activity, which would lower the excitatory and inhibitory feedback gains in the AC and PC modules. Lowering these gains by less than 10% resulted in model behavior resembling the brain activity seen in schizophrenia patients, and replicated the reported gating deficits. The model suggests that the TRN plays a critical role in sensory gating, with the smaller response to a second tone arising from a reduction in inhibition of TH by the TRN. Copyright © 2011 Elsevier Ltd. All rights reserved.
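
    The lumped-parameter architecture described above is in the spirit of Jansen-Rit-type neural mass models. The sketch below integrates a single such module (one excitatory and one inhibitory feedback loop around a pyramidal population) with Euler steps; the parameter values and the single-module scope are standard published defaults used purely for illustration, not the authors' four-module TH-TRN-AC-PC model.

```python
# Hedged sketch: a single Jansen-Rit-style neural mass module (pyramidal cells
# with excitatory and inhibitory feedback), integrated with Euler steps. The
# parameters are standard published defaults for illustration only.
import numpy as np

A, B = 3.25, 22.0          # excitatory / inhibitory synaptic gains (mV)
a, b = 100.0, 50.0         # inverse synaptic time constants (1/s)
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, r, v0 = 2.5, 0.56, 6.0

def sigm(v):
    """Population firing rate as a sigmoid of mean membrane potential."""
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

dt, T = 1e-4, 2.0
n = int(T / dt)
y = np.zeros(6)                       # y0..y2 and their derivatives y3..y5
rng = np.random.default_rng(6)
out = np.empty(n)

for i in range(n):
    p = rng.uniform(120, 320)         # stochastic thalamic input (pulses/s)
    y0, y1, y2, y3, y4, y5 = y
    dy = np.array([
        y3, y4, y5,
        A * a * sigm(y1 - y2) - 2 * a * y3 - a**2 * y0,
        A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a**2 * y1,
        B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b**2 * y2,
    ])
    y = y + dt * dy
    out[i] = y1 - y2                  # EEG-like output: net pyramidal potential

print(f"Simulated {T} s; output range {out.min():.1f} to {out.max():.1f} mV")
```

    In a multi-module version, the gating effects described in the abstract would be modeled by coupling several such modules and scaling the feedback gains, for example to mimic reduced prefrontal dopamine.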

  6. Neural correlates of strategy use during auditory working memory in musicians and non-musicians.

    Science.gov (United States)

    Schulze, K; Mueller, K; Koelsch, S

    2011-01-01

    Working memory (WM) performance in humans can be improved by structuring and organizing the material to be remembered. For visual and verbal information, this process of structuring has been associated with the involvement of a prefrontal-parietal network, but for non-verbal auditory material, the brain areas that facilitate WM for structured information have remained elusive. Using functional magnetic resonance imaging, this study compared neural correlates underlying encoding and rehearsal of auditory WM for structured and unstructured material. Musicians and non-musicians performed a WM task on five-tone sequences that were either tonally structured (with all tones belonging to one tonal key) or tonally unstructured (atonal). Functional differences were observed for musicians (who are experts in the music domain), but not for non-musicians: the right pars orbitalis was activated more strongly in musicians during the encoding of unstructured (atonal) vs. structured (tonal) sequences. In addition, data for musicians showed that a lateral (pre)frontal-parietal network (including the right premotor cortex, right inferior precentral sulcus and left intraparietal sulcus) was activated during WM rehearsal of structured, as compared with unstructured, sequences. Our findings indicate that this network plays a role in strategy-based WM for non-verbal auditory information, corroborating previous results showing a similar network for strategy-based WM for visual and verbal information. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  7. Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex.

    Science.gov (United States)

    Henry, Molly J; Obleser, Jonas

    2013-01-01

    Natural auditory stimuli are characterized by slow fluctuations in amplitude and frequency. However, the degree to which the neural responses to slow amplitude modulation (AM) and frequency modulation (FM) are capable of conveying independent time-varying information, particularly with respect to speech communication, is unclear. In the current electroencephalography (EEG) study, participants listened to amplitude- and frequency-modulated narrow-band noises with a 3-Hz modulation rate, and the resulting neural responses were compared. Spectral analyses revealed similar spectral amplitude peaks for AM and FM at the stimulation frequency (3 Hz), but amplitude at the second harmonic frequency (6 Hz) was much higher for FM than for AM. Moreover, the phase delay of neural responses with respect to the full-band stimulus envelope was shorter for FM than for AM. Finally, the critical analysis involved classification of single trials as being in response to either AM or FM based on either phase or amplitude information. Time-varying phase, but not amplitude, was sufficient to accurately classify AM and FM stimuli based on single-trial neural responses. Taken together, the current results support the dissociable nature of cortical signatures of slow AM and FM. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals.
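
    As a toy version of the single-trial classification described above, the sketch below extracts the 3-Hz Fourier phase from each simulated trial and classifies AM vs FM with a nearest circular-centroid rule. The sampling rate, trial counts, phase offsets, and the simple classifier are assumptions standing in for the study's actual procedure.

```python
# Hedged sketch: classifying simulated single trials as AM- or FM-driven from
# the 3-Hz response phase, using a nearest circular-centroid rule. Data,
# sampling rate and classifier are illustrative assumptions.
import numpy as np

fs, dur, n = 250, 2.0, 80
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(7)

def trials(phase_lag):
    """Simulated 3-Hz responses with a condition-specific phase lag plus noise."""
    jitter = rng.normal(0, 0.4, (n, 1))
    return np.sin(2 * np.pi * 3 * t + phase_lag + jitter) + rng.normal(0, 1, (n, t.size))

am, fm = trials(0.0), trials(1.2)      # assumed phase difference between conditions

def phase_at_3hz(x):
    spec = np.fft.rfft(x, axis=-1)
    k = np.argmin(np.abs(np.fft.rfftfreq(x.shape[-1], 1 / fs) - 3.0))
    return np.angle(spec[..., k])

def circ_mean(ph):
    return np.angle(np.exp(1j * ph).mean())

train, test = slice(0, n // 2), slice(n // 2, n)
centroids = {"AM": circ_mean(phase_at_3hz(am[train])),
             "FM": circ_mean(phase_at_3hz(fm[train]))}

def classify(ph):
    return min(centroids, key=lambda c: np.abs(np.angle(np.exp(1j * (ph - centroids[c])))))

correct = [classify(p) == "AM" for p in phase_at_3hz(am[test])]
correct += [classify(p) == "FM" for p in phase_at_3hz(fm[test])]
print(f"Single-trial accuracy: {np.mean(correct):.2f}")
```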

  8. Neural induction from ES cells portrays default commitment but instructive maturation.

    Directory of Open Access Journals (Sweden)

    Nibedita Lenka

    Full Text Available Neural induction has remained a debated issue, pertaining to whether it is a mere default process or one that involves precise instructive cues. We have chosen the embryonic stem (ES) cell model to address this issue. In a devised monoculture strategy, the cell-cell interaction afforded by an optimum cell plating density could define the niche for efficient in vitro neurogenesis from ES cells. The medium plating density was found to be ideal for generating an optimum number of progenitors and also yielded about 80% mature neurons in a serum-free culture setup without any exogenous inducers. We could also demarcate and quantify the neural stem cells/progenitors among the heterogeneous cell population of differentiating ES cells using nestin intron II driven EGFP expression as a tool. One week post-plating was determined to be the critical time window for optimum neural progenitor generation from ES cells, which further helped us purify these cells and demonstrate their proliferation and multipotent differentiation potential. Seeding cells at varying densities, we uncovered an interesting paradoxical scenario that interlinked commitment and maturation: the initial plating density had a vital influence on neuronal maturation but not on specification, and secretory factors apparently played a key role in this process. Thus, neural specification appeared to be a default process independent of exogenous factors and cellular interaction. Conversely, a defined number of cells at the specification stage itself seemed critical to provide an autocrine/paracrine signaling threshold for the maturation process to materialize.

  9. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    Science.gov (United States)

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2017-08-15

    During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific type of visual information shapes neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performances of the classifiers were tested by leaving one participant at a time for testing and training the model using the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas were associated with acoustic features present in speech and music stimuli. Concurrent visual stimulus modulated activity in bilateral MTG (speech), lateral aspect of right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
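
    The sketch below is a rough analogue of the decoding scheme described above, using L1-regularized (sparsity-promoting) logistic regression with leave-one-subject-out cross-validation from scikit-learn in place of the Bayesian sparse priors. The voxel counts, labels, and data are simulated placeholders.

```python
# Hedged sketch: leave-one-subject-out decoding of audiovisual vs auditory
# trials from voxel patterns, using L1 (sparsity-promoting) logistic regression
# as a stand-in for the Bayesian sparse classifiers in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n_subjects, trials_per_subject, n_voxels = 16, 40, 500

X = rng.normal(0, 1, (n_subjects * trials_per_subject, n_voxels))
y = np.tile(np.repeat([0, 1], trials_per_subject // 2), n_subjects)  # 0 = auditory, 1 = audiovisual
groups = np.repeat(np.arange(n_subjects), trials_per_subject)
X[y == 1, :20] += 0.4          # a weak "audiovisual" signal in a few voxels

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Leave-one-subject-out accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```

    Training on all but one participant and testing on the held-out participant, as above, mirrors the generalization test reported in the abstract.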

  10. Neural correlates of distraction and conflict resolution for nonverbal auditory events.

    Science.gov (United States)

    Stewart, Hannah J; Amitay, Sygal; Alain, Claude

    2017-05-09

    In everyday situations auditory selective attention requires listeners to suppress task-irrelevant stimuli and to resolve conflicting information in order to make appropriate goal-directed decisions. Traditionally, these two processes (i.e. distractor suppression and conflict resolution) have been studied separately. In the present study we measured neuroelectric activity while participants performed a new paradigm in which both processes are quantified. In separate blocks of trials, participants indicated whether two sequential tones shared the same pitch or location, depending on the block's instruction. For the distraction measure, a positive component peaking at ~250 ms was found (a "distraction positivity"). Brain electrical source analysis of this component suggests different generators when listeners attended to frequency and location, with the distraction by location more posterior than the distraction by frequency, providing support for the dual-pathway theory. For the conflict resolution measure, a negative frontocentral component (270-450 ms) was found, which showed similarities with that of prior studies on auditory and visual conflict resolution tasks. The timing and distribution are consistent with two distinct neural processes, with suppression of task-irrelevant information occurring before conflict resolution. This new paradigm may prove useful in clinical populations to assess impairments in filtering out task-irrelevant information and/or resolving conflicting information.

  11. Intracerebral neural stem cell transplantation improved the auditory of mice with presbycusis.

    Science.gov (United States)

    Ren, Hongmiao; Chen, Jichuan; Wang, Yinan; Zhang, Shichang; Zhang, Bo

    2013-01-01

    Stem cell-based regenerative therapy is a potential cellular therapeutic strategy for patients with incurable brain diseases. Embryonic neural stem cells (NSCs) represent an attractive cell source in regenerative medicine strategies for the treatment of diseased brains. Here, we assess the capability of intracerebral embryonic NSC transplantation in C57BL/6J mice with presbycusis in vivo. Morphology analyses revealed that the neuronal rate of apoptosis was lower in the aged group (10 months of age) but not in the young group (2 months of age) after NSC transplantation, while the electrophysiological data suggest that the Auditory Brain Stem Response (ABR) threshold was significantly decreased in the aged group at 2 weeks and 3 weeks after transplantation. By contrast, there was no difference in the aged group at 4 weeks post-transplantation or in the young group at any time post-transplantation. Furthermore, immunofluorescence experiments showed that NSCs differentiated into neurons that engrafted and migrated within the brain, even to sites of lesions. Together, our results demonstrate that NSC transplantation improves hearing in C57BL/6J mice with presbycusis.

  12. GABAA receptors in visual and auditory cortex and neural activity changes during basic visual stimulation

    Directory of Open Access Journals (Sweden)

    Pengmin eQin

    2012-12-01

    Full Text Available Recent imaging studies have demonstrated that levels of resting GABA in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; the change from closed eyes to open also represents a simple visual stimulus, however, and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABAA receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) states, in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modelling of the haemodynamic response, followed by longer periods of EC and EO to allow the measuring of functional connectivity. The same subjects also underwent [18F]Flumazenil PET to measure GABAA receptor binding potentials. It was demonstrated that the local-to-global ratio of GABAA receptor binding potential in the visual cortex predicted the degree of changes in neural activity from EC to EO. This same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABAA receptor binding potential in the visual cortex also predicted the change in functional connectivity between visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABAA receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.

  13. Targeted neural network interventions for auditory hallucinations: Can TMS inform DBS?

    Science.gov (United States)

    Taylor, Joseph J; Krystal, John H; D'Souza, Deepak C; Gerrard, Jason Lee; Corlett, Philip R

    2017-09-29

    The debilitating and refractory nature of auditory hallucinations (AH) in schizophrenia and other psychiatric disorders has stimulated investigations into neuromodulatory interventions that target the aberrant neural networks associated with them. Internal or invasive forms of brain stimulation such as deep brain stimulation (DBS) are currently being explored for treatment-refractory schizophrenia. The process of developing and implementing DBS is limited by symptom clustering within psychiatric constructs as well as a scarcity of causal tools with which to predict response, refine targeting or guide clinical decisions. Transcranial magnetic stimulation (TMS), an external or non-invasive form of brain stimulation, has shown some promise as a therapeutic intervention for AH but remains relatively underutilized as an investigational probe of clinically relevant neural networks. In this editorial, we propose that TMS has the potential to inform DBS by adding individualized causal evidence to an evaluation process otherwise devoid of it in patients. Although there are significant limitations and safety concerns regarding DBS, the combination of TMS with computational modeling of neuroimaging and neurophysiological data could provide critical insights into more robust and adaptable network modulation. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Two-Photon Functional Imaging of the Auditory Cortex in Behaving Mice: From Neural Networks to Single Spines

    Directory of Open Access Journals (Sweden)

    Ruijie Li

    2018-04-01

    Full Text Available In vivo two-photon Ca2+ imaging is a powerful tool for recording neuronal activities during perceptual tasks and has been increasingly applied to behaving animals for acute or chronic experiments. However, the auditory cortex is not easily accessible to imaging because of the abundant temporal muscles, arteries around the ears and their lateral locations. Here, we report a protocol for two-photon Ca2+ imaging in the auditory cortex of head-fixed behaving mice. By using a custom-made head fixation apparatus and a head-rotated fixation procedure, we achieved two-photon imaging, in combination with targeted cell-attached recordings, of auditory cortical neurons in behaving mice. Using synthetic Ca2+ indicators, we recorded the Ca2+ transients at multiple scales, including neuronal populations, single neurons, dendrites and single spines, in auditory cortex during behavior. Furthermore, using genetically encoded Ca2+ indicators (GECIs), we monitored the neuronal dynamics over days throughout the process of associative learning. Therefore, we achieved two-photon functional imaging at multiple scales in the auditory cortex of behaving mice, which extends the toolbox for investigating the neural basis of audition-related behaviors.

  15. Neural Networks for Segregation of Multiple Objects: Visual Figure-Ground Separation and Auditory Pitch Perception.

    Science.gov (United States)

    Wyse, Lonce

    An important component of perceptual object recognition is the segmentation into coherent perceptual units of the "blooming buzzing confusion" that bombards the senses. The work presented herein develops neural network models of some key processes of pre-attentive vision and audition that serve this goal. A neural network model, called an FBF (Feature-Boundary-Feature) network, is proposed for automatic parallel separation of multiple figures from each other and their backgrounds in noisy images. Figure-ground separation is accomplished by iterating operations of a Boundary Contour System (BCS) that generates a boundary segmentation of a scene, and a Feature Contour System (FCS) that compensates for variable illumination and fills-in surface properties using boundary signals. A key new feature is the use of the FBF filling-in process for the figure-ground separation of connected regions, which are subsequently more easily recognized. The new CORT-X 2 model is a feed-forward version of the BCS that is designed to detect, regularize, and complete boundaries in up to 50 percent noise. It also exploits the complementary properties of on-cells and off-cells to generate boundary segmentations and to compensate for boundary gaps during filling-in. In the realm of audition, many sounds are dominated by energy at integer multiples, or "harmonics", of a fundamental frequency. For such sounds (e.g., vowels in speech), the individual frequency components fuse, so that they are perceived as one sound source with a pitch at the fundamental frequency. Pitch is integral to separating auditory sources, as well as to speaker identification and speech understanding. A neural network model of pitch perception called SPINET (SPatial PItch NETwork) is developed and used to simulate a broader range of perceptual data than previous spectral models. The model employs a bank of narrowband filters as a simple model of basilar membrane mechanics, spectral on-center off-surround competitive

  16. Impaired neuronal maturation of hippocampal neural progenitor cells in mice lacking CRAF.

    Science.gov (United States)

    Pfeiffer, Verena; Götz, Rudolf; Camarero, Guadelupe; Heinsen, Helmut; Blum, Robert; Rapp, Ulf Rüdiger

    2018-01-01

    RAF kinases are major constituents of the mitogen-activated signaling pathway, regulating cell proliferation, differentiation and cell survival of many cell types, including neurons. In mammals, the family of RAF proteins consists of three members, ARAF, BRAF, and CRAF. Ablation of CRAF kinase in inbred mouse strains causes major developmental defects during fetal growth and embryonic or perinatal lethality. Heterozygous germline mutations in CRAF result in Noonan syndrome, which is characterized by neurocognitive impairment that may involve hippocampal physiology. The role of CRAF signaling during hippocampal development and generation of new postnatal hippocampal granule neurons has not been examined and may provide novel insight into the cause of hippocampal dysfunction in Noonan syndrome. In this study, by crossing the CRAF deficiency onto CD-1 outbred mice, a CRAF mouse model was established that enabled us to investigate the interplay of neural progenitor proliferation and postmitotic differentiation during adult neurogenesis in the hippocampus. Although the general morphology of the hippocampus was unchanged, CRAF-deficient mice displayed a smaller granule cell layer (GCL) volume at postnatal day 30 (P30). In CRAF-deficient mice a substantial number of abnormal, chromophilic, fast-dividing cells were found in the subgranular zone (SGZ) and hilus of the dentate gyrus (DG), indicating that CRAF signaling contributes to hippocampal neural progenitor proliferation. CRAF-deficient neural progenitor cells showed an increased cell death rate and reduced neuronal maturation. These results indicate that CRAF function affects postmitotic neural cell differentiation and point to a critical role of the CRAF-dependent growth factor signaling pathway in the postmitotic development of adult-born neurons.

  17. Cognitive flexibility modulates maturation and music-training-related changes in neural sound discrimination.

    Science.gov (United States)

    Saarikivi, Katri; Putkinen, Vesa; Tervaniemi, Mari; Huotilainen, Minna

    2016-07-01

    Previous research has demonstrated that musicians show superior neural sound discrimination when compared to non-musicians, and that these changes emerge with accumulation of training. Our aim was to investigate whether individual differences in executive functions predict training-related changes in neural sound discrimination. We measured event-related potentials induced by sound changes coupled with tests for executive functions in musically trained and non-trained children aged 9-11 years and 13-15 years. High performance in a set-shifting task, indexing cognitive flexibility, was linked to enhanced maturation of neural sound discrimination in both musically trained and non-trained children. Specifically, well-performing musically trained children already showed large mismatch negativity (MMN) responses at a young age as well as at an older age, indicating accurate sound discrimination. In contrast, the musically trained low-performing children still showed an increase in MMN amplitude with age, suggesting that they were behind their high-performing peers in the development of sound discrimination. In the non-trained group, in turn, only the high-performing children showed evidence of an age-related increase in MMN amplitude, and the low-performing children showed a small MMN with no age-related change. These latter results suggest an advantage in MMN development also for high-performing non-trained individuals. For the P3a amplitude, there was an age-related increase only in the children who performed well in the set-shifting task, irrespective of music training, indicating enhanced attention-related processes in these children. Thus, the current study provides the first evidence that, in children, cognitive flexibility may influence age-related and training-related plasticity of neural sound discrimination. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Neural Segregation of Concurrent Speech: Effects of Background Noise and Reverberation on Auditory Scene Analysis in the Ventral Cochlear Nucleus.

    Science.gov (United States)

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M

    2016-01-01

    Concurrent complex sounds (e.g., two voices speaking at once) are perceptually disentangled into separate "auditory objects". This neural processing often occurs in the presence of acoustic-signal distortions from noise and reverberation (e.g., in a busy restaurant). A difference in periodicity between sounds is a strong segregation cue under quiet, anechoic conditions. However, noise and reverberation exert differential effects on speech intelligibility under "cocktail-party" listening conditions. Previous neurophysiological studies have concentrated on understanding auditory scene analysis under ideal listening conditions. Here, we examine the effects of noise and reverberation on periodicity-based neural segregation of concurrent vowels /a/ and /i/, in the responses of single units in the guinea-pig ventral cochlear nucleus (VCN): the first processing station of the auditory brain stem. In line with human psychoacoustic data, we find reverberation significantly impairs segregation when vowels have an intonated pitch contour, but not when they are spoken on a monotone. In contrast, noise impairs segregation independent of intonation pattern. These results are informative for models of speech processing under ecologically valid listening conditions, where noise and reverberation abound.

  19. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    Directory of Open Access Journals (Sweden)

    Karin Petrini

    Full Text Available In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  20. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi Tateno

    2013-11-01

    Full Text Available To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.

  1. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  2. Probing neural mechanisms underlying auditory stream segregation in humans by transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Deike, Susann; Deliano, Matthias; Brechmann, André

    2016-10-01

    One hypothesis concerning the neural underpinnings of auditory streaming states that frequency tuning of tonotopically organized neurons in primary auditory fields in combination with physiological forward suppression is necessary for the separation of representations of high-frequency A and low-frequency B tones. The extent of spatial overlap between the tonotopic activations of A and B tones is thought to underlie the perceptual organization of streaming sequences into one coherent or two separate streams. The present study attempts to interfere with these mechanisms by transcranial direct current stimulation (tDCS) and to probe behavioral outcomes reflecting the perception of ABAB streaming sequences. We hypothesized that tDCS by modulating cortical excitability causes a change in the separateness of the representations of A and B tones, which leads to a change in the proportions of one-stream and two-stream percepts. To test this, 22 subjects were presented with ambiguous ABAB sequences of three different frequency separations (∆F) and had to decide on their current percept after receiving sham, anodal, or cathodal tDCS over the left auditory cortex. We could confirm our hypothesis at the most ambiguous ∆F condition of 6 semitones. For anodal compared with sham and cathodal stimulation, we found a significant decrease in the proportion of two-stream perception and an increase in the proportion of one-stream perception. The results demonstrate the feasibility of using tDCS to probe mechanisms underlying auditory streaming through the use of various behavioral measures. Moreover, this approach allows one to probe the functions of auditory regions and their interactions with other processing stages. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    Science.gov (United States)

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. Copyright © 2015 the authors 0270-6474/15/357256-08$15.00/0.
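
    The "linear regression methods" mentioned above for extracting a neural signature of a stochastically modulated stimulus feature are commonly implemented as a lagged, temporal-response-function-style ridge regression. The sketch below illustrates that general approach on synthetic signals; it is not the authors' analysis pipeline, and the sampling rate, lag range and ridge parameter are arbitrary assumptions.

```python
import numpy as np

def lagged_design(stimulus, max_lag):
    """Build a design matrix whose columns are time-lagged copies of the stimulus."""
    n = stimulus.size
    X = np.zeros((n, max_lag + 1))
    for lag in range(max_lag + 1):
        X[lag:, lag] = stimulus[:n - lag] if lag else stimulus
    return X

def fit_trf(stimulus, eeg, max_lag, ridge=1.0):
    """Ridge regression of one EEG channel onto lagged stimulus values."""
    X = lagged_design(stimulus, max_lag)
    return np.linalg.solve(X.T @ X + ridge * np.eye(max_lag + 1), X.T @ eeg)

# Hypothetical example: 64 Hz signals, response = convolved, noisy copy of the regressor
rng = np.random.default_rng(1)
fs = 64
coherence = rng.normal(size=fs * 120)                 # stochastic coherence modulation
kernel = np.exp(-np.arange(0, 0.3, 1 / fs) / 0.05)    # assumed ~50 ms response decay
eeg = np.convolve(coherence, kernel)[:coherence.size] + rng.normal(size=coherence.size)
trf = fit_trf(coherence, eeg, max_lag=int(0.3 * fs))
print("estimated response peaks at lag (ms):", 1000 * np.argmax(trf) / fs)
```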

  4. Neural biomarkers for dyslexia, ADHD and ADD in the auditory cortex of children

    OpenAIRE

    Bettina Serrallach; Christine Gross; Valdis Bernhofs; Dorte Engelmann; Jan Benner; Nadine Gündert; Maria Blatow; Martina Wengenroth; Angelika Seitz; Monika Brunner; Stefan Seither; Richard Parncutt; Peter Schneider

    2016-01-01

    Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N=147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an ...

  5. Neural biomarkers for dyslexia, ADHD and ADD in the auditory cortex of children

    Directory of Open Access Journals (Sweden)

    Bettina Serrallach

    2016-07-01

    Full Text Available Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N=147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method allowed not only a clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two-thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.

  6. Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study

    Directory of Open Access Journals (Sweden)

    Andrew R Dykstra

    2016-10-01

    Full Text Available In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception, itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.

  7. [Presbycusis: neural degeneration and aging on the auditory receptor of C57/BL6J mice].

    Science.gov (United States)

    Castillo, E; Carricondo, F; Bartolomé, M V; Vicente-Torres, A; Poch Broto, J; Gil-Loyzaga, P

    2006-11-01

    Presbycusis is a progressive hearing impairment associated with aging, characterized by hearing loss and degeneration of cochlear structures. In this paper we analyze the effects of aging on the auditory system of C57/BL6J mice with electrophysiological and morphological studies. To this end, the auditory potentials of mice aged 1, 3, 6, 9, 12, 15, 18, 21 and 24 months were recorded, and the morphology of the cochlea was then analyzed. Auditory potentials revealed an increase in wave latencies, as well as a decrease in their amplitudes, during aging. Morphological results showed total degeneration of the organ of Corti, which was replaced by a flat epithelial layer, and a complete absence of hair cells.

  8. Neural correlates of auditory recognition memory in the primate dorsal temporal pole

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany

    2013-01-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324

  9. Neural correlates of auditory recognition memory in the primate dorsal temporal pole.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2014-02-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects.

  10. Left-Right Asymmetry of Maturation Rates in Human Embryonic Neural Development.

    Science.gov (United States)

    de Kovel, Carolien G F; Lisgo, Steven; Karlebach, Guy; Ju, Jia; Cheng, Gang; Fisher, Simon E; Francks, Clyde

    2017-08-01

    Left-right asymmetry is a fundamental organizing feature of the human brain, and neuropsychiatric disorders such as schizophrenia sometimes involve alterations of brain asymmetry. As early as 8 weeks postconception, the majority of human fetuses move their right arms more than their left arms, but because nerve fiber tracts are still descending from the forebrain at this stage, spinal-muscular asymmetries are likely to play an important developmental role. We used RNA sequencing to measure gene expression levels in the left and right spinal cords, and the left and right hindbrains, of 18 postmortem human embryos aged 4 to 8 weeks postconception. Genes showing embryonic lateralization were tested for an enrichment of signals in genome-wide association data for schizophrenia. The left side of the embryonic spinal cord was found to mature faster than the right side. Both sides transitioned from transcriptional profiles associated with cell division and proliferation at earlier stages to neuronal differentiation and function at later stages, but the two sides were not in synchrony (p = 2.2 E-161). The hindbrain showed a left-right mirrored pattern compared with the spinal cord, consistent with the well-known crossing over of function between these two structures. Genes that showed lateralization in the embryonic spinal cord were enriched for association signals with schizophrenia (p = 4.3 E-05). These are the earliest stage left-right differences of human neural development ever reported. Disruption of the lateralized developmental program may play a role in the genetic susceptibility to schizophrenia. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  11. Self vs. other: neural correlates underlying agent identification based on unimodal auditory information as revealed by electrotomography (sLORETA).

    Science.gov (United States)

    Justen, C; Herbert, C; Werner, K; Raab, M

    2014-02-14

    Recent neuroscientific studies have identified activity changes in an extensive cerebral network consisting of medial prefrontal cortex, precuneus, temporo-parietal junction, and temporal pole during the perception and identification of self- and other-generated stimuli. Because this network is supposed to be engaged in tasks which require agent identification, it has been labeled the evaluation network (e-network). The present study used self- versus other-generated movement sounds (long jumps) and electroencephalography (EEG) in order to unravel the neural dynamics of agent identification for complex auditory information. Participants (N=14) performed an auditory self-other identification task with EEG. Data were then subjected to a subsequent standardized low-resolution brain electromagnetic tomography (sLORETA) analysis (source localization analysis). Differences between conditions were assessed using t-statistics (corrected for multiple testing) on the normalized and log-transformed current density values of the sLORETA images. Three-dimensional sLORETA source localization analysis revealed cortical activations in brain regions mostly associated with the e-network, especially in the medial prefrontal cortex (bilaterally in the alpha-1-band and right-lateralized in the gamma-band) and the temporo-parietal junction (right hemisphere in the alpha-1-band). Taken together, the findings are partly consistent with previous functional neuroimaging studies investigating unimodal visual or multimodal agent identification tasks (cf. e-network) and extend them to the auditory domain. Cortical activations in brain regions of the e-network seem to have functional relevance, especially the significantly higher cortical activation in the right medial prefrontal cortex. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. GABA(A) receptors in visual and auditory cortex and neural activity changes during basic visual stimulation.

    Science.gov (United States)

    Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg

    2012-01-01

    Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimulus; the change from closed eyes to open also represents a simple visual stimulus, however, and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) state in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measuring of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. It was demonstrated that the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of changes in neural activity from EC to EO. This same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.

  13. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

    that the use of spectral cues remained plausible. Simulations of auditory-nerve representations of the complex tones further suggested that a spectrotemporal mechanism combining precise timing information across auditory channels might best account for the behavioral data. Overall, this work provides insights...... investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifically, listeners without binaural pitch sensation showed signs of retrocochlear disorders. Despite adverse effects...... of reduced frequency selectivity on binaural pitch perception, the ability to accurately process the temporal fine structure (TFS) of sounds at the output of the cochlear filters was found to be essential for perceiving binaural pitch. Monaural TFS processing also played a major and independent role...

  14. Neural Hyperactivity of the Central Auditory System in Response to Peripheral Damage

    Directory of Open Access Journals (Sweden)

    Yi Zhao

    2016-01-01

    Full Text Available It is increasingly appreciated that cochlear pathology is accompanied by adaptive responses in the central auditory system. The cause of cochlear pathology varies widely, and it seems that few commonalities can be drawn. In fact, despite intricate internal neuroplasticity and diverse external symptoms, several classical injury models provide a feasible path to locate responses to different peripheral cochlear lesions. In these cases, hair cell damage may lead to considerable hyperactivity in the central auditory pathways, mediated by a reduction in inhibition, which may underlie some clinical symptoms associated with hearing loss, such as tinnitus. Homeostatic plasticity, the most discussed and acknowledged mechanism in recent years, is most likely responsible for excited central activity following cochlear damage.

  15. Low-level neural auditory discrimination dysfunctions in specific language impairment—A review on mismatch negativity findings

    Directory of Open Access Journals (Sweden)

    Teija Kujala

    2017-12-01

    Full Text Available In specific language impairment (SLI), there is a delay in the child's oral language skills when compared with nonverbal cognitive abilities. The problems typically relate to phonological and morphological processing and word learning. This article reviews studies which have used mismatch negativity (MMN) in investigating low-level neural auditory dysfunctions in this disorder. With MMN, it is possible to tap the accuracy of neural sound discrimination and sensory memory functions. These studies have found smaller response amplitudes and longer latencies for speech and non-speech sound changes in children with SLI than in typically developing children, suggesting impaired and slow auditory discrimination in SLI. Furthermore, they suggest shortened sensory memory duration and vulnerability of the sensory memory to masking effects. Importantly, some studies reported associations between MMN parameters and language test measures. In addition, it was found that language intervention can influence the abnormal MMN in children with SLI, enhancing its amplitude. These results suggest that the MMN can shed light on the neural basis of various auditory and memory impairments in SLI, which are likely to influence speech perception. Keywords: Specific language impairment, Auditory processing, Mismatch negativity (MMN)

  16. Neural responses in the primary auditory cortex of freely behaving cats while discriminating fast and slow click-trains.

    Science.gov (United States)

    Dong, Chao; Qin, Ling; Liu, Yongchun; Zhang, Xinan; Sato, Yu

    2011-01-01

    Repeated acoustic events are ubiquitous temporal features of natural sounds. To reveal the neural representation of the sound repetition rate, a number of electrophysiological studies have been conducted on various mammals and it has been proposed that both the spike-time and firing rate of primary auditory cortex (A1) neurons encode the repetition rate. However, previous studies rarely examined how the experimental animals perceive the difference in the sound repetition rate, and a caveat to these experiments is that they compared physiological data obtained from animals with psychophysical data obtained from humans. In this study, for the first time, we directly investigated acoustic perception and the underlying neural mechanisms in the same experimental animal by examining spike activities in the A1 of free-moving cats while performing a Go/No-go task to discriminate the click-trains at different repetition rates (12.5-200 Hz). As reported by previous studies on passively listening animals, A1 neurons showed both synchronized and non-synchronized responses to the click-trains. We further found that the neural performance estimated from the precise temporal information of synchronized units was good enough to distinguish all 16.7-200 Hz from the 12.5 Hz repetition rate; however, the cats showed declining behavioral performance with the decrease of the target repetition rate, indicating an increase of difficulty in discriminating two slower click-trains. Such behavioral performance was well explained by the firing rate of some synchronized and non-synchronized units. Trial-by-trial analysis indicated that A1 activity was not affected by the cat's judgment of behavioral response. Our results suggest that the main function of A1 is to effectively represent temporal signals using both spike timing and firing rate, while the cats may read out the rate-coding information to perform the task in this experiment.
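
    A standard way to quantify how tightly spike timing in "synchronized" units follows a click train is the vector strength. The sketch below computes it for synthetic spike trains; it is offered as a generic illustration of spike-timing analysis, not the exact measure used in this study.

```python
import numpy as np

def vector_strength(spike_times, click_rate):
    """Vector strength of spike synchronization to a periodic click train.

    spike_times : spike times in seconds
    click_rate  : click repetition rate in Hz
    Returns a value between 0 (no phase locking) and 1 (perfect locking).
    """
    phases = 2 * np.pi * click_rate * np.asarray(spike_times)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(spike_times)

# Hypothetical spike trains: one locked to a 50 Hz click train, one uniformly random
rng = np.random.default_rng(2)
locked = np.arange(0, 1, 1 / 50.0) + rng.normal(0, 0.002, 50)   # small jitter per click
random = np.sort(rng.uniform(0, 1, 50))
print("locked VS:", round(vector_strength(locked, 50.0), 2))
print("random VS:", round(vector_strength(random, 50.0), 2))
```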

  17. Neural responses in the primary auditory cortex of freely behaving cats while discriminating fast and slow click-trains.

    Directory of Open Access Journals (Sweden)

    Chao Dong

    Full Text Available Repeated acoustic events are ubiquitous temporal features of natural sounds. To reveal the neural representation of the sound repetition rate, a number of electrophysiological studies have been conducted on various mammals and it has been proposed that both the spike-time and firing rate of primary auditory cortex (A1) neurons encode the repetition rate. However, previous studies rarely examined how the experimental animals perceive the difference in the sound repetition rate, and a caveat to these experiments is that they compared physiological data obtained from animals with psychophysical data obtained from humans. In this study, for the first time, we directly investigated acoustic perception and the underlying neural mechanisms in the same experimental animal by examining spike activities in the A1 of free-moving cats while performing a Go/No-go task to discriminate the click-trains at different repetition rates (12.5-200 Hz). As reported by previous studies on passively listening animals, A1 neurons showed both synchronized and non-synchronized responses to the click-trains. We further found that the neural performance estimated from the precise temporal information of synchronized units was good enough to distinguish all 16.7-200 Hz from the 12.5 Hz repetition rate; however, the cats showed declining behavioral performance with the decrease of the target repetition rate, indicating an increase of difficulty in discriminating two slower click-trains. Such behavioral performance was well explained by the firing rate of some synchronized and non-synchronized units. Trial-by-trial analysis indicated that A1 activity was not affected by the cat's judgment of behavioral response. Our results suggest that the main function of A1 is to effectively represent temporal signals using both spike timing and firing rate, while the cats may read out the rate-coding information to perform the task in this experiment.

  18. Neural representation in the auditory midbrain of the envelope of vocalizations based on a peripheral ear model

    Directory of Open Access Journals (Sweden)

    Thilo Rode

    2013-10-01

    Full Text Available The auditory midbrain implant (AMI) consists of a single shank array (20 sites) for stimulation along the tonotopic axis of the central nucleus of the inferior colliculus (ICC) and has been safely implanted in deaf patients who cannot benefit from a cochlear implant (CI). The AMI improves lip-reading abilities and environmental awareness in the implanted patients. However, the AMI cannot achieve the high levels of speech perception possible with the CI. It appears the AMI can transmit sufficient spectral cues but with limited temporal cues required for speech understanding. Currently, the AMI uses a CI-based strategy, which was originally designed to stimulate each frequency region along the cochlea with amplitude-modulated pulse trains matching the envelope of the bandpass-filtered sound components. However, it is unclear if this type of stimulation with only a single site within each frequency lamina of the ICC can elicit sufficient temporal cues for speech perception. At least speech understanding in quiet is still possible with envelope cues as low as 50 Hz. Therefore, we investigated how ICC neurons follow the bandpass-filtered envelope structure of natural stimuli in ketamine-anesthetized guinea pigs. We identified a subset of ICC neurons that could closely follow the envelope structure (up to ~100 Hz) of a diverse set of species-specific calls, which was revealed by using a peripheral ear model to estimate the true bandpass-filtered envelopes observed by the brain. Although previous studies have suggested a complex neural transformation from the auditory nerve to the ICC, our data suggest that the brain maintains a robust temporal code in a subset of ICC neurons matching the envelope structure of natural stimuli. Clinically, these findings suggest that a CI-based strategy may still be effective for the AMI if the appropriate neurons are entrained to the envelope of the acoustic stimulus and can transmit sufficient temporal cues to higher
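
    A much-simplified stand-in for a peripheral ear model is a band-pass filter followed by Hilbert-envelope extraction, which yields the band-limited envelope that midbrain neurons might follow. The sketch below uses scipy.signal on a synthetic amplitude-modulated tone; the filter type, order and cutoff values are assumptions, and the actual model used in the study is considerably more detailed.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass_envelope(signal, fs, f_lo, f_hi, order=4):
    """Band-pass filter a sound and return the Hilbert envelope of that band."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, signal)
    return np.abs(hilbert(band))

# Hypothetical vocalization stand-in: a 2 kHz carrier with a 60 Hz envelope
fs = 44100
t = np.arange(0, 0.5, 1 / fs)
call = (1 + 0.8 * np.sin(2 * np.pi * 60 * t)) * np.sin(2 * np.pi * 2000 * t)
env = bandpass_envelope(call, fs, 1500, 2500)
print("envelope modulation depth ~", round((env.max() - env.min()) / env.max(), 2))
```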

  19. Influence of auditory attention on sentence recognition captured by the neural phase.

    Science.gov (United States)

    Müller, Jana Annina; Kollmeier, Birger; Debener, Stefan; Brand, Thomas

    2018-03-07

    The aim of this study was to investigate whether attentional influences on speech recognition are reflected in the neural phase entrained by an external modulator. Sentences were presented in 7 Hz sinusoidally modulated noise while the neural response to that modulation frequency was monitored by electroencephalogram (EEG) recordings in 21 participants. We implemented a selective attention paradigm including three different attention conditions while keeping physical stimulus parameters constant. The participants' task was either to repeat the sentence as accurately as possible (speech recognition task), to count the number of decrements implemented in modulated noise (decrement detection task), or to do both (dual task), while the EEG was recorded. Behavioural analysis revealed reduced performance in the dual task condition for decrement detection, possibly reflecting limited cognitive resources. EEG analysis revealed no significant differences in power for the 7 Hz modulation frequency, but an attention-dependent phase difference between tasks. Further phase analysis revealed a significant difference 500 ms after sentence onset between trials with correct and incorrect responses for speech recognition, indicating that speech recognition performance and the neural phase are linked via selective attention mechanisms, at least shortly after sentence onset. However, the neural phase effects identified were small and await further investigation. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
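
    Entrainment to a 7 Hz modulator of the kind described above is typically summarized by the per-trial phase at the modulation frequency and its consistency across trials (inter-trial phase coherence). The sketch below shows one generic way to compute these from epoched single-channel data; the sampling rate, epoch length and trial count are illustrative assumptions, not parameters of the study.

```python
import numpy as np

def phase_at_frequency(epochs, fs, freq):
    """Phase angle of each trial at a target frequency via the DFT.

    epochs : array (n_trials, n_samples) from one channel
    """
    n = epochs.shape[1]
    spectrum = np.fft.rfft(epochs, axis=1)
    bin_idx = int(round(freq * n / fs))     # DFT bin closest to the target frequency
    return np.angle(spectrum[:, bin_idx])

def inter_trial_phase_coherence(phases):
    """Length of the mean resultant vector of the per-trial phases (0..1)."""
    return np.abs(np.mean(np.exp(1j * phases)))

# Hypothetical data: trials phase-locked to a 7 Hz modulator plus noise
rng = np.random.default_rng(3)
fs, dur, n_trials = 250, 2.0, 60
t = np.arange(0, dur, 1 / fs)
epochs = np.sin(2 * np.pi * 7 * t) + rng.normal(0, 1.0, size=(n_trials, t.size))
ph = phase_at_frequency(epochs, fs, 7.0)
print("ITPC at 7 Hz:", round(inter_trial_phase_coherence(ph), 2))
```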

  20. Maturation of the auditory system in clinically normal puppies as reflected by the brain stem auditory-evoked potential wave V latency-intensity curve and rarefaction-condensation differential potentials.

    Science.gov (United States)

    Poncelet, L C; Coppens, A G; Meuris, S I; Deltenre, P F

    2000-11-01

    To evaluate auditory maturation in puppies. Ten clinically normal Beagle puppies. Puppies were examined repeatedly from days 11 to 36 after birth (8 measurements). Click-evoked brain stem auditory-evoked potentials (BAEP) were obtained in response to rarefaction and condensation click stimuli from 90 dB normal hearing level to wave V threshold, using steps of 10 dB. Responses were added, providing an equivalent to alternate polarity clicks, and subtracted, providing the rarefaction-condensation differential potential (RCDP). Steps of 5 dB were used to determine thresholds of RCDP and wave V. Slope of the low-intensity segment of the wave V latency-intensity curve was calculated. The intensity range at which RCDP could not be recorded (ie, pre-RCDP range) was calculated by subtracting the threshold of wave V from the threshold of RCDP. Slope of the wave V latency-intensity curve low-intensity segment evolved with age, changing from (mean +/- SD) -90.8 +/- 41.6 to -27.8 +/- 4.1 µs/dB. Similar results were obtained from days 23 through 36. The pre-RCDP range diminished as puppies became older, decreasing from 40.0 +/- 7.5 to 20.5 +/- 6.4 dB. Changes in slope of the latency-intensity curve with age suggest enlargement of the audible range of frequencies toward high frequencies up to the third week after birth. Decrease in the pre-RCDP range may indicate an increase of the audible range of frequencies toward low frequencies. Age-related reference values will assist clinicians in detecting hearing loss in puppies.
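
    The slope of the wave V latency-intensity curve reported above is, in essence, a least-squares line fitted to latency as a function of stimulus intensity over the low-intensity segment. The sketch below shows that calculation on made-up values chosen only to land near the mature-slope figure quoted in the abstract; it is not the authors' data or software.

```python
import numpy as np

def latency_intensity_slope(intensities_db, latencies_us):
    """Least-squares slope (microseconds per dB) of a wave V latency-intensity curve."""
    slope, _intercept = np.polyfit(intensities_db, latencies_us, deg=1)
    return slope

# Hypothetical low-intensity segment for a mature puppy (values are illustrative only)
intensities = np.array([20, 30, 40, 50])          # dB normal hearing level
latencies = np.array([6800, 6520, 6250, 5980])    # wave V latency in microseconds
print("slope:", round(latency_intensity_slope(intensities, latencies), 1), "us/dB")
```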

  1. A realistic neural mass model of the cortex with laminar-specific connections and synaptic plasticity - evaluation with auditory habituation.

    Directory of Open Access Journals (Sweden)

    Peng Wang

    Full Text Available In this work we propose a biologically realistic local cortical circuit model (LCCM), based on neural masses, that incorporates important aspects of the functional organization of the brain that have not been covered by previous models: (1) activity-dependent plasticity of excitatory synaptic couplings via depleting and recycling of neurotransmitters and (2) realistic inter-laminar dynamics via laminar-specific distribution of and connections between neural populations. The potential of the LCCM was demonstrated by accounting for the process of auditory habituation. The model parameters were specified using Bayesian inference. It was found that: (1) besides the major serial excitatory information pathway (layer 4 to layer 2/3 to layer 5/6), there exists a parallel "short-cut" pathway (layer 4 to layer 5/6), (2) the excitatory signal flow from the pyramidal cells to the inhibitory interneurons seems to be more intra-laminar while, in contrast, the inhibitory signal flow from inhibitory interneurons to the pyramidal cells seems to be both intra- and inter-laminar, and (3) the habituation rates of the connections are unsymmetrical: forward connections (from layer 4 to layer 2/3) are more strongly habituated than backward connections (from layer 5/6 to layer 4). Our evaluation demonstrates that the novel features of the LCCM are of crucial importance for mechanistic explanations of brain function. The incorporation of these features into a mass model makes them applicable to modeling based on macroscopic data (like EEG or MEG), which are usually available in human experiments. Our LCCM is therefore a valuable building block for future realistic models of human cognitive function.
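
    To give a flavor of the ingredients named above (a neural-mass description plus activity-dependent depletion and recycling of neurotransmitter resources), the sketch below simulates a single excitatory population driven through a depressing synapse. It is a deliberately minimal, generic construction, not the LCCM itself, and every parameter value is an arbitrary assumption.

```python
import numpy as np

def sigmoid(v, v0=6.0, r=0.56, e0=2.5):
    """Mean firing rate of a population as a sigmoid of its membrane potential."""
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

def simulate(duration=2.0, dt=1e-3, drive=120.0, tau_rec=0.5, u_use=0.3):
    """One excitatory neural mass driven through a depressing synapse (Euler scheme)."""
    n = int(duration / dt)
    v = np.zeros(n)      # mean postsynaptic potential (mV)
    z = np.zeros(n)      # its time derivative
    res = np.ones(n)     # fraction of available synaptic resources
    A, a = 3.25, 100.0   # synaptic gain (mV) and inverse time constant (1/s)
    for k in range(n - 1):
        eff_input = drive * u_use * res[k]   # presynaptic drive scaled by resources
        # second-order (alpha-kernel) synaptic dynamics
        z[k + 1] = z[k] + dt * (A * a * eff_input - 2 * a * z[k] - a ** 2 * v[k])
        v[k + 1] = v[k] + dt * z[k]
        # resources are consumed by presynaptic activity and recover with tau_rec
        res[k + 1] = res[k] + dt * ((1 - res[k]) / tau_rec - u_use * res[k] * drive / 20.0)
    return v, res

v, res = simulate()
print("early vs late response (mV):", round(float(v[200]), 2), round(float(v[-1]), 2))
print("remaining synaptic resources:", round(float(res[-1]), 2))
print("late output firing rate (Hz):", round(float(sigmoid(v[-1])), 2))
```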

  2. Neural representation of calling songs and their behavioral relevance in the grasshopper auditory system

    Directory of Open Access Journals (Sweden)

    Gundula Meckenhäuser

    2014-12-01

    Full Text Available Acoustic communication plays a key role for mate attraction in grasshoppers. Males use songs to advertise themselves to females. Females evaluate the song pattern, a repetitive structure of sound syllables separated by short pauses, to recognize a conspecific male and as a proxy for its fitness. In their natural habitat females often receive songs with degraded temporal structure. Perturbations may, for example, result from the overlap with other songs. We studied the response behavior of females to songs that show different signal degradations. A perturbation of an otherwise attractive song at later positions in the syllable diminished the behavioral response, whereas the same perturbation at the onset of a syllable did not affect song attractiveness. We applied naïve Bayes classifiers to the spike trains of identified neurons in the auditory pathway to explore how sensory evidence about the acoustic stimulus and its attractiveness is represented in the neuronal responses. We find that populations of three or more neurons were sufficient to reliably decode the acoustic stimulus and to predict its behavioral relevance from the single-trial integrated firing rate. A simple model of decision making simulates the female response behavior. It computes for each syllable the likelihood for the presence of an attractive song pattern as evidenced by the population firing rate. Integration across syllables allows the likelihood to reach a decision threshold and to elicit the behavioral response. The close match between model performance and animal behavior shows that a spike rate code is sufficient to enable song pattern recognition.
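
    The naive Bayes decoding of song attractiveness from population firing rates described above can be illustrated with an off-the-shelf Gaussian naive Bayes classifier. The sketch below runs cross-validated decoding on synthetic firing-rate features; the neuron count, rates and class labels are invented for illustration and do not reproduce the study's analysis.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Hypothetical single-trial firing rates (spikes/s) of three auditory neurons for
# "attractive" (1) vs "degraded" (0) song presentations -- synthetic data only.
rng = np.random.default_rng(4)
n_trials = 200
attractive = rng.normal(loc=[40, 25, 60], scale=8, size=(n_trials, 3))
degraded = rng.normal(loc=[30, 20, 45], scale=8, size=(n_trials, 3))
X = np.vstack([attractive, degraded])
y = np.concatenate([np.ones(n_trials), np.zeros(n_trials)])

clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated decoding accuracy:", round(scores.mean(), 2))
```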

  3. White blood cell differential count of maturation stages in bone marrow smear using dual-stage convolutional neural networks.

    Science.gov (United States)

    Choi, Jin Woo; Ku, Yunseo; Yoo, Byeong Wook; Kim, Jung-Ah; Lee, Dong Soon; Chai, Young Jun; Kong, Hyoun-Joong; Kim, Hee Chan

    2017-01-01

    The white blood cell differential count of the bone marrow provides information concerning the distribution of immature and mature cells within maturation stages. The results of such examinations are important for the diagnosis of various diseases and for follow-up care after chemotherapy. However, manual, labor-intensive methods to determine the differential count lead to inter- and intra-variations among the results obtained by hematologists. Therefore, an automated system to conduct the white blood cell differential count is highly desirable, but several difficulties hinder progress. There are variations in the white blood cells of each maturation stage, small inter-class differences within each stage, and variations in images because of the different acquisition and staining processes. Moreover, a large number of classes need to be classified for bone marrow smear analysis, and the high density of touching cells in bone marrow smears renders difficult the segmentation of single cells, which is crucial to traditional image processing and machine learning. Few studies have attempted to discriminate bone marrow cells, and even these have either discriminated only a few classes or yielded insufficient performance. In this study, we propose an automated white blood cell differential counting system from bone marrow smear images using a dual-stage convolutional neural network (CNN). A total of 2,174 patch images were collected for training and testing. The dual-stage CNN classified images into 10 classes of the myeloid and erythroid maturation series, and achieved an accuracy of 97.06%, a precision of 97.13%, a recall of 97.06%, and an F-1 score of 97.1%. The proposed method not only showed high classification performance, but also successfully classified raw images without single cell segmentation and manual feature extraction by implementing CNN. Moreover, it demonstrated rotation and location invariance. These results highlight the promise of the proposed method
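
    The accuracy, precision, recall and F1 values quoted above are standard multiclass summaries of a classifier's predictions. The sketch below shows how such figures are computed with scikit-learn on hypothetical labels; it says nothing about the dual-stage CNN itself, and the simulated error rate is arbitrary.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical ground-truth and predicted maturation-stage labels (10 classes)
rng = np.random.default_rng(5)
y_true = rng.integers(0, 10, size=500)
y_pred = y_true.copy()
flip = rng.random(500) < 0.03                       # roughly 3% misclassifications
y_pred[flip] = rng.integers(0, 10, size=flip.sum())

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(f"accuracy {acc:.4f}  precision {prec:.4f}  recall {rec:.4f}  F1 {f1:.4f}")
```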

  4. White blood cell differential count of maturation stages in bone marrow smear using dual-stage convolutional neural networks

    Science.gov (United States)

    Choi, Jin Woo; Ku, Yunseo; Yoo, Byeong Wook; Kim, Jung-Ah; Lee, Dong Soon; Chai, Young Jun; Kong, Hyoun-Joong

    2017-01-01

    The white blood cell differential count of the bone marrow provides information concerning the distribution of immature and mature cells within maturation stages. The results of such examinations are important for the diagnosis of various diseases and for follow-up care after chemotherapy. However, manual, labor-intensive methods to determine the differential count lead to inter- and intra-variations among the results obtained by hematologists. Therefore, an automated system to conduct the white blood cell differential count is highly desirable, but several difficulties hinder progress. There are variations in the white blood cells of each maturation stage, small inter-class differences within each stage, and variations in images because of the different acquisition and staining processes. Moreover, a large number of classes need to be classified for bone marrow smear analysis, and the high density of touching cells in bone marrow smears renders difficult the segmentation of single cells, which is crucial to traditional image processing and machine learning. Few studies have attempted to discriminate bone marrow cells, and even these have either discriminated only a few classes or yielded insufficient performance. In this study, we propose an automated white blood cell differential counting system from bone marrow smear images using a dual-stage convolutional neural network (CNN). A total of 2,174 patch images were collected for training and testing. The dual-stage CNN classified images into 10 classes of the myeloid and erythroid maturation series, and achieved an accuracy of 97.06%, a precision of 97.13%, a recall of 97.06%, and an F-1 score of 97.1%. The proposed method not only showed high classification performance, but also successfully classified raw images without single cell segmentation and manual feature extraction by implementing CNN. Moreover, it demonstrated rotation and location invariance. These results highlight the promise of the proposed method

  5. White blood cell differential count of maturation stages in bone marrow smear using dual-stage convolutional neural networks.

    Directory of Open Access Journals (Sweden)

    Jin Woo Choi

    Full Text Available The white blood cell differential count of the bone marrow provides information concerning the distribution of immature and mature cells within maturation stages. The results of such examinations are important for the diagnosis of various diseases and for follow-up care after chemotherapy. However, the manual, labor-intensive methods used to determine the differential count lead to inter- and intra-observer variation among hematologists. An automated system to conduct the white blood cell differential count is therefore highly desirable, but several difficulties hinder progress. There are variations in the white blood cells of each maturation stage, small inter-class differences between maturation stages, and variations in images because of the different acquisition and staining processes. Moreover, a large number of classes need to be classified for bone marrow smear analysis, and the high density of touching cells in bone marrow smears makes it difficult to segment single cells, which is crucial to traditional image processing and machine learning. Few studies have attempted to discriminate bone marrow cells, and even these have either discriminated only a few classes or yielded insufficient performance. In this study, we propose an automated white blood cell differential counting system for bone marrow smear images using a dual-stage convolutional neural network (CNN). A total of 2,174 patch images were collected for training and testing. The dual-stage CNN classified images into 10 classes of the myeloid and erythroid maturation series, and achieved an accuracy of 97.06%, a precision of 97.13%, a recall of 97.06%, and an F1 score of 97.1%. The proposed method not only showed high classification performance, but also successfully classified raw images without single-cell segmentation or manual feature extraction by implementing a CNN. Moreover, it demonstrated rotation and location invariance. These results highlight the promise of the proposed method.

  6. Neural network approach in multichannel auditory event-related potential analysis.

    Science.gov (United States)

    Wu, F Y; Slater, J D; Ramsay, R E

    1994-04-01

    Even though there are presently no clearly defined criteria for the assessment of P300 event-related potential (ERP) abnormality, statistical analysis strongly indicates that such criteria exist for classifying control subjects and patients with diseases that result in neuropsychological impairment, such as multiple sclerosis (MS). We have demonstrated the feasibility of artificial neural network (ANN) methods for classifying ERP waveforms measured at a single channel (Cz) from control subjects and MS patients. In this paper, we report the results of multichannel ERP analysis and a modified network analysis methodology that enhances automation of the classification rule extraction process. The proposed methodology significantly reduces the work of statistical analysis. It also helps to standardize the criteria for P300 ERP assessment and facilitates computer-aided analysis of neuropsychological function.
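
    As a rough illustration of the classification step only (not the authors' network, channels or features), the sketch below trains a small feed-forward ANN on synthetic multichannel ERP feature vectors and reports cross-validated accuracy; all data and dimensions are placeholders.

      # Placeholder example: classify multichannel P300 feature vectors (e.g., mean
      # amplitudes in several post-stimulus windows per channel) with a small ANN.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 19 * 4))    # 60 subjects, 19 channels x 4 window features (assumed)
      y = rng.integers(0, 2, size=60)      # 0 = control, 1 = patient (placeholder labels)

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())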

  7. The influence of cochlear traveling wave and neural adaptation on auditory brainstem responses

    DEFF Research Database (Denmark)

    Junius, D.; Dau, Torsten

    2005-01-01

    … of the responses to the single components, as a function of stimulus level. In the first experiment, a single rising chirp was temporally and spectrally embedded in two steady-state tones. In the second experiment, the stimulus consisted of a continuous alternating train of chirps: each rising chirp was followed… by the temporally reversed (falling) chirp. In both experiments, the transitions between stimulus components were continuous. For stimulation levels up to approximately 70 dB SPL, the responses to the embedded chirp corresponded to the responses to the single chirp. At high stimulus levels (80-100 dB SPL…), disparities occurred between the responses, reflecting a nonlinearity in the processing when neural activity is integrated across frequency. In the third experiment, the effect of within-train rate on wave-V response was investigated. The response to the chirp presented at a within-train rate of 95 Hz…
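
    For illustration only, the snippet below synthesizes a rising chirp, its temporal reverse, and an alternating chirp train; the sweep range, duration, sampling rate and simple logarithmic sweep are assumptions, and do not reproduce the cochlear-delay-compensating chirps typically used in such ABR studies.

      # Generate a rising chirp and build an alternating rising/falling chirp train.
      import numpy as np
      from scipy.signal import chirp

      fs = 48000                          # sampling rate in Hz (assumption)
      dur = 0.010                         # 10-ms chirp (assumption)
      t = np.arange(0, dur, 1 / fs)
      rising = chirp(t, f0=100, f1=10000, t1=dur, method="logarithmic")
      falling = rising[::-1]              # temporally reversed (falling) chirp
      train = np.concatenate([rising, falling] * 10)   # continuous alternating train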

  8. Effects of Fish Oil Supplementation during the Suckling Period on Auditory Neural Conduction in n-3 Fatty Acid-Deficient Rat Pups

    Directory of Open Access Journals (Sweden)

    vida rahimi

    2014-07-01

    Full Text Available Abstract Introduction: Omega-3 fatty acids, especially in the form of fish oil, play structural and biological roles in many of the body's systems, particularly the nervous system, and the auditory system is among those affected. Omega-3 deficiency can therefore have damaging effects on neural and auditory function. This study aimed to evaluate neural conduction in n-3 fatty acid-deficient rat pups following fish oil supplementation during the suckling period. Materials and Methods: In this interventional, experimental study, one source of omega-3 fatty acid (fish oil) was fed to rat pups of n-3 PUFA-deficient dams to compare changes in their auditory neural conduction with those of control and n-3 PUFA-deficient groups, using the Auditory Brainstem Response (ABR). The parameters of interest were the P1, P3 and P4 absolute latencies; the P1-P3, P1-P4 and P3-P4 interpeak latencies (IPLs); and the P4/P1 amplitude ratio. The rat pups were given oral fish oil, 5 ml/g body weight, for 17 days between the ages of 5 and 21 days. Results: There were no significant group differences in P1 or P3 absolute latency (p > 0.05), but the difference in P4 was significant (P ≤ 0.05). The n-3 PUFA-deficient + vehicle group had the most prolonged (worst) P1-P4 and P3-P4 IPLs compared with the control and n-3 PUFA-deficient + FO groups. There was no significant difference in P1-P4 or P3-P4 IPL between the n-3 PUFA-deficient + FO and control groups (p > 0.05), and there was a significant effect of diet on P1-P4 and P3-P4 IPL between groups (P ≤ 0.05). Conclusion: The results of the present study show the effect of omega-3 deficiency during pregnancy and lactation on auditory neural structures. Additionally, the damaging effects on neural conduction in n-3 fatty acid-deficient rat pups were reduced following fish oil supplementation during the suckling period.
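
    The interpeak-latency arithmetic and a simple two-group comparison can be sketched as below; the study itself compared three groups, and the latency values here are invented purely for illustration.

      # Compute P1-P4 interpeak latencies (IPL) and compare two groups with a t-test.
      import numpy as np
      from scipy import stats

      # columns: absolute latency (ms) of waves P1 and P4, one row per pup (made-up data)
      control = np.array([[1.6, 6.4], [1.7, 6.5], [1.6, 6.3]])
      deficient = np.array([[1.7, 6.9], [1.8, 7.0], [1.7, 6.8]])

      ipl_control = control[:, 1] - control[:, 0]
      ipl_deficient = deficient[:, 1] - deficient[:, 0]
      t, p = stats.ttest_ind(ipl_deficient, ipl_control)
      print(f"P1-P4 IPL group difference: t = {t:.2f}, p = {p:.3f}")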

  9. Induced Pluripotent Stem Cell-Derived Neural Cells Survive and Mature in the Nonhuman Primate Brain

    Directory of Open Access Journals (Sweden)

    Marina E. Emborg

    2013-03-01

    Full Text Available The generation of induced pluripotent stem cells (iPSCs) opens up the possibility for personalized cell therapy. Here, we show that transplanted autologous rhesus monkey iPSC-derived neural progenitors survive for up to 6 months and differentiate into neurons, astrocytes, and myelinating oligodendrocytes in the brains of MPTP-induced hemiparkinsonian rhesus monkeys with a minimal presence of inflammatory cells and reactive glia. This finding represents a significant step toward personalized regenerative therapies.

  10. Classification of dry-cured hams according to the maturation time using near infrared spectra and artificial neural networks.

    Science.gov (United States)

    Prevolnik, M; Andronikov, D; Žlender, B; Font-i-Furnols, M; Novič, M; Škorjanc, D; Čandek-Potokar, M

    2014-01-01

    The classification of dry-cured hams according to maturation time on the basis of near infrared (NIR) spectra was studied. The study comprised 128 samples of biceps femoris (BF) muscle from dry-cured hams matured for 10 (n=32), 12 (n=32), 14 (n=32) or 16 months (n=32). Samples were minced and scanned in the wavelength range from 400 to 2500 nm using an NIR System model 6500 spectrometer (Silver Spring, MD, USA). Spectral data were used for i) splitting the samples into training and test sets using 2D Kohonen artificial neural networks (ANN) and ii) constructing classification models using counter-propagation ANN (CP-ANN). Different models were tested, and the one selected was based on the lowest percentage of misclassified test samples (external validation). Overall correctness of the classification was 79.7%, which demonstrates the practical relevance of using NIR spectroscopy and ANN for dry-cured ham processing control. Copyright © 2013 Elsevier Ltd. All rights reserved.
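
    The classification step can be sketched as below, with an off-the-shelf feed-forward network standing in for the Kohonen map and counter-propagation ANN used in the study; the spectra are synthetic placeholders, and only the reported metric (overall correctness on an external test set) is reproduced.

      # Stand-in classifier for maturation time from NIR spectra (synthetic data).
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(128, 1050))        # 128 spectra (placeholder absorbance values)
      y = np.repeat([10, 12, 14, 16], 32)     # maturation time in months

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=1)
      model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=1).fit(X_tr, y_tr)
      print("overall correctness:", accuracy_score(y_te, model.predict(X_te)))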

  11. Induced pluripotent stem cell-derived neural cells survive and mature in the nonhuman primate brain.

    Science.gov (United States)

    Emborg, Marina E; Liu, Yan; Xi, Jiajie; Zhang, Xiaoqing; Yin, Yingnan; Lu, Jianfeng; Joers, Valerie; Swanson, Christine; Holden, James E; Zhang, Su-Chun

    2013-03-28

    The generation of induced pluripotent stem cells (iPSCs) opens up the possibility for personalized cell therapy. Here, we show that transplanted autologous rhesus monkey iPSC-derived neural progenitors survive for up to 6 months and differentiate into neurons, astrocytes, and myelinating oligodendrocytes in the brains of MPTP-induced hemiparkinsonian rhesus monkeys with a minimal presence of inflammatory cells and reactive glia. This finding represents a significant step toward personalized regenerative therapies. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  12. BDNF Increases Survival and Neuronal Differentiation of Human Neural Precursor Cells Cotransplanted with a Nanofiber Gel to the Auditory Nerve in a Rat Model of Neuronal Damage

    Directory of Open Access Journals (Sweden)

    Yu Jiao

    2014-01-01

    Full Text Available Objectives. To study possible nerve regeneration of a damaged auditory nerve by the use of stem cell transplantation. Methods. We transplanted HNPCs to the rat AN trunk by the internal auditory meatus (IAM). Furthermore, we studied if addition of BDNF affects survival and phenotypic differentiation of the grafted HNPCs. A bioactive nanofiber gel (PA gel), in selected groups mixed with BDNF, was applied close to the implanted cells. Before transplantation, all rats had been deafened by a round window niche application of β-bungarotoxin. This neurotoxin causes a selective toxic destruction of the AN while keeping the hair cells intact. Results. Overall, HNPCs survived well for up to six weeks in all groups. However, transplants receiving the BDNF-containing PA gel demonstrated significantly higher numbers of HNPCs and neuronal differentiation. At six weeks, a majority of the HNPCs had migrated into the brain stem and differentiated. Differentiated human cells as well as neurites were observed in the vicinity of the cochlear nucleus. Conclusion. Our results indicate that human neural precursor cell (HNPC) integration with host tissue benefits from additional brain-derived neurotrophic factor (BDNF) treatment and that these cells appear to be good candidates for further regenerative studies on the auditory nerve (AN).

  13. Anticipation of peer evaluation in anxious adolescents: divergence in neural activation and maturation.

    Science.gov (United States)

    Spielberg, Jeffrey M; Jarcho, Johanna M; Dahl, Ronald E; Pine, Daniel S; Ernst, Monique; Nelson, Eric E

    2015-08-01

    Adolescence is the time of peak onset for many anxiety disorders, particularly Social Anxiety Disorder. Research using simulated social interactions consistently finds differential activation in several brain regions in anxious (vs non-anxious) youth, including amygdala, striatum and medial prefrontal cortex. However, few studies examined the anticipation of peer interactions, a key component in the etiology and maintenance of anxiety disorders. Youth completed the Chatroom Task while undergoing functional magnetic resonance imaging. Patterns of neural activation were assessed in anxious and non-anxious youth as they were cued to anticipate social feedback from peers. Anxious participants evidenced greater amygdala activation and rostral anterior cingulate (rACC)↔amygdala coupling than non-anxious participants during anticipation of feedback from peers they had previously rejected; anxious participants also evidenced less nucleus accumbens activation during anticipation of feedback from selected peers. Finally, anxiety interacted with age in rACC: in anxious participants, age was positively associated with activation to anticipated feedback from rejected peers and negatively for selected peers, whereas the opposite pattern emerged for non-anxious youth. Overall, anxious youth showed greater reactivity in anticipation of feedback from rejected peers and thus may ascribe greater salience to these potential interactions and increase the likelihood of avoidance behavior. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  14. Cell-type-specific responses of RT4 neural cell lines to dibutyryl-cAMP: branch determination versus maturation

    International Nuclear Information System (INIS)

    Droms, K.; Sueoka, N.

    1987-01-01

    This report describes the induction of cell-type-specific maturation, by dibutyryl-cAMP and testololactone, of neuronal and glial properties in a family of cell lines derived from a rat peripheral neurotumor, RT4. This maturation allows further understanding of the process of determination because of the close lineage relationship between the cell types of the RT4 family. The RT4 family is characterized by the spontaneous conversion of one of the cell types, RT4-AC (stem-cell type), to any of three derivative cell types, RT4-B, RT4-D, or RT4-E, with a frequency of about 10⁻⁵. The RT4-AC cells express some properties characteristic of both neuronal and glial cells. Of these neural properties expressed by RT4-AC cells, only the neuronal properties are expressed by the RT4-B and RT4-E cells, and only the glial properties are expressed by the RT4-D cells. This in vitro cell-type conversion of RT4-AC to three derivative cell types is a branch point for the coordinate regulation of several properties and seems to resemble determination in vivo. In our standard culture conditions, several other neuronal and glial properties are not expressed by these cell types. However, addition of dibutyryl-cAMP induces expression of additional properties, in a cell-type-specific manner: formation of long cellular processes in the RT4-B8 and RT4-E5 cell lines and expression of high-affinity uptake of gamma-aminobutyric acid, by a glial-cell-specific mechanism, in the RT4-D6-2 cell line. These new properties are maximally expressed 2-3 days after addition of dibutyryl-cAMP.

  15. Active auditory experience in infancy promotes brain plasticity in Theta and Gamma oscillations

    Directory of Open Access Journals (Sweden)

    Gabriella Musacchia

    2017-08-01

    Full Text Available Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4 to 7 months of age, confers a processing advantage, compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4–6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33–37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.
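
    As a generic illustration of the band-limited measures referred to above, the sketch below integrates a Welch power spectral density over the Theta (4-6 Hz) and Gamma (33-37 Hz) ranges for one channel of placeholder data; the study itself used time-frequency analysis of infant recordings, which is not reproduced here.

      # Estimate Theta- and Gamma-band power for a single channel (synthetic signal).
      import numpy as np
      from scipy.signal import welch

      fs = 250.0                                               # sampling rate (assumption)
      x = np.random.default_rng(2).normal(size=int(fs * 10))   # 10 s of placeholder signal

      f, pxx = welch(x, fs=fs, nperseg=int(fs * 2))

      def band_power(f, pxx, lo, hi):
          mask = (f >= lo) & (f <= hi)
          return np.trapz(pxx[mask], f[mask])                  # integrate PSD over the band

      print("theta:", band_power(f, pxx, 4, 6), "gamma:", band_power(f, pxx, 33, 37))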

  16. Neural stem cells in the immature, but not the mature, subventricular zone respond robustly to traumatic brain injury.

    Science.gov (United States)

    Goodus, Matthew T; Guzman, Alanna M; Calderon, Frances; Jiang, Yuhui; Levison, Steven W

    2015-01-01

    Pediatric traumatic brain injury is a significant problem that affects many children each year. Progress is being made in developing neuroprotective strategies to combat these injuries. However, investigators are a long way from therapies to fully preserve injured neurons and glia. To restore neurological function, regenerative strategies will be required. Given the importance of stem cells in repairing damaged tissues and the known persistence of neural precursors in the subventricular zone (SVZ), we evaluated regenerative responses of the SVZ to a focal brain lesion. As tissues repair more slowly with aging, injury responses of male Sprague Dawley rats at 6, 11, 17, and 60 days of age and C57Bl/6 mice at 14 days of age were compared. In the injured immature animals, cell proliferation in the dorsolateral SVZ more than doubled by 48 h. By contrast, the proliferative response was almost undetectable in the adult brain. Three approaches were used to assess the relative numbers of bona fide neural stem cells, as follows: the neurosphere assay (on rats injured at postnatal day 11, P11), flow cytometry using a novel 4-marker panel (on mice injured at P14) and staining for stem/progenitor cell markers in the niche (on rats injured at P17). Precursors from the injured immature SVZ formed almost twice as many spheres as precursors from uninjured age-matched brains. Furthermore, spheres formed from the injured brain were larger, indicating that the neural precursors that formed these spheres divided more rapidly. Flow cytometry revealed a 2-fold increase in the percentage of stem cells, a 4-fold increase in multipotential progenitor-3 cells and a 2.5-fold increase in glial-restricted progenitor-2/multipotential-3 cells. Analogously, there was a 2-fold increase in the mitotic index of nestin+/Mash1- immunoreactive cells within the immediately subependymal region. As the early postnatal SVZ is predominantly generating glial cells, an expansion of precursors might not

  17. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli with only few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Neural Correlates of Selective Attention With Hearing Aid Use Followed by ReadMyQuips Auditory Training Program.

    Science.gov (United States)

    Rao, Aparna; Rishiq, Dania; Yu, Luodi; Zhang, Yang; Abrams, Harvey

    The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments. Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training in 4 weeks, and the control group received listening practice on audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at prefitting, pretraining, and post-training to assess effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded from participants. After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. This reduction in P3a was correlated with improvement in d prime (d') in the selective attention task. Increased P3b amplitudes were also correlated with improvement in d' in the selective attention task. After training, this correlation between P3b and d' remained in the experimental group, but not in the control group. Similarly, HINT testing showed improved speech perception post training only in the experimental group. The criterion calculated in the auditory selective attention task showed a reduction only in the experimental group after training. ERP measures in the auditory selective attention task did not show any changes related to training. Hearing aid use was associated with a decrement in involuntary attention switch to distractors in the auditory selective attention task.
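
    The d' (d prime) measure referred to above follows the standard Gaussian signal-detection formula, d' = z(hit rate) - z(false-alarm rate); a minimal computation is sketched below with made-up rates.

      # Sensitivity (d') from hit and false-alarm rates.
      from scipy.stats import norm

      def d_prime(hit_rate, fa_rate):
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      print(d_prime(0.85, 0.20))   # example rates, not values from the study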

  19. Performance of a Deep-Learning Neural Network Model in Assessing Skeletal Maturity on Pediatric Hand Radiographs.

    Science.gov (United States)

    Larson, David B; Chen, Matthew C; Lungren, Matthew P; Halabi, Safwan S; Stence, Nicholas V; Langlotz, Curtis P

    2018-04-01

    Purpose To compare the performance of a deep-learning bone age assessment model based on hand radiographs with that of expert radiologists and that of existing automated models. Materials and Methods The institutional review board approved the study. A total of 14 036 clinical hand radiographs and corresponding reports were obtained from two children's hospitals to train and validate the model. For the first test set, composed of 200 examinations, the mean of bone age estimates from the clinical report and three additional human reviewers was used as the reference standard. Overall model performance was assessed by comparing the root mean square (RMS) and mean absolute difference (MAD) between the model estimates and the reference standard bone ages. Ninety-five percent limits of agreement were calculated in a pairwise fashion for all reviewers and the model. The RMS of a second test set composed of 913 examinations from the publicly available Digital Hand Atlas was compared with published reports of an existing automated model. Results The mean difference between bone age estimates of the model and of the reviewers was 0 years, with a mean RMS and MAD of 0.63 and 0.50 years, respectively. The estimates of the model, the clinical report, and the three reviewers were within the 95% limits of agreement. RMS for the Digital Hand Atlas data set was 0.73 years, compared with 0.61 years of a previously reported model. Conclusion A deep-learning convolutional neural network model can estimate skeletal maturity with accuracy similar to that of an expert radiologist and to that of existing automated models. © RSNA, 2017 An earlier incorrect version of this article appeared online. This article was corrected on January 19, 2018.
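
    The agreement statistics named above (RMS difference, mean absolute difference and 95% limits of agreement) can be computed as sketched below; the bone-age values are invented for illustration.

      # Agreement between model and reference bone ages (years), made-up numbers.
      import numpy as np

      model = np.array([10.2, 7.9, 13.1, 5.4, 9.8])
      reference = np.array([10.0, 8.3, 12.6, 5.0, 10.1])

      diff = model - reference
      rms = np.sqrt(np.mean(diff ** 2))
      mad = np.mean(np.abs(diff))
      loa = (diff.mean() - 1.96 * diff.std(ddof=1), diff.mean() + 1.96 * diff.std(ddof=1))
      print(rms, mad, loa)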

  20. Noise exposure alters long-term neural firing rates and synchrony in primary auditory and rostral belt cortices following bimodal stimulation.

    Science.gov (United States)

    Takacs, Joseph D; Forrest, Taylor J; Basura, Gregory J

    2017-12-01

    We previously demonstrated that bimodal stimulation (spinal trigeminal nucleus [Sp5] paired with best frequency tone) altered neural tone-evoked and spontaneous firing rates (SFRs) in primary auditory cortex (A1) 15 min after pairing in guinea pigs with and without noise-induced tinnitus. Neural responses were enhanced (+10 ms) or suppressed (0 ms) based on the bimodal pairing interval. Here we investigated whether bimodal stimulation leads to long-term (up to 2 h) changes in tone-evoked and SFRs and neural synchrony (correlate of tinnitus) and if the long-term bimodal effects are altered following noise exposure. To obviate the effects of permanent hearing loss on the results, firing rates and neural synchrony were measured three weeks following unilateral (left ear) noise exposure and a temporary threshold shift. Simultaneous extra-cellular single-unit recordings were made from contralateral (to noise) A1 and dorsal rostral belt (RB); an associative auditory cortical region thought to influence A1, before and after bimodal stimulation (pairing intervals of 0 ms; simultaneous Sp5-tone and +10 ms; Sp5 precedes tone). Sixty and 120 min after 0 ms pairing tone-evoked and SFRs were suppressed in sham A1; an effect only preserved 120 min following pairing in noise. Stimulation at +10 ms only affected SFRs 120 min after pairing in sham and noise-exposed A1. Within sham RB, pairing at 0 and +10 ms persistently suppressed tone-evoked and SFRs, while 0 ms pairing in noise markedly enhanced tone-evoked and SFRs up to 2 h. Together, these findings suggest that bimodal stimulation has long-lasting effects in A1 that also extend to the associative RB that is altered by noise and may have persistent implications for how noise damaged brains process multi-sensory information. Moreover, prior to bimodal stimulation, noise damage increased neural synchrony in A1, RB and between A1 and RB neurons. Bimodal stimulation led to persistent changes in neural synchrony in
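
    One common way to quantify pairwise neural synchrony of the kind mentioned above is the correlation of binned spike counts; the sketch below uses synthetic spike times and is not the exact estimator used in the study.

      # Pairwise synchrony as the Pearson correlation of two binned spike trains.
      import numpy as np

      rng = np.random.default_rng(3)
      spikes_a = np.sort(rng.uniform(0, 10, 200))      # spike times (s), unit A
      spikes_b = np.sort(rng.uniform(0, 10, 180))      # spike times (s), unit B

      bins = np.arange(0, 10 + 0.005, 0.005)           # 5-ms bins (assumption)
      counts_a, _ = np.histogram(spikes_a, bins)
      counts_b, _ = np.histogram(spikes_b, bins)
      print("pairwise synchrony:", np.corrcoef(counts_a, counts_b)[0, 1])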

  1. A neural circuit transforming temporal periodicity information into a rate-based representation in the mammalian auditory system

    DEFF Research Database (Denmark)

    Dicke, Ulrike; Ewert, Stephan D.; Dau, Torsten

    2007-01-01

    Periodic amplitude modulations (AMs) of an acoustic stimulus are presumed to be encoded in temporal activity patterns of neurons in the cochlear nucleus. Physiological recordings indicate that this temporal AM code is transformed into a rate-based periodicity code along the ascending auditory pathway… accounts for the encoding of AM depth over a large dynamic range and for modulation frequency selective processing of complex sounds.

  2. Language-dependent changes in pitch-relevant neural activity in the auditory cortex reflect differential weighting of temporal attributes of pitch contours

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T.; Xu, Yi; Suresh, Chandan H.

    2016-01-01

    There remains a gap in our knowledge base about neural representation of pitch attributes that occur between onset and offset of dynamic, curvilinear pitch contours. The aim is to evaluate how language experience shapes processing of pitch contours as reflected in the amplitude of cortical pitch-specific response components. Responses were elicited from three nonspeech, bidirectional (falling-rising) pitch contours representative of Mandarin Tone 2 varying in location of the turning point with fixed onset and offset. At the frontocentral Fz electrode site, Na–Pb and Pb–Nb amplitude of the Chinese group was larger than the English group for pitch contours exhibiting later location of the turning point relative to the one with the earliest location. Chinese listeners’ amplitude was also greater than that of English in response to those same pitch contours with later turning points. At lateral temporal sites (T7/T8), Na–Pb amplitude was larger in Chinese listeners relative to English over the right temporal site. In addition, Pb–Nb amplitude of the Chinese group showed a rightward asymmetry. The pitch contour with its turning point located about halfway of total duration evoked a rightward asymmetry regardless of group. These findings suggest that neural mechanisms processing pitch in the right auditory cortex reflect experience-dependent modulation of sensitivity to weighted integration of changes in acceleration rates of rising and falling sections and the location of the turning point. PMID:28713201

  3. A novel culture method reveals unique neural stem/progenitors in mature porcine iris tissues that differentiate into neuronal and rod photoreceptor-like cells.

    Science.gov (United States)

    Royall, Lars N; Lea, Daniel; Matsushita, Tamami; Takeda, Taka-Aki; Taketani, Shigeru; Araki, Masasuke

    2017-11-15

    Iris neural stem/progenitor cells from mature porcine eyes were investigated using a new protocol for tissue culture, which consists of dispase treatment and Matrigel embedding. We used a number of culture conditions and found an intense differentiation of neuronal cells from both the iris pigmented epithelial (IPE) cells and the stroma tissue cells. Rod photoreceptor-like cells were also observed, but mostly in a later stage of culture. Neuronal differentiation does not require any additives such as fetal bovine serum or FGF2, although FGF2 and IGF2 appeared to promote neural differentiation in the IPE cultures. Furthermore, the stroma-derived cells were able to be maintained in vitro indefinitely. The evolutionary similarity between humans and domestic pigs highlights the potential of this methodology for the modeling of human diseases and the characterization of human ocular stem cells. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Neural overlap of L1 and L2 semantic representations across visual and auditory modalities : A decoding approach

    NARCIS (Netherlands)

    Van De Putte, Eowyn; De Baene, W.; Price, Cathy J; Duyck, Wouter

    2018-01-01

    This study investigated whether brain activity in Dutch-French bilinguals during semantic access to concepts from one language could be used to predict neural activation during access to the same concepts from another language, in different language modalities/tasks. This was tested using multi-voxel pattern analysis (MVPA), within and across language comprehension (word listening and word reading) and production (picture naming).

  5. Neural stem cells and neuro/gliogenesis in the central nervous system: understanding the structural and functional plasticity of the developing, mature, and diseased brain.

    Science.gov (United States)

    Yamaguchi, Masahiro; Seki, Tatsunori; Imayoshi, Itaru; Tamamaki, Nobuaki; Hayashi, Yoshitaka; Tatebayashi, Yoshitaka; Hitoshi, Seiji

    2016-05-01

    Neurons and glia in the central nervous system (CNS) originate from neural stem cells (NSCs). Knowledge of the mechanisms of neuro/gliogenesis from NSCs is fundamental to our understanding of how complex brain architecture and function develop. NSCs are present not only in the developing brain but also in the mature brain in adults. Adult neurogenesis likely provides remarkable plasticity to the mature brain. In addition, recent progress in basic research in mental disorders suggests an etiological link with impaired neuro/gliogenesis in particular brain regions. Here, we review the recent progress and discuss future directions in stem cell and neuro/gliogenesis biology by introducing several topics presented at a joint meeting of the Japanese Association of Anatomists and the Physiological Society of Japan in 2015. Collectively, these topics indicated that neuro/gliogenesis from NSCs is a common event occurring in many brain regions at various ages in animals. Given that significant structural and functional changes in cells and neural networks are accompanied by neuro/gliogenesis from NSCs and the integration of newly generated cells into the network, stem cell and neuro/gliogenesis biology provides a good platform from which to develop an integrated understanding of the structural and functional plasticity that underlies the development of the CNS, its remodeling in adulthood, and the recovery from diseases that affect it.

  6. "Neural overlap of L1 and L2 semantic representations across visual and auditory modalities: a decoding approach".

    Science.gov (United States)

    Van de Putte, Eowyn; De Baene, Wouter; Price, Cathy J; Duyck, Wouter

    2018-05-01

    This study investigated whether brain activity in Dutch-French bilinguals during semantic access to concepts from one language could be used to predict neural activation during access to the same concepts from another language, in different language modalities/tasks. This was tested using multi-voxel pattern analysis (MVPA), within and across language comprehension (word listening and word reading) and production (picture naming). It was possible to identify the picture or word named, read or heard in one language (e.g. maan, meaning moon) based on the brain activity in a distributed bilateral brain network while, respectively, naming, reading or listening to the picture or word in the other language (e.g. lune). The brain regions identified differed across tasks. During picture naming, brain activation in the occipital and temporal regions allowed concepts to be predicted across languages. During word listening and word reading, across-language predictions were observed in the rolandic operculum and several motor-related areas (pre- and postcentral, the cerebellum). In addition, across-language predictions during reading were identified in regions typically associated with semantic processing (left inferior frontal, middle temporal cortex, right cerebellum and precuneus) and visual processing (inferior and middle occipital regions and calcarine sulcus). Furthermore, across modalities and languages, the left lingual gyrus showed semantic overlap across production and word reading. These findings support the idea of at least partially language- and modality-independent semantic neural representations. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
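
    The cross-language decoding logic can be sketched as below: a classifier is trained on voxel patterns recorded in one language and tested on patterns for the same concepts in the other language. The synthetic patterns and the linear SVM are stand-ins rather than the study's exact MVPA pipeline.

      # Train on L1 (Dutch) activation patterns, test on L2 (French) patterns.
      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(4)
      n_trials, n_voxels, n_concepts = 80, 500, 8
      y = rng.integers(0, n_concepts, size=n_trials)              # concept labels

      X_dutch = rng.normal(size=(n_trials, n_voxels)) + y[:, None] * 0.1
      X_french = rng.normal(size=(n_trials, n_voxels)) + y[:, None] * 0.1

      clf = LinearSVC(max_iter=5000).fit(X_dutch, y)
      print("cross-language accuracy:", accuracy_score(y, clf.predict(X_french)))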

  7. Music training alters the course of adolescent auditory development

    Science.gov (United States)

    Tierney, Adam T.; Krizman, Jennifer; Kraus, Nina

    2015-01-01

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  8. Mature teratoma in association with neural tube defect (occipital encephalocele): series of four cases and review of the literature.

    Science.gov (United States)

    Goyal, Nishant; Singh, Pankaj Kumar; Kakkar, Aanchal; Sharma, Meher Chand; Mahapatra, Ashok Kumar

    2012-01-01

    Both occipital encephalocele and teratomas are midline congenital malformations. Encephalocele is a form of neural tube defect in which there is a congenital defect of the cranium through which occurs a protrusion of brain matter or meninges, while teratoma is a tumor derived from all three germ layers. The association between occipital encephalocele and teratoma has not been reported to date. In the present study, the authors present a series of four such cases. Copyright © 2012 S. Karger AG, Basel.

  9. Distinct effects of perceptual quality on auditory word recognition, memory formation and recall in a neural model of sequential memory

    Directory of Open Access Journals (Sweden)

    Paul Miller

    2010-06-01

    Full Text Available Adults with sensory impairment, such as reduced hearing acuity, have impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis through a biologically motivated computational model, which performs item recognition, memory formation and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate, but connected winner-takes-all (WTA) networks. We include associative, Hebbian learning, by comparing multiple forms of spike-timing dependent plasticity (STDP), which strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor quality stimulus impairs recall of neighboring stimuli as well as the weak stimulus itself. We demonstrate that within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word, context cells permits reverse recall. Also, only with such coactive context cells, does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall.
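
    A minimal sketch of one pair-based exponential STDP rule of the kind compared in such models is given below; the amplitudes and time constant are generic textbook values, not the parameters of the model described above.

      # Weight change for a pre/post spike pair separated by delta_t = t_post - t_pre (ms).
      import numpy as np

      def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
          if delta_t > 0:
              return a_plus * np.exp(-delta_t / tau)     # pre before post: potentiation
          return -a_minus * np.exp(delta_t / tau)        # post before pre: depression

      for dt in (-40, -10, 10, 40):
          print(dt, round(stdp_dw(dt), 5))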

  10. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  11. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  12. Direct and efficient transfection of mouse neural stem cells and mature neurons by in vivo mRNA electroporation.

    Science.gov (United States)

    Bugeon, Stéphane; de Chevigny, Antoine; Boutin, Camille; Coré, Nathalie; Wild, Stefan; Bosio, Andreas; Cremer, Harold; Beclin, Christophe

    2017-11-01

    In vivo brain electroporation of DNA expression vectors is a widely used method for lineage and gene function studies in the developing and postnatal brain. However, transfection efficiency of DNA is limited and adult brain tissue is refractory to electroporation. Here, we present a systematic study of mRNA as a vector for acute genetic manipulation in the developing and adult brain. We demonstrate that mRNA electroporation is far more efficient than DNA electroporation, and leads to faster and more homogeneous protein expression in vivo. Importantly, mRNA electroporation allows the manipulation of neural stem cells and postmitotic neurons in the adult brain using minimally invasive procedures. Finally, we show that this approach can be efficiently used for functional studies, as exemplified by transient overexpression of the neurogenic factor Myt1l and by stably inactivating Dicer nuclease in vivo in adult born olfactory bulb interneurons and in fully integrated cortical projection neurons. © 2017. Published by The Company of Biologists Ltd.

  13. Seizure induces activation of multiple subtypes of neural progenitors and growth factors in hippocampus with neuronal maturation confined to dentate gyrus

    Energy Technology Data Exchange (ETDEWEB)

    Indulekha, Chandrasekharan L.; Sanalkumar, Rajendran [Neuro Stem Cell Biology Laboratory, Department of Neurobiology, Rajiv Gandhi Center for Biotechnology, Thiruvananthapuram, Kerala 695 014 (India); Thekkuveettil, Anoopkumar [Molecular Medicine, Biomedical Technology Wing, Sree Chitra Thirunal Institute for Medical Sciences and Technology, Thiruvananthapuram, Kerala (India); James, Jackson, E-mail: jjames@rgcb.res.in [Neuro Stem Cell Biology Laboratory, Department of Neurobiology, Rajiv Gandhi Center for Biotechnology, Thiruvananthapuram, Kerala 695 014 (India)

    2010-03-19

    Adult hippocampal neurogenesis is altered in response to different physiological and pathological stimuli. GFAP+ve/nestin+ve radial glia-like Type-1 progenitors are considered to be the resident stem cell population in the adult hippocampus. During neurogenesis these Type-1 progenitors mature to GFAP-ve/nestin+ve Type-2 progenitors, then to Type-3 neuroblasts, and finally differentiate into granule cell neurons. In our study, using a pilocarpine-induced seizure model, we showed that seizure initiated activation of multiple progenitors in the entire hippocampal area, including the DG, CA1 and CA3. Seizure induction resulted in activation of two subtypes of Type-1 progenitors, Type-1a (GFAP+ve/nestin+ve/BrdU+ve) and Type-1b (GFAP+ve/nestin+ve/BrdU-ve). We showed that the majority of Type-1b progenitors underwent only a transition from a state of dormancy to an activated form immediately after seizures rather than proliferating, whereas Type-1a showed maximum proliferation by 3 days post-seizure induction. Type-2 (GFAP-ve/nestin+ve/BrdU+ve) progenitors were few compared to Type-1. Type-3 (DCX+ve) progenitors showed increased expression of immature neurons only in the DG region by 3 days after seizure induction, indicating that maturation of progenitors happens only in the microenvironment of the DG even though progenitors are activated in the CA1 and CA3 regions of the hippocampus. A parallel increase in growth factor expression after seizure induction also suggests that the microenvironmental niche has a profound effect on the stimulation of adult neural progenitors.

  14. [Characterization of stem cells derived from the neonatal auditory sensory epithelium].

    Science.gov (United States)

    Diensthuber, M; Heller, S

    2010-11-01

    In contrast to regenerating hair cell-bearing organs of nonmammalian vertebrates the adult mammalian organ of Corti appears to have lost its ability to maintain stem cells. The result is a lack of regenerative ability and irreversible hearing loss following auditory hair cell death. Unexpectedly, the neonatal auditory sensory epithelium has recently been shown to harbor cells with stem cell features. The origin of these cells within the cochlea's sensory epithelium is unknown. We applied a modified neurosphere assay to identify stem cells within distinct subregions of the neonatal mouse auditory sensory epithelium. Sphere cells were characterized by multiple markers and morphologic techniques. Our data reveal that both the greater and the lesser epithelial ridge contribute to the sphere-forming stem cell population derived from the auditory sensory epithelium. These self-renewing sphere cells express a variety of markers for neural and otic progenitor cells and mature inner ear cell types. Stem cells can be isolated from specific regions of the auditory sensory epithelium. The distinct features of these cells imply a potential application in the development of a cell replacement therapy to regenerate the damaged sensory epithelium.

  15. Effects of maturation and acidosis on the chaos-like complexity of the neural respiratory output in the isolated brainstem of the tadpole, Rana esculenta.

    Science.gov (United States)

    Straus, Christian; Samara, Ziyad; Fiamma, Marie-Noëlle; Bautin, Nathalie; Ranohavimparany, Anja; Le Coz, Patrick; Golmard, Jean-Louis; Darré, Pierre; Zelter, Marc; Poon, Chi-Sang; Similowski, Thomas

    2011-05-01

    Human ventilation at rest exhibits mathematical chaos-like complexity that can be described as long-term unpredictability mediated (in whole or in part) by some low-dimensional nonlinear deterministic process. Although various physiological and pathological situations can affect respiratory complexity, the underlying mechanisms remain incompletely elucidated. If such chaos-like complexity is an intrinsic property of central respiratory generators, it should appear or increase when these structures mature or are stimulated. To test this hypothesis, we employed the isolated tadpole brainstem model [Rana (Pelophylax) esculenta] and recorded the neural respiratory output (buccal and lung rhythms) of pre- (n = 8) and postmetamorphic tadpoles (n = 8), at physiologic (7.8) and acidic pH (7.4). We analyzed the root mean square of the cranial nerve V or VII neurograms. Development and acidosis had no effect on buccal period. Lung frequency increased with development (P acidosis, but in postmetamorphic tadpoles only (P respiratory central rhythm generator accounts for ventilatory chaos-like complexity, especially in the postmetamorphic stage and at low pH. According to the ventilatory generators homology theory, this may also be the case in mammals.

  16. Effects of maturation and acidosis on the chaos-like complexity of the neural respiratory output in the isolated brainstem of the tadpole, Rana esculenta

    Science.gov (United States)

    Samara, Ziyad; Fiamma, Marie-Noëlle; Bautin, Nathalie; Ranohavimparany, Anja; Le Coz, Patrick; Golmard, Jean-Louis; Darré, Pierre; Zelter, Marc; Poon, Chi-Sang; Similowski, Thomas

    2011-01-01

    Human ventilation at rest exhibits mathematical chaos-like complexity that can be described as long-term unpredictability mediated (in whole or in part) by some low-dimensional nonlinear deterministic process. Although various physiological and pathological situations can affect respiratory complexity, the underlying mechanisms remain incompletely elucidated. If such chaos-like complexity is an intrinsic property of central respiratory generators, it should appear or increase when these structures mature or are stimulated. To test this hypothesis, we employed the isolated tadpole brainstem model [Rana (Pelophylax) esculenta] and recorded the neural respiratory output (buccal and lung rhythms) of pre- (n = 8) and postmetamorphic tadpoles (n = 8), at physiologic (7.8) and acidic pH (7.4). We analyzed the root mean square of the cranial nerve V or VII neurograms. Development and acidosis had no effect on buccal period. Lung frequency increased with development (P Chaos-like complexity, assessed through the noise limit, increased from pH 7.8 to pH 7.4 (P chaos-like complexity, especially in the postmetamorphic stage and at low pH. According to the ventilatory generators homology theory, this may also be the case in mammals. PMID:21325645

  17. Auditory Neuropathy

    Science.gov (United States)


  18. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  19. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  20. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  1. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  2. What and Where in auditory sensory processing: A high-density electrical mapping study of distinct neural processes underlying sound object recognition and sound localization

    Directory of Open Access Journals (Sweden)

    Victoria M Leavitt

    2011-06-01

    Full Text Available Functionally distinct dorsal and ventral auditory pathways for sound localization (where) and sound object recognition (what) have been described in non-human primates. A handful of studies have explored differential processing within these streams in humans, with highly inconsistent findings. Stimuli employed have included simple tones, noise bursts and speech sounds, with simulated left-right spatial manipulations, and in some cases participants were not required to actively discriminate the stimuli. Our contention is that these paradigms were not well suited to dissociating processing within the two streams. Our aim here was to determine how early in processing we could find evidence for dissociable pathways using better titrated what and where task conditions. The use of more compelling tasks should allow us to amplify differential processing within the dorsal and ventral pathways. We employed high-density electrical mapping using a relatively large and environmentally realistic stimulus set (seven animal calls delivered from seven free-field spatial locations), with stimulus configuration identical across the where and what tasks. Topographic analysis revealed distinct dorsal and ventral auditory processing networks during the where and what tasks with the earliest point of divergence seen during the N1 component of the auditory evoked response, beginning at approximately 100 ms. While this difference occurred during the N1 timeframe, it was not a simple modulation of N1 amplitude as it displayed a wholly different topographic distribution to that of the N1. Global dissimilarity measures using topographic modulation analysis confirmed that this difference between tasks was driven by a shift in the underlying generator configuration. Minimum norm source reconstruction revealed distinct activations that corresponded well with activity within putative dorsal and ventral auditory structures.
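
    To make the topographic modulation analysis above concrete, the following minimal Python/NumPy sketch computes the global dissimilarity (DISS) between two average-referenced, GFP-normalized scalp maps. The electrode count and data are hypothetical; this is an illustration of the measure, not the authors' pipeline.

      import numpy as np

      def gfp(v):
          # Global field power: spatial standard deviation across electrodes
          return np.sqrt(np.mean((v - v.mean()) ** 2))

      def dissimilarity(map_a, map_b):
          # Average-reference and GFP-normalize each map, then take the
          # root-mean-square difference; 0 = identical topographies, 2 = inverted.
          a = (map_a - map_a.mean()) / gfp(map_a)
          b = (map_b - map_b.mean()) / gfp(map_b)
          return np.sqrt(np.mean((a - b) ** 2))

      # Example: compare hypothetical "what" and "where" task maps at one latency
      rng = np.random.default_rng(0)
      what_map = rng.standard_normal(160)    # 160 hypothetical electrodes
      where_map = rng.standard_normal(160)
      print(dissimilarity(what_map, where_map))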

  3. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation......, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models...... of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent...
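
    As a rough, runnable illustration of the temporal coherence idea described in this record, the Python/NumPy sketch below correlates the envelopes of two frequency channels over short windows; channels belonging to the same stream should show sustained positive correlation, incoherent channels should not. The envelopes, window length and sampling rate are hypothetical.

      import numpy as np

      def envelope_coherence(env_a, env_b, fs, win_s=0.5):
          # Windowed correlation between the envelopes of two frequency channels.
          # High sustained correlation suggests one stream; low or negative
          # correlation favours segregation into separate streams.
          win = int(win_s * fs)
          n_win = len(env_a) // win
          coh = []
          for k in range(n_win):
              a = env_a[k * win:(k + 1) * win]
              b = env_b[k * win:(k + 1) * win]
              if a.std() > 0 and b.std() > 0:
                  coh.append(np.corrcoef(a, b)[0, 1])
          return np.mean(coh) if coh else 0.0

      # Toy example: two channels driven by alternating (temporally incoherent) tones
      fs = 1000
      t = np.arange(0, 2.0, 1 / fs)
      env_low = (np.sin(2 * np.pi * 2 * t) > 0).astype(float)   # on/off at 2 Hz
      env_high = 1.0 - env_low                                  # anti-phase channel
      print(envelope_coherence(env_low, env_high, fs))          # near -1, so segregate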

  4. Experience and information loss in auditory and visual memory.

    Science.gov (United States)

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  5. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error), and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources, even though the origin of the prediction stems from different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. The impact of visual gaze direction on auditory object tracking

    OpenAIRE

    Pomper, U.; Chait, M.

    2017-01-01

    Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention wh...

  7. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  8. Individual Differences in Neural Mechanisms of Selective Auditory Attention in Preschoolers from Lower Socioeconomic Status Backgrounds: An Event-Related Potentials Study

    Science.gov (United States)

    Isbell, Elif; Wray, Amanda Hampton; Neville, Helen J.

    2016-01-01

    Selective attention, the ability to enhance the processing of particular input while suppressing the information from other concurrent sources, has been postulated to be a foundational skill for learning and academic achievement. The neural mechanisms of this foundational ability are both vulnerable and enhanceable in children from lower…

  9. Manipulation of Auditory Inputs as Rehabilitation Therapy for Maladaptive Auditory Cortical Reorganization

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2018-01-01

    Full Text Available Neurophysiological and neuroimaging data suggest that the brains of not only children but also adults are reorganized based on sensory inputs and behaviors. Plastic changes in the brain are generally beneficial; however, maladaptive cortical reorganization in the auditory cortex may lead to hearing disorders such as tinnitus and hyperacusis. Recent studies attempted to noninvasively visualize pathological neural activity in the living human brain and reverse maladaptive cortical reorganization by the suitable manipulation of auditory inputs in order to alleviate detrimental auditory symptoms. The effects of the manipulation of auditory inputs on the maladaptively reorganized brain are reviewed herein. The findings obtained indicate that rehabilitation therapy based on the manipulation of auditory inputs is an effective and safe approach for hearing disorders. The appropriate manipulation of sensory inputs guided by the visualization of pathological brain activities using recent neuroimaging techniques may contribute to the establishment of new clinical applications for affected individuals.

  10. Auditory-vocal mirroring in songbirds.

    Science.gov (United States)

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  11. Auditory conflict and congruence in frontotemporal dementia.

    Science.gov (United States)

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  12. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Abstract Background In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  13. Central auditory neurons have composite receptive fields.

    Science.gov (United States)

    Kozlov, Andrei S; Gentner, Timothy Q

    2016-02-02

    High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons in mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.

  14. Neuropsychopharmacology of auditory hallucinations: insights from pharmacological functional MRI and perspectives for future research.

    Science.gov (United States)

    Johnsen, Erik; Hugdahl, Kenneth; Fusar-Poli, Paolo; Kroken, Rune A; Kompus, Kristiina

    2013-01-01

    Experiencing auditory verbal hallucinations is a prominent symptom in schizophrenia that also occurs in subjects at enhanced risk for psychosis and in the general population. Drug treatment of auditory hallucinations is challenging, because the current understanding is limited with respect to the neural mechanisms involved, as well as how CNS drugs, such as antipsychotics, influence the subjective experience and neurophysiology of hallucinations. In this article, the authors review studies of the effect of antipsychotic medication on brain activation as measured with functional MRI in patients with auditory verbal hallucinations. First, the authors examine the neural correlates of ongoing auditory hallucinations. Then, the authors critically discuss studies addressing the antipsychotic effect on the neural correlates of complex cognitive tasks. Current evidence suggests that blood oxygen level-dependent effects of antipsychotic drugs reflect specific, regional effects, but studies on the neuropharmacology of auditory hallucinations are scarce. Future directions for pharmacological neuroimaging of auditory hallucinations are discussed.

  15. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

    .... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and ensure that what is said is intelligible...

  16. Perceptual Plasticity for Auditory Object Recognition

    Science.gov (United States)

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  17. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher level of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and their abnormal timing may underlie their disfluency.
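
    For readers unfamiliar with how such transient peaks are quantified, the Python sketch below estimates an onset peak latency from an averaged speech-ABR waveform using scipy. The search window, sampling rate and synthetic waveform are illustrative assumptions, not the parameters used in the study.

      import numpy as np
      from scipy.signal import find_peaks

      def onset_peak_latency(abr, fs, search_ms=(5.0, 12.0)):
          # Latency (ms) of the largest positive peak within an onset search window.
          # Window limits here are illustrative only.
          lo, hi = (int(m * fs / 1000) for m in search_ms)
          seg = abr[lo:hi]
          peaks, props = find_peaks(seg, height=0)
          if len(peaks) == 0:
              return np.nan
          best = peaks[np.argmax(props["peak_heights"])]
          return (lo + best) * 1000.0 / fs

      # Example with a synthetic averaged response sampled at 20 kHz
      fs = 20000
      t = np.arange(0, 0.05, 1 / fs)                         # 50 ms epoch
      abr = np.exp(-((t - 0.007) ** 2) / (2 * 0.0005 ** 2))  # bump near 7 ms
      print(onset_peak_latency(abr, fs))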

  18. Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi

    2016-01-01

    Recent advancements in set theory and readily available software have enabled social science researchers to bridge the variable-centered quantitative and case-based qualitative methodological paradigms in order to analyze multi-dimensional associations beyond the linearity assumptions, aggregate...... effects, unicausal reduction, and case specificity. Based on the developments in set theoretical thinking in social sciences and employing methods like Qualitative Comparative Analysis (QCA), Necessary Condition Analysis (NCA), and set visualization techniques, in this position paper, we propose...... and demonstrate a new approach to maturity models in the domain of Information Systems. This position paper describes the set-theoretical approach to maturity models, presents current results and outlines future research work....

  19. Neurophysiological evidence for context-dependent encoding of sensory input in human auditory cortex.

    Science.gov (United States)

    Sussman, Elyse; Steinschneider, Mitchell

    2006-02-23

    Attention biases the way in which sound information is stored in auditory memory. Little is known, however, about the contribution of stimulus-driven processes in forming and storing coherent sound events. An electrophysiological index of cortical auditory change detection (mismatch negativity [MMN]) was used to assess whether sensory memory representations could be biased toward one organization over another (one or two auditory streams) without attentional control. Results revealed that sound representations held in sensory memory biased the organization of subsequent auditory input. The results demonstrate that context-dependent sound representations modulate stimulus-dependent neural encoding at early stages of auditory cortical processing.
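
    The MMN index mentioned here is conventionally computed as a deviant-minus-standard difference wave. The following minimal Python sketch shows that computation on simulated single-channel epochs; all data are synthetic and the code is illustrative, not the study's analysis.

      import numpy as np

      def mismatch_negativity(standard_epochs, deviant_epochs):
          # MMN difference wave: average deviant ERP minus average standard ERP.
          # Epoch arrays are shaped (n_trials, n_samples) for a single channel.
          erp_standard = standard_epochs.mean(axis=0)
          erp_deviant = deviant_epochs.mean(axis=0)
          return erp_deviant - erp_standard

      # Example with simulated single-channel epochs (arbitrary units)
      rng = np.random.default_rng(1)
      std = rng.standard_normal((200, 300)) * 0.5
      dev = rng.standard_normal((60, 300)) * 0.5
      dev[:, 100:150] -= 1.0           # a negativity 100-150 samples post-stimulus
      mmn = mismatch_negativity(std, dev)
      print(mmn[100:150].mean())        # clearly negative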

  20. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

    Full Text Available Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state endings even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: Disruption decreased markedly over the course of the experiments indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

  1. Primate Auditory Recognition Memory Performance Varies With Sound Type

    OpenAIRE

    Chi-Wing, Ng; Bethany, Plakke; Amy, Poremba

    2009-01-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g. social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition, and/or memory. The present study employs a de...

  2. Attention, awareness, and the perception of auditory scenes

    Directory of Open Access Journals (Sweden)

    Joel S Snyder

    2012-02-01

    Full Text Available Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. And recently, studies have shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should also allow scientists to precisely distinguish the influences of different higher-level factors.

  3. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia

    Directory of Open Access Journals (Sweden)

    Hironori Kuga, M.D.

    2016-10-01

    We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.

  4. Neural plasticity and its initiating conditions in tinnitus.

    Science.gov (United States)

    Roberts, L E

    2018-03-01

    Deafferentation caused by cochlear pathology (which can be hidden from the audiogram) activates forms of neural plasticity in auditory pathways, generating tinnitus and its associated conditions including hyperacusis. This article discusses tinnitus mechanisms and suggests how these mechanisms may relate to those involved in normal auditory information processing. Research findings from animal models of tinnitus and from electromagnetic imaging of tinnitus patients are reviewed which pertain to the role of deafferentation and neural plasticity in tinnitus and hyperacusis. Auditory neurons compensate for deafferentation by increasing their input/output functions (gain) at multiple levels of the auditory system. Forms of homeostatic plasticity are believed to be responsible for this neural change, which increases the spontaneous and driven activity of neurons in central auditory structures in animals expressing behavioral evidence of tinnitus. Another tinnitus correlate, increased neural synchrony among the affected neurons, is forged by spike-timing-dependent neural plasticity in auditory pathways. Slow oscillations generated by bursting thalamic neurons verified in tinnitus animals appear to modulate neural plasticity in the cortex, integrating tinnitus neural activity with information in brain regions supporting memory, emotion, and consciousness which exhibit increased metabolic activity in tinnitus patients. The latter process may be induced by transient auditory events in normal processing but it persists in tinnitus, driven by phantom signals from the auditory pathway. Several tinnitus therapies attempt to suppress tinnitus through plasticity, but repeated sessions will likely be needed to prevent tinnitus activity from returning owing to deafferentation as its initiating condition.

  5. Detection of optimum maturity of maize using image processing and ...

    African Journals Online (AJOL)

    A CCD camera was used for image acquisition of the different green colorations of the maize leaves at maturity. Different color features were extracted from the image processing system (MATLAB) and used as inputs to the artificial neural network that classifies different levels of maturity. Keywords: Maize, Maturity, CCD ...
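
    The record describes a MATLAB pipeline; purely as an illustration, the Python sketch below extracts simple color features from leaf images and feeds them to a small neural network classifier. The features, network size, class labels and data are hypothetical stand-ins, not the ones used in the paper.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def color_features(rgb_image):
          # Mean and standard deviation of each RGB channel plus a simple
          # "greenness" ratio; illustrative features only.
          img = rgb_image.astype(float)
          means = img.mean(axis=(0, 1))
          stds = img.std(axis=(0, 1))
          greenness = means[1] / (means.sum() + 1e-9)
          return np.concatenate([means, stds, [greenness]])

      # Hypothetical training data: leaf images labelled by maturity class
      rng = np.random.default_rng(2)
      images = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(40)]
      labels = rng.integers(0, 3, size=40)    # e.g. immature / optimum / over-mature
      X = np.array([color_features(im) for im in images])
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X, labels)
      print(clf.predict(X[:5]))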

  6. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom
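
    A minimal Python sketch of the channel-pair connectivity computation described here, assuming oxygenated-haemoglobin time series arranged as channels by samples; the sampling rate, window placement and data are hypothetical example values.

      import numpy as np

      def resting_state_connectivity(hbo, fs, start_s, dur_s=60.0):
          # Pairwise Pearson correlation between all fNIRS channels over a
          # baseline window. `hbo` is (n_channels, n_samples); returns an
          # (n_channels, n_channels) connectivity matrix.
          i0 = int(start_s * fs)
          i1 = i0 + int(dur_s * fs)
          return np.corrcoef(hbo[:, i0:i1])

      # Example: 20 hypothetical channels, 5 minutes sampled at 10 Hz
      rng = np.random.default_rng(3)
      hbo = rng.standard_normal((20, 3000))
      pre = resting_state_connectivity(hbo, fs=10, start_s=0)      # before stimulation
      post = resting_state_connectivity(hbo, fs=10, start_s=240)   # after stimulation
      print((post - pre).mean())                                   # change in connectivity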

  7. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Full Text Available Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  8. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
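
    One common way to express such compensatory responses is as the fraction of the imposed formant shift that the talker counteracts; the small Python sketch below shows that arithmetic with hypothetical F1 values. It is an illustration of the general idea, not the study's exact metric or data.

      import numpy as np

      def compensation_magnitude(f1_produced, f1_baseline, f1_shift_hz):
          # Compensatory response as a fraction of the imposed F1 perturbation.
          # 0.25 means the talker counteracted 25% of the shift; the sign
          # convention assumes compensation is opposite in direction to the shift.
          response = f1_produced - f1_baseline      # change in produced F1 (Hz)
          return -response / f1_shift_hz

      # Hypothetical trial: F1 feedback shifted up by 300 Hz, talker lowers F1 by 45 Hz
      print(compensation_magnitude(f1_produced=555.0, f1_baseline=600.0, f1_shift_hz=300.0))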

  9. Central auditory masking by an illusory tone.

    Directory of Open Access Journals (Sweden)

    Christopher J Plack

    Full Text Available Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that is different (decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.
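
    For illustration, the following Python sketch generates one common variant of a Huggins pitch stimulus: identical noise at the two ears except for a narrow band that is phase-inverted (decorrelated) in one ear. The center frequency, bandwidth and duration are arbitrary example values, not the study's stimulus parameters.

      import numpy as np

      def huggins_pitch(duration_s=1.0, fs=44100, f_center=600.0, bw=0.16, seed=0):
          # Diotic white noise with a narrow band around f_center phase-inverted
          # in one ear only, yielding an illusory tone near that frequency.
          rng = np.random.default_rng(seed)
          n = int(duration_s * fs)
          noise = rng.standard_normal(n)
          spectrum = np.fft.rfft(noise)
          freqs = np.fft.rfftfreq(n, 1 / fs)
          band = (freqs > f_center * (1 - bw / 2)) & (freqs < f_center * (1 + bw / 2))
          spectrum_shifted = spectrum.copy()
          spectrum_shifted[band] *= -1            # 180-degree phase shift in the band
          left = noise
          right = np.fft.irfft(spectrum_shifted, n)
          return np.column_stack([left, right])   # (n_samples, 2) stereo signal

      stereo = huggins_pitch()
      print(stereo.shape)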

  10. The effect of early visual deprivation on the neural bases of multisensory processing.

    Science.gov (United States)

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

    Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  12. Beyond maturity

    International Nuclear Information System (INIS)

    Tessmer, W.B.

    1990-01-01

    The Nuclear Power Plant Simulator Industry has undergone two decades of evolution in experience, technology and business practices. Link-Miles Simulation Corporation (LMSC) has been contracted to build 68 Full Scope Nuclear Simulators during the 1970s and 1980s. Traditional approaches to design, development and testing have been used to satisfy specifications for initial customer requirements. However, the Industry has matured. All U.S. Nuclear Utilities own, or have under contract, at least one simulator. Other industrial nations have centralized training facilities to satisfy the simulator training needs. The customer of the future is knowledgeable and experienced in the development and service of nuclear simulators. The role of the simulator vendor is changing in order to alter the traditional approach for development. Covenants between the vendors and their customers solidify new complementary roles. This paper presents examples of current simulator project development with recommendations for future endeavors

  13. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory
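
    The stimulus-reconstruction approach referred to here is typically implemented as a regularized linear backward model; the Python sketch below is a toy ridge-regression decoder on synthetic data. The lags, regularization strength and data are all hypothetical, and the code illustrates the general technique rather than the authors' analysis.

      import numpy as np

      def lagged(x, lag):
          # Shift a (n_samples, n_channels) array by `lag` samples, zero-padding the edge.
          y = np.roll(x, lag, axis=0)
          if lag > 0:
              y[:lag] = 0
          elif lag < 0:
              y[lag:] = 0
          return y

      def ridge_decoder(neural, envelope, lags, alpha=1.0):
          # Backward (stimulus-reconstruction) model: predict the speech envelope
          # from time-lagged multichannel neural data with ridge regression.
          X = np.hstack([lagged(neural, lag) for lag in lags])
          w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)
          return w, X @ w

      # Toy data: each of 10 channels is a noisy copy of the envelope delayed by 5 samples,
      # so the decoder needs "future" neural samples (negative lags) to reconstruct it.
      rng = np.random.default_rng(4)
      env = rng.standard_normal(2000)
      neural = np.column_stack([lagged(env[:, None], 5)[:, 0]
                                + 0.5 * rng.standard_normal(2000) for _ in range(10)])
      w, recon = ridge_decoder(neural, env, lags=range(-8, 1))
      print(np.corrcoef(recon, env)[0, 1])    # reconstruction accuracy (should be high)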

  14. Abnormal synchrony and effective connectivity in patients with schizophrenia and auditory hallucinations

    Directory of Open Access Journals (Sweden)

    Maria de la Iglesia-Vaya

    2014-01-01

    These data indicate that an anomalous process of neural connectivity exists when patients with AH process emotional auditory stimuli. Additionally, a central role is suggested for the cerebellum in processing emotional stimuli in patients with persistent AH.

  15. Representation of auditory-filter phase characteristics in the cortex of human listeners

    DEFF Research Database (Denmark)

    Rupp, A.; Sieroka, N.; Gutschalk, A.

    2008-01-01

    consistent with the perceptual data obtained with the same stimuli and with results from simulations of neural activity at the output of cochlear preprocessing. These findings demonstrate that phase effects in peripheral auditory processing are accurately reflected up to the level of the auditory cortex....

  16. Auditory object perception: A neurobiological model and prospective review.

    Science.gov (United States)

    Brefczynski-Lewis, Julie A; Lewis, James W

    2017-10-01

    Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory inputs enter the cortex with their own set of unique qualities, and lead to use in oral communication, speech, music, and the understanding of emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive values are further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic

  17. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    Science.gov (United States)

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
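
    Spectral amplitudes of brainstem-response harmonics of the kind compared here can be read off the FFT of the averaged response. The Python sketch below does this for a synthetic waveform; the fundamental frequency, number of harmonics and data are chosen arbitrarily for illustration.

      import numpy as np

      def harmonic_amplitudes(response, fs, f0, n_harmonics=10):
          # Spectral amplitude of an averaged brainstem response at the first
          # n_harmonics multiples of the stimulus fundamental frequency f0.
          spectrum = np.abs(np.fft.rfft(response)) / len(response)
          freqs = np.fft.rfftfreq(len(response), 1 / fs)
          amps = []
          for h in range(1, n_harmonics + 1):
              idx = np.argmin(np.abs(freqs - h * f0))
              amps.append(spectrum[idx])
          return np.array(amps)

      # Example: synthetic response with a 100 Hz fundamental and weaker upper harmonics
      fs = 10000
      t = np.arange(0, 0.2, 1 / fs)
      resp = sum((1.0 / h) * np.sin(2 * np.pi * 100 * h * t) for h in range(1, 6))
      print(harmonic_amplitudes(resp, fs, f0=100, n_harmonics=5))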

  18. Frequency-specific modulation of population-level frequency tuning in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Roberts Larry E

    2009-01-01

    Full Text Available Abstract Background Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under auditory focused attention by means of magnetoencephalography (MEG). Results In total, we used identical auditory stimuli between conditions, but presented them in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing compared to the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.
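
    As an illustration of the band-eliminated noise used in this paradigm, the Python sketch below removes a spectral notch from white noise in the frequency domain. The notch edges, duration and sampling rate are arbitrary example values, not the study's stimulus parameters.

      import numpy as np

      def band_eliminated_noise(duration_s, fs, stop_lo, stop_hi, seed=0):
          # White noise with the energy between stop_lo and stop_hi Hz removed
          # in the frequency domain (a spectral notch around the test-tone frequency).
          rng = np.random.default_rng(seed)
          n = int(duration_s * fs)
          spectrum = np.fft.rfft(rng.standard_normal(n))
          freqs = np.fft.rfftfreq(n, 1 / fs)
          spectrum[(freqs >= stop_lo) & (freqs <= stop_hi)] = 0.0
          return np.fft.irfft(spectrum, n)

      # Example: a narrow stop-band around a 1 kHz test tone
      noise = band_eliminated_noise(1.0, 44100, stop_lo=900.0, stop_hi=1100.0)
      print(noise.shape)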

  19. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
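
    A simple way to see how frequency tagging is read out is to take the power of an epoch at each tag frequency. The Python sketch below does this for a synthetic epoch containing a stronger 35 Hz (attended) and weaker 45 Hz (distractor) component; all frequencies and data are illustrative, not those of the study.

      import numpy as np

      def tag_power(epoch, fs, f_tag):
          # Power of a single-channel epoch at the modulation (tag) frequency,
          # estimated from the FFT bin closest to f_tag.
          spectrum = np.abs(np.fft.rfft(epoch)) ** 2
          freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
          return spectrum[np.argmin(np.abs(freqs - f_tag))]

      # Example: attended stream tagged at 35 Hz, distractor at 45 Hz
      fs = 500
      t = np.arange(0, 2.0, 1 / fs)
      epoch = (1.0 * np.sin(2 * np.pi * 35 * t)
               + 0.3 * np.sin(2 * np.pi * 45 * t)
               + 0.5 * np.random.default_rng(5).standard_normal(len(t)))
      print(tag_power(epoch, fs, 35.0), tag_power(epoch, fs, 45.0))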

  20. Air pollution is associated with brainstem auditory nuclei pathology and delayed brainstem auditory evoked potentials.

    Science.gov (United States)

    Calderón-Garcidueñas, Lilian; D'Angiulli, Amedeo; Kulesza, Randy J; Torres-Jardón, Ricardo; Osnaya, Norma; Romero, Lina; Keefe, Sheyla; Herritt, Lou; Brooks, Diane M; Avila-Ramirez, Jose; Delgado-Chávez, Ricardo; Medina-Cortina, Humberto; González-González, Luis Oscar

    2011-06-01

    We assessed brainstem inflammation in children exposed to air pollutants by comparing brainstem auditory evoked potentials (BAEPs) and blood inflammatory markers in children aged 96.3±8.5 months from a highly polluted (n=34) versus a low polluted city (n=17). The brainstems of nine children with accidental deaths were also examined. Children from the highly polluted environment had significant delays in wave III (t(50)=17.038; p7.501; p<0.0001), consistent with delayed central conduction time of brainstem neural transmission. Highly exposed children showed significant evidence of inflammatory markers and their auditory and vestibular nuclei accumulated α-synuclein and/or β-amyloid(1-42). Medial superior olive neurons, critically involved in BAEPs, displayed significant pathology. Children's exposure to urban air pollution increases their risk for auditory and vestibular impairment. Copyright © 2011 ISDN. Published by Elsevier Ltd. All rights reserved.

  1. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory

    Science.gov (United States)

    Kraus, Nina; Strait, Dana; Parbery-Clark, Alexandra

    2012-01-01

    Musicians benefit from real-life advantages such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians’ auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. PMID:22524346

  2. Left auditory cortex gamma synchronization and auditory hallucination symptoms in schizophrenia

    Directory of Open Access Journals (Sweden)

    Shenton Martha E

    2009-07-01

    Full Text Available Abstract Background Oscillatory electroencephalogram (EEG) abnormalities may reflect neural circuit dysfunction in neuropsychiatric disorders. Previously we have found positive correlations between the phase synchronization of beta and gamma oscillations and hallucination symptoms in schizophrenia patients. These findings suggest that the propensity for hallucinations is associated with an increased tendency for neural circuits in sensory cortex to enter states of oscillatory synchrony. Here we tested this hypothesis by examining whether the 40 Hz auditory steady-state response (ASSR) generated in the left primary auditory cortex is positively correlated with auditory hallucination symptoms in schizophrenia. We also examined whether the 40 Hz ASSR deficit in schizophrenia was associated with cross-frequency interactions. Sixteen healthy control subjects (HC) and 18 chronic schizophrenia patients (SZ) listened to 40 Hz binaural click trains. The EEG was recorded from 60 electrodes and average-referenced offline. A 5-dipole model was fit from the HC grand average ASSR, with 2 pairs of superior temporal dipoles and a deep midline dipole. Time-frequency decomposition was performed on the scalp EEG and source data. Results Phase locking factor (PLF) and evoked power were reduced in SZ at fronto-central electrodes, replicating prior findings. PLF was reduced in SZ for non-homologous right and left hemisphere sources. Left hemisphere source PLF in SZ was positively correlated with auditory hallucination symptoms, and was modulated by delta phase. Furthermore, the correlations between source evoked power and PLF found in HC were reduced in SZ for the LH sources. Conclusion These findings suggest that differential neural circuit abnormalities may be present in the left and right auditory cortices in schizophrenia. In addition, they provide further support for the hypothesis that hallucinations are related to cortical hyperexcitability, which is manifested by
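
    The phase locking factor reported here is the magnitude of the mean unit-length phase vector across trials at the stimulation frequency. The Python sketch below computes it for simulated 40 Hz epochs; the data, trial count and sampling rate are synthetic and purely illustrative.

      import numpy as np

      def phase_locking_factor(epochs, fs, freq):
          # Inter-trial phase consistency at `freq`: magnitude of the mean unit
          # phase vector across trials (1 = perfect phase locking, ~0 = none).
          # `epochs` is (n_trials, n_samples) for a single channel or source.
          n = epochs.shape[1]
          k = np.argmin(np.abs(np.fft.rfftfreq(n, 1 / fs) - freq))
          coeffs = np.fft.rfft(epochs, axis=1)[:, k]
          return np.abs(np.mean(coeffs / np.abs(coeffs)))

      # Example: 40 Hz responses whose phase is consistent across simulated trials
      fs, n_trials = 500, 100
      t = np.arange(0, 1.0, 1 / fs)
      rng = np.random.default_rng(6)
      epochs = np.array([np.sin(2 * np.pi * 40 * t + 0.1 * rng.standard_normal())
                         + rng.standard_normal(len(t)) for _ in range(n_trials)])
      print(phase_locking_factor(epochs, fs, 40.0))    # close to 1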

  3. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  4. Evolutionary conservation and neuronal mechanisms of auditory perceptual restoration.

    Science.gov (United States)

    Petkov, Christopher I; Sutter, Mitchell L

    2011-01-01

    Auditory perceptual 'restoration' occurs when the auditory system restores an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago using it to model a noisy environmental scene with competing sounds. It has become clear that not only humans experience auditory restoration: restoration has been broadly conserved in many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The aggregate of data resulting from multiple approaches across species has begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, supportive of multiple mechanisms working within a species. Yet a general principle has emerged that responses correlated with restoration mimic the response that would have been given to the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of 'auditory scene analysis' to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings. © 2010 Elsevier B.V. All rights reserved.

  5. The impact of visual gaze direction on auditory object tracking.

    Science.gov (United States)

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independently of our visual gaze, e.g., when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
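
    The occipital alpha-band effect described here is typically quantified as band-limited power over left versus right occipital channels; the sketch below is a generic illustration with assumed channel names (O1/O2) and data, not the authors' pipeline.

        # Generic alpha-band (8-12 Hz) lateralization sketch; `eeg` is an assumed
        # dict of channel name -> 1-D signal, `fs` the sampling rate in Hz.
        import numpy as np
        from scipy.signal import welch

        def alpha_power(x, fs, band=(8.0, 12.0)):
            f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
            mask = (f >= band[0]) & (f <= band[1])
            return np.trapz(pxx[mask], f[mask])

        def alpha_lateralization(eeg, fs):
            right, left = alpha_power(eeg["O2"], fs), alpha_power(eeg["O1"], fs)
            return (right - left) / (right + left)  # > 0: more alpha over the right hemisphere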

  6. Intracerebral evidence of rhythm transform in the human auditory cortex.

    Science.gov (United States)

    Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis

    2017-07-01

    Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
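
    The frequency-tagging logic is straightforward: average the response across trials (which cancels activity not phase-locked to the rhythm) and read out the amplitude spectrum at the beat and envelope frequencies. A schematic version with assumed variable names, not the authors' pipeline:

        # Schematic frequency-tagging readout of a trial-averaged neural response.
        import numpy as np

        def tagged_amplitudes(trials, fs, freqs_of_interest):
            """trials: (n_trials, n_samples) array; fs: sampling rate in Hz."""
            evoked = trials.mean(axis=0)                     # keeps rhythm-locked activity
            amp = np.abs(np.fft.rfft(evoked)) / evoked.size  # amplitude spectrum
            freq = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
            return {f: amp[np.argmin(np.abs(freq - f))] for f in freqs_of_interest}

        # A beat-specific "gain" can then be assessed by comparing the neural amplitude
        # at the beat frequency against the amplitude at that frequency in the spectrum
        # of the stimulus envelope itself.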

  7. Auditory Screening in Infants for Early Detection of Permanent ...

    African Journals Online (AJOL)

    and acquired hearing loss in newborns and children can lead to deficiencies and ... neonatal intensive care unit, pre-maturity and birth weight less than 1,500 g. .... outer, middle and inner ear, and lower auditory pathway. These screening ...

  8. Deciphering the Cognitive and Neural Mechanisms Underlying ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Deciphering the Cognitive and Neural Mechanisms Underlying Auditory Learning. This project seeks to understand the brain mechanisms necessary for people to learn to perceive sounds. Neural circuits and learning. The research team will test people with and without musical training to evaluate their capacity to learn ...

  9. Binaural processing by the gecko auditory periphery.

    Science.gov (United States)

    Christensen-Dalsgaard, Jakob; Tang, Yezhong; Carr, Catherine E

    2011-05-01

    Lizards have highly directional ears, owing to strong acoustical coupling of the eardrums and almost perfect sound transmission from the contralateral ear. To investigate the neural processing of this remarkable tympanic directionality, we combined biophysical measurements of eardrum motion in the Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (∼ 1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al. 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions of direct and indirect sound components at the eardrum. Best ITD and click delays match interaural transmission delays, with a range of 200-500 μs. Inserting a mold in the mouth cavity blocks ITD and ILD sensitivity. Thus the neural response accurately reflects tympanic directionality, and most neurons in the auditory pathway should be directional.
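
    The mismatch between the measured delays and head size can be checked with back-of-the-envelope arithmetic; the head width below is an assumed round figure, not a value reported in the study.

        # Rough check (assumed ~3 cm head width, not from the paper): acoustic travel
        # time across the head versus the reported median interaural delay.
        speed_of_sound = 343.0                        # m/s in air
        head_width = 0.03                             # m (assumption)
        predicted_itd = head_width / speed_of_sound   # ~87e-6 s
        measured_itd = 260e-6                         # s, reported median delay
        print(predicted_itd * 1e6, measured_itd / predicted_itd)  # approx. 87 microseconds, approx. 3x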

  10. Implantable Neural Interfaces for Sharks

    Science.gov (United States)

    2007-05-01

    technology for recording and stimulating from the auditory and olfactory sensory nervous systems of the awake, swimming nurse shark, G. cirratum (Figures...overlay of the central nervous system of the nurse shark on a horizontal MR image. Implantable Neural Interfaces for Sharks ...Neural Interfaces for Characterizing Population Responses to Odorants and Electrical Stimuli in the Nurse Shark, Ginglymostoma cirratum." AChemS Abs

  11. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  12. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  13. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  14. Multivariate sensitivity to voice during auditory categorization.

    Science.gov (United States)

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.
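
    Multivariate pattern classification of this kind is commonly implemented as a cross-validated linear classifier over voxel patterns. The sketch below is a generic example with assumed inputs (X: trials x voxels, y: voice/non-voice labels), not the authors' specific pipeline.

        # Generic MVPA sketch: cross-validated decoding of voice vs. non-voice trials.
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        def decoding_accuracy(X, y, n_folds=5):
            clf = make_pipeline(StandardScaler(), LinearSVC())
            scores = cross_val_score(clf, X, y, cv=n_folds)  # chance = 0.5 for two classes
            return scores.mean()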

  15. Listening to another sense: somatosensory integration in the auditory system.

    Science.gov (United States)

    Wu, Calvin; Stefanescu, Roxana A; Martel, David T; Shore, Susan E

    2015-07-01

    Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders.

  16. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  17. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  18. Cross-modal processing in auditory and visual working memory.

    Science.gov (United States)

    Suchan, Boris; Linnewerth, Britta; Köster, Odo; Daum, Irene; Schmid, Gebhard

    2006-02-01

    This study aimed to further explore processing of auditory and visual stimuli in working memory. Smith and Jonides (1997) [Smith, E.E., Jonides, J., 1997. Working memory: A view from neuroimaging. Cogn. Psychol. 33, 5-42] described a modified working memory model in which visual input is automatically transformed into a phonological code. To study this process, auditory and the corresponding visual stimuli were presented in a variant of the 2-back task which involved changes from the auditory to the visual modality and vice versa. Brain activation patterns underlying visual and auditory processing as well as transformation mechanisms were analyzed. Results yielded a significant activation in the left primary auditory cortex associated with transformation of visual into auditory information which reflects the matching and recoding of a stored item and its modality. This finding yields empirical evidence for a transformation of visual input into a phonological code, with the auditory cortex as the neural correlate of the recoding process in working memory.

  19. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  20. Auditory Brain Stem Processing in Reptiles and Amphibians: Roles of Coupled Ears

    DEFF Research Database (Denmark)

    Willis, Katie L.; Christensen-Dalsgaard, Jakob; Carr, Catherine

    2014-01-01

    Comparative approaches to the auditory system have yielded great insight into the evolution of sound localization circuits, particularly within the nonmammalian tetrapods. The fossil record demonstrates multiple appearances of tympanic hearing, and examination of the auditory brain stem of various...... groups can reveal the organizing effects of the ear across taxa. If the peripheral structures have a strongly organizing influence on the neural structures, then homologous neural structures should be observed only in groups with a homologous tympanic ear. Therefore, the central auditory systems...... of anurans (frogs), reptiles (including birds), and mammals should all be more similar within each group than among the groups. Although there is large variation in the peripheral auditory system, there is evidence that auditory brain stem nuclei in tetrapods are homologous and have similar functions among...

  1. Vestibular hearing and neural synchronization.

    Science.gov (United States)

    Emami, Seyede Faranak; Daneshi, Ahmad

    2012-01-01

    Objectives. Vestibular hearing, an auditory sensitivity of the saccule in the human ear, is revealed by cervical vestibular evoked myogenic potentials (cVEMPs). The range of vestibular hearing lies in the low frequencies. Also, the amplitude of an auditory brainstem response component depends on the amount of synchronized neural activity, and the auditory nerve fibers' responses are best synchronized at low frequencies. Thus, the aim of this study was to investigate the correlation between vestibular hearing, assessed with cVEMPs, and neural synchronization, assessed with slow wave Auditory Brainstem Responses (sABR). Study Design. This case-control survey consisted of twenty-two dizzy patients compared to twenty healthy controls. Methods. The assessment comprised Pure Tone Audiometry (PTA), impedance acoustic metry (IA), Videonystagmography (VNG), fast wave ABR (fABR), sABR, and cVEMPs. Results. The affected ears of the dizzy patients had abnormal cVEMP findings (insecure vestibular hearing) and abnormal sABR findings (decreased neural synchronization). Comparison of the cVEMPs of affected ears versus unaffected ears and the normal controls revealed significant differences (P < 0.05). Conclusion. Safe vestibular hearing was effective in improving neural synchronization.

  2. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  3. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    Science.gov (United States)

    Wengenroth, Martina; Blatow, Maria; Bendszus, Martin; Schneider, Peter

    2010-08-23

    Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  4. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  5. An Adaptive Neural Mechanism with a Lizard Ear Model for Binaural Acoustic Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Manoonpong, Poramate

    2016-01-01

    expensive algorithms. We present a novel bioinspired solution to acoustic tracking that uses only two microphones. The system is based on a neural mechanism coupled with a model of the peripheral auditory system of lizards. The peripheral auditory model provides sound direction information which the neural...

  6. Encoding of natural and artificial stimuli in the auditory midbrain

    Science.gov (United States)

    Lyzwa, Dominika

    How complex acoustic stimuli are encoded in the main center of convergence in the auditory midbrain is not clear. Here, the representation of neural spiking responses to natural and artificial sounds across this subcortical structure is investigated based on neurophysiological recordings from the mammalian midbrain. Neural and stimulus correlations of neuronal pairs are analyzed with respect to the neurons' distance, and responses to different natural communication sounds are discriminated. A model which includes linear and nonlinear neural response properties of this nucleus is presented and employed to predict temporal spiking responses to new sounds. Supported by BMBF Grant 01GQ0811.
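
    The combination of linear and nonlinear response properties mentioned here can be illustrated with a textbook linear-nonlinear (LN) model, in which a spectrotemporal filter is applied to the stimulus spectrogram and passed through a static nonlinearity; this is a generic sketch, not the thesis's specific model.

        # Generic linear-nonlinear (LN) firing-rate prediction (sketch only).
        # `spectrogram`: (n_freqs, n_times); `strf`: (n_freqs, n_lags) linear filter.
        import numpy as np

        def ln_prediction(spectrogram, strf, threshold=0.0, gain=1.0):
            n_freqs, n_lags = strf.shape
            n_times = spectrogram.shape[1]
            drive = np.zeros(n_times)
            for t in range(n_lags, n_times):
                drive[t] = np.sum(spectrogram[:, t - n_lags:t] * strf)  # linear stage
            return gain * np.maximum(drive - threshold, 0.0)            # static rectification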

  7. Auditory maturation and congenital hearing loss in NICU infants

    OpenAIRE

    Coenraad, Saskia

    2011-01-01

    textabstractThe number of preterm births has increased over the past decades as a result of increasing maternal age and in vitro fertilization (1). At the same time the survival of preterm infants has increased due to advances in perinatal and neonatal care. For example, antenatal corticosteroids for women with threatened preterm delivery, high-frequency oscillatory ventilation and inhaled nitric oxide have now become standard therapy (1). Unfortunately, these improvements sometimes come at a...

  8. Auditory maturation and congenital hearing loss in NICU infants

    NARCIS (Netherlands)

    S. Coenraad (Saskia)

    2011-01-01

    textabstractThe number of preterm births has increased over the past decades as a result of increasing maternal age and in vitro fertilization (1). At the same time the survival of preterm infants has increased due to advances in perinatal and neonatal care. For example, antenatal corticosteroids

  9. The function of BDNF in the adult auditory system.

    Science.gov (United States)

    Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies

    2014-01-01

    The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low frequency sound to central auditory nuclei and higher-auditory centers. The role of BDNF in the mature inner ear is less understood. This is mainly due to the fact that constitutive BDNF mutant mice are postnatally lethal. Only in the last few years has the improved technology of performing conditional cell specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus will be put on the differential mechanisms in which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Pitch perception prior to cortical maturation

    Science.gov (United States)

    Lau, Bonnie K.

    Pitch perception plays an important role in many complex auditory tasks including speech perception, music perception, and sound source segregation. Because of the protracted and extensive development of the human auditory cortex, pitch perception might be expected to mature, at least over the first few months of life. This dissertation investigates complex pitch perception in 3-month-olds, 7-month-olds and adults -- time points when the organization of the auditory pathway is distinctly different. Using an observer-based psychophysical procedure, a series of four studies were conducted to determine whether infants (1) discriminate the pitch of harmonic complex tones, (2) discriminate the pitch of unresolved harmonics, (3) discriminate the pitch of missing fundamental melodies, and (4) have comparable sensitivity to pitch and spectral changes as adult listeners. The stimuli used in these studies were harmonic complex tones, with energy missing at the fundamental frequency. Infants at both three and seven months of age discriminated the pitch of missing fundamental complexes composed of resolved and unresolved harmonics as well as missing fundamental melodies, demonstrating perception of complex pitch by three months of age. More surprisingly, infants in both age groups had lower pitch and spectral discrimination thresholds than adult listeners. Furthermore, no differences in performance on any of the tasks presented were observed between infants at three and seven months of age. These results suggest that subcortical processing is not only sufficient to support pitch perception prior to cortical maturation, but provides adult-like sensitivity to pitch by three months.

  11. Modulation frequency as a cue for auditory speed perception.

    Science.gov (United States)

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).

  12. Prestimulus subsequent memory effects for auditory and visual events.

    Science.gov (United States)

    Otten, Leun J; Quayle, Angela H; Puvaneswaran, Bhamini

    2010-06-01

    It has been assumed that the effective encoding of information into memory primarily depends on neural activity elicited when an event is initially encountered. Recently, it has been shown that memory formation also relies on neural activity just before an event. The precise role of such activity in memory is currently unknown. Here, we address whether prestimulus activity affects the encoding of auditory and visual events, is set up on a trial-by-trial basis, and varies as a function of the type of recognition judgment an item later receives. Electrical brain activity was recorded from the scalps of 24 healthy young adults while they made semantic judgments on randomly intermixed series of visual and auditory words. Each word was preceded by a cue signaling the modality of the upcoming word. Auditory words were preceded by auditory cues and visual words by visual cues. A recognition memory test with remember/know judgments followed after a delay of about 45 min. As observed previously, a negative-going, frontally distributed modulation just before visual word onset predicted later recollection of the word. Crucially, the same effect was found for auditory words and observed on stay as well as switch trials. These findings emphasize the flexibility and general role of prestimulus activity in memory formation, and support a functional interpretation of the activity in terms of semantic preparation. At least with an unpredictable trial sequence, the activity is set up anew on each trial.

  13. Modeling of Auditory Neuron Response Thresholds with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Frederic Venail

    2015-01-01

    Full Text Available The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position within the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (Nucleus device). The electrical response, measured using the auto-NRT (neural response telemetry) algorithm, has been analyzed using multivariate regression with cubic splines in order to take into account the variations of insertion depth of electrodes amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = −0.11 ± 0.02, P<0.01), the scalar placement of the electrodes (β = −8.50 ± 1.97, P<0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). Distribution of NRT residues according to CNF could provide a proxy of auditory neuron functioning in implanted cochleas.
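
    The regression described here (NRT threshold as a function of squared impedance, scalar placement, and a cubic-spline term over characteristic frequency) can be approximated with a standard formula interface; the column names below are assumptions for illustration only.

        # Sketch of a multivariate regression with a cubic regression spline over
        # characteristic frequency (CNF). Column names are assumed; `data` is a
        # pandas DataFrame with one row per electrode.
        import statsmodels.formula.api as smf

        def fit_nrt_model(data):
            formula = "nrt_threshold ~ I(impedance ** 2) + scalar_placement + cr(cnf, df=4)"
            model = smf.ols(formula, data=data).fit()
            return model  # model.params holds the fitted coefficients (betas)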

  14. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  15. Hearing loss impacts neural alpha oscillations under adverse listening conditions

    OpenAIRE

    Petersen, Eline B.; Wöstmann, Malte; Obleser, Jonas; Stenfelt, Stefan; Lunner, Thomas

    2015-01-01

    Degradations in external, acoustic stimulation have long been suspected to increase the load on working memory (WM). One neural signature of WM load is enhanced power of alpha oscillations (6–12 Hz). However, it is unknown to what extent common internal, auditory degradation, that is, hearing impairment, affects the neural mechanisms of WM when audibility has been ensured via amplification. Using an adapted auditory Sternberg paradigm, we varied the orthogonal factors memory load and backgrou...

  16. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.

  17. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine if particular sound types are more favorable for memory performance. Experiment 1 suggests that memory performance with vocalization sound types (particularly monkey vocalizations) is significantly better than with non-vocalization sound types, and male monkeys outperform female monkeys overall. Experiment 2, controlling for number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, to species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds is more effective for auditory memory. 2009 Elsevier B.V.

  18. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in EEG data using MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
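
    The interaural cross-correlation representation central to this model is a concrete computation: the normalized cross-correlation of the two ear-input signals over lags of roughly plus or minus 1 ms. A minimal sketch with assumed inputs:

        # Minimal interaural cross-correlation function (IACF) sketch.
        # `left`, `right`: equal-length ear-input signals; `fs`: sampling rate (Hz).
        import numpy as np

        def iacf(left, right, fs, max_lag_ms=1.0):
            max_lag = int(fs * max_lag_ms / 1000.0)
            norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
            lags = np.arange(-max_lag, max_lag + 1)
            vals = [np.sum(left[max(0, -k):len(left) - max(0, k)] *
                           right[max(0, k):len(right) - max(0, -k)]) / norm for k in lags]
            return lags / fs, np.array(vals)  # lag times (s) and correlation values

        # The IACC is the maximum of this function; the lag at which it occurs is
        # related to the perceived lateral direction of the source.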

  20. Modularity in Sensory Auditory Memory

    OpenAIRE

    Clement, Sylvain; Moroni, Christine; Samson, Séverine

    2004-01-01

    The goal of this paper was to review various experimental and neuropsychological studies that support the modular conception of auditory sensory memory or auditory short-term memory. Based on initial findings demonstrating that the verbal sensory memory system can be dissociated from a general auditory memory store at the functional and anatomical levels, we report a series of studies that provided evidence in favor of multiple auditory sensory stores specialized in retaining eit...

  1. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    Science.gov (United States)

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  2. Synaptic inputs compete during rapid formation of the calyx of Held: a new model system for neural development.

    Science.gov (United States)

    Holcomb, Paul S; Hoffpauir, Brian K; Hoyson, Mitchell C; Jackson, Dakota R; Deerinck, Thomas J; Marrs, Glenn S; Dehoff, Marlin; Wu, Jonathan; Ellisman, Mark H; Spirou, George A

    2013-08-07

    Hallmark features of neural circuit development include early exuberant innervation followed by competition and pruning to mature innervation topography. Several neural systems, including the neuromuscular junction and climbing fiber innervation of Purkinje cells, are models to study neural development in part because they establish a recognizable endpoint of monoinnervation of their targets and because the presynaptic terminals are large and easily monitored. We demonstrate here that calyx of Held (CH) innervation of its target, which forms a key element of auditory brainstem binaural circuitry, exhibits all of these characteristics. To investigate CH development, we made the first application of serial block-face scanning electron microscopy to neural development with fine temporal resolution and thereby accomplished the first time series for 3D ultrastructural analysis of neural circuit formation. This approach revealed a growth spurt of added apposed surface area (ASA) > 200 μm²/d centered on a single age at postnatal day 3 in mice and an initial rapid phase of growth and competition that resolved to monoinnervation in two-thirds of cells within 3 d. This rapid growth occurred in parallel with an increase in action potential threshold, which may mediate selection of the strongest input as the winning competitor. ASAs of competing inputs were segregated on the cell body surface. These data suggest mechanisms to select "winning" inputs by regional reinforcement of postsynaptic membrane to mediate size and strength of competing synaptic inputs.

  3. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    Science.gov (United States)

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

    The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information is integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.

  4. Auditory Memory for Timbre

    Science.gov (United States)

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  5. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new

  6. Maturity and maturity models in lean construction

    Directory of Open Access Journals (Sweden)

    Claus Nesensohn

    2014-03-01

    Full Text Available In recent years there has been an increasing interest in maturity models in management-related disciplines, which reflects a growing recognition that becoming more mature, and having a model to guide the route to maturity, can help organisations manage major transformational change. Lean Construction (LC) is an increasingly important improvement approach that organisations seek to embed. This study explores how to apply maturity models to LC. Hence the attitudes, opinions and experiences of key industry informants with high levels of knowledge of LC were investigated. To achieve this, a review of maturity models was conducted, and data for the analysis were collected through a sequential process involving three methods: first, a group interview with seven key informants; second, a follow-up discussion with the same individuals to investigate some of the issues raised in more depth; and third, an online discussion held via LinkedIn in which members shared their views on some of the results. Overall, we found that there is a lack of common understanding as to what maturity means in LC, though there is general agreement that the concept of maturity is a suitable one to reflect the path of evolution for LC within organisations.

  7. Slab replacement maturity guidelines.

    Science.gov (United States)

    2014-04-01

    This study investigated the use of the maturity method to determine early age strength of concrete in slab replacement applications. Specific objectives were (1) to evaluate effects of various factors on the compressive maturity-strength relationship ...
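
    The maturity method rests on a simple time-temperature index; the classic Nurse-Saul form (the basis of ASTM C1074) is shown below as a sketch, since the report summary does not state which maturity function was used.

        # Nurse-Saul maturity index, M = sum((T - T0) * dt), a common basis for the
        # maturity method. This is the standard textbook form, not necessarily the
        # exact function used in the report.
        def nurse_saul_maturity(temps_c, dt_hours, datum_c=0.0):
            """temps_c: concrete temperatures (deg C) per interval; dt_hours: interval length (h)."""
            return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

        # Early-age in-place strength is then estimated from a strength-vs-maturity
        # curve calibrated in the laboratory for the same concrete mix.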

  8. Defining Auditory-Visual Objects: Behavioral Tests and Physiological Mechanisms.

    Science.gov (United States)

    Bizley, Jennifer K; Maddox, Ross K; Lee, Adrian K C

    2016-02-01

    Crossmodal integration is a term applicable to many phenomena in which one sensory modality influences task performance or perception in another sensory modality. We distinguish the term binding as one that should be reserved specifically for the process that underpins perceptual object formation. To unambiguously differentiate binding from other types of integration, behavioral and neural studies must investigate perception of a feature orthogonal to the features that link the auditory and visual stimuli. We argue that supporting true perceptual binding (as opposed to other processes such as decision-making) is one role for cross-sensory influences in early sensory cortex. These early multisensory interactions may therefore form a physiological substrate for the bottom-up grouping of auditory and visual stimuli into auditory-visual (AV) objects. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Temporal integration of sequential auditory events: silent period in sound pattern activates human planum temporale.

    Science.gov (United States)

    Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich

    2003-09-01

    Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

  10. Development of auditory sensory memory from 2 to 6 years: an MMN study.

    Science.gov (United States)

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-08-01

    Short-term storage of auditory information is thought to be a precondition for cognitive development, and deficits in short-term memory are believed to underlie learning disabilities and specific language disorders. We examined the development of the duration of auditory sensory memory in normally developing children between the ages of 2 and 6 years. To probe the lifetime of auditory sensory memory we elicited the mismatch negativity (MMN), a component of the late auditory evoked potential, with tone stimuli of two different frequencies presented with various interstimulus intervals between 500 and 5,000 ms. Our findings suggest that memory traces for tone characteristics have a duration of 1-2 s in 2- and 3-year-old children, more than 2 s in 4-year-olds and 3-5 s in 6-year-olds. The results provide insights into the maturational processes involved in auditory sensory memory during the sensitive period of cognitive development.
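
    A minimal sketch of the kind of passive oddball block described in the record, in which two tone frequencies are presented at a fixed interstimulus interval per block to probe memory-trace duration. The tone frequencies, deviant probability, and trial counts below are illustrative assumptions, since the record does not give them.

        import random

        def oddball_sequence(n_trials, isi_ms, std_hz=1000, dev_hz=1200, p_dev=0.15, seed=0):
            """Return a list of (frequency_hz, isi_ms) tuples for a passive oddball block.
            Standards and deviants differ only in frequency; the ISI is fixed per block."""
            rng = random.Random(seed)
            seq = []
            for _ in range(n_trials):
                is_deviant = rng.random() < p_dev
                seq.append((dev_hz if is_deviant else std_hz, isi_ms))
            return seq

        # One block per memory-probe interval (ISI values from the record: 500-5000 ms).
        for isi in (500, 1000, 2000, 5000):
            block = oddball_sequence(n_trials=200, isi_ms=isi)
            n_dev = sum(1 for f, _ in block if f != 1000)
            print(f"ISI {isi} ms: {len(block)} trials, {n_dev} deviants")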

  11. Can you hear me now? Musical training shapes functional brain networks for selective auditory attention and hearing speech in noise

    Directory of Open Access Journals (Sweden)

    Dana L Strait

    2011-06-01

    Full Text Available Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker’s voice amidst others. Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not nonmusicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.

  12. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and rates of sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), and greater suppression at 20 ms pairing-intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience

  13. Opposite patterns of hemisphere dominance for early auditory processing of lexical tones and consonants

    OpenAIRE

    Luo, Hao; Ni, Jing-Tian; Li, Zhi-Hao; Li, Xiao-Ou; Zhang, Da-Ren; Zeng, Fan-Gang; Chen, Lin

    2006-01-01

    In tonal languages such as Mandarin Chinese, a lexical tone carries semantic information and is preferentially processed in the left brain hemisphere of native speakers as revealed by functional MRI or positron emission tomography studies, which likely measure the temporally aggregated neural events including those at an attentive stage of auditory processing. Here, we demonstrate that early auditory processing of a lexical tone at a preattentive stage is actually ...

  14. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V […]) were found; an audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  15. Memory for sound, with an ear toward hearing in complex auditory scenes.

    Science.gov (United States)

    Snyder, Joel S; Gregg, Melissa K

    2011-10-01

    An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.

  16. Inter-trial coherence as a marker of cortical phase synchrony in children with sensorineural hearing loss and auditory neuropathy spectrum disorder fitted with hearing aids and cochlear implants

    Science.gov (United States)

    Nash-Kille, Amy; Sharma, Anu

    2014-01-01

    Objective: Although brainstem dys-synchrony is a hallmark of children with auditory neuropathy spectrum disorder (ANSD), little is known about how the lack of neural synchrony manifests at more central levels. We used time-frequency single-trial EEG analyses (i.e., inter-trial coherence; ITC) to examine cortical phase synchrony in children with normal hearing (NH), sensorineural hearing loss (SNHL) and ANSD. Methods: Single trial time-frequency analyses were performed on cortical auditory evoked responses from 41 NH children, 91 children with ANSD and 50 children with SNHL. The latter two groups included children who received intervention via hearing aids and cochlear implants. ITC measures were compared between groups as a function of hearing loss, intervention type, and cortical maturational status. Results: In children with SNHL, ITC decreased as severity of hearing loss increased. Children with ANSD revealed lower levels of ITC relative to children with NH or SNHL, regardless of intervention. Children with ANSD who received cochlear implants showed significant improvements in ITC with increasing experience with their implants. Conclusions: Cortical phase coherence is significantly reduced as a result of both severe-to-profound SNHL and ANSD. Significance: ITC provides a window into the brain oscillations underlying the averaged cortical auditory evoked response. Our results provide a first description of deficits in cortical phase synchrony in children with SNHL and ANSD. PMID:24360131
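
    Inter-trial coherence, as described in the record, is the across-trial consistency of oscillatory phase at each time point. A minimal sketch, assuming band-pass filtering plus the Hilbert transform as the time-frequency step (the study's exact decomposition and frequency bands may differ), applied here to simulated single-trial EEG:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def inter_trial_coherence(trials, fs, band=(4.0, 8.0)):
            """ITC per time point: magnitude of the mean unit phase vector across trials.
            trials : array (n_trials, n_samples) of single-trial EEG.
            Returns values in [0, 1]; 1 = perfectly phase-locked across trials."""
            b, a = butter(4, np.array(band) / (fs / 2.0), btype="band")
            filtered = filtfilt(b, a, trials, axis=1)
            phase = np.angle(hilbert(filtered, axis=1))
            return np.abs(np.mean(np.exp(1j * phase), axis=0))

        # Simulated data: a phase-locked 6-Hz response embedded in noise.
        fs, n_trials, n_samp = 250, 60, 250
        t = np.arange(n_samp) / fs
        rng = np.random.default_rng(0)
        trials = 0.5 * np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1.0, (n_trials, n_samp))
        itc = inter_trial_coherence(trials, fs)
        print(f"mean ITC in the theta band: {itc.mean():.2f}")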

  17. The Auditory Enhancement Effect is Not Reflected in the 80-Hz Auditory Steady-State Response

    OpenAIRE

    Carcagno, Samuele; Plack, Christopher J.; Portron, Arthur; Semal, Catherine; Demany, Laurent

    2014-01-01

    The perceptual salience of a target tone presented in a multitone background is increased by the presentation of a precursor sound consisting of the multitone background alone. It has been proposed that this “enhancement” phenomenon results from an effective amplification of the neural response to the target tone. In this study, we tested this hypothesis in humans, by comparing the auditory steady-state response (ASSR) to a target tone that was enhanced by a precursor sound with the ASSR to a...

  18. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise.

    Science.gov (United States)

    White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina

    2015-10-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions-children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features-even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response…
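
    One way to express the "stability across trials" mentioned in the record is the correlation between the averages of two random halves of the trials. The sketch below is a generic illustration of that split-half approach on simulated responses, not the specific consistency metric used in the study.

        import numpy as np

        def response_stability(trials, seed=0, n_splits=100):
            """Trial-to-trial stability: average Pearson correlation between the mean
            responses of two random halves of the trials (repeated n_splits times).
            trials : array (n_trials, n_samples) of single-trial responses."""
            rng = np.random.default_rng(seed)
            n = trials.shape[0]
            rs = []
            for _ in range(n_splits):
                order = rng.permutation(n)
                a = trials[order[: n // 2]].mean(axis=0)
                b = trials[order[n // 2:]].mean(axis=0)
                rs.append(np.corrcoef(a, b)[0, 1])
            return float(np.mean(rs))

        # Illustration: the same evoked waveform with low vs high trial-by-trial noise,
        # mimicking the quiet vs background-noise conditions described in the record.
        fs, n_samp, n_trials = 2000, 400, 300
        t = np.arange(n_samp) / fs
        wave = np.sin(2 * np.pi * 100 * t) * np.exp(-t * 20)
        rng = np.random.default_rng(1)
        quiet = wave + rng.normal(0, 0.5, (n_trials, n_samp))
        noisy = wave + rng.normal(0, 2.0, (n_trials, n_samp))
        print(f"stability in quiet: {response_stability(quiet):.2f}")
        print(f"stability in noise: {response_stability(noisy):.2f}")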

  19. Delayed Mismatch Field Latencies in Autism Spectrum Disorder with Abnormal Auditory Sensitivity: A Magnetoencephalographic Study.

    Science.gov (United States)

    Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Sugata, Hisato; Hanaie, Ryuzo; Nagatani, Fumiyo; Yamamoto, Tomoka; Tachibana, Masaya; Tominaga, Koji; Hirata, Masayuki; Mohri, Ikuko; Taniike, Masako

    2017-01-01

    Although abnormal auditory sensitivity is the most common sensory impairment associated with autism spectrum disorder (ASD), the neurophysiological mechanisms remain unknown. In previous studies, we reported that this abnormal sensitivity in patients with ASD is associated with delayed and prolonged responses in the auditory cortex. In the present study, we investigated alterations in residual M100 and MMFs in children with ASD who experience abnormal auditory sensitivity. We used magnetoencephalography (MEG) to measure MMF elicited by an auditory oddball paradigm (standard tones: 300 Hz, deviant tones: 700 Hz) in 20 boys with ASD (11 with abnormal auditory sensitivity: mean age, 9.62 ± 1.82 years, 9 without: mean age, 9.07 ± 1.31 years) and 13 typically developing boys (mean age, 9.45 ± 1.51 years). We found that temporal and frontal residual M100/MMF latencies were significantly longer only in children with ASD who have abnormal auditory sensitivity. In addition, prolonged residual M100/MMF latencies were correlated with the severity of abnormal auditory sensitivity in temporal and frontal areas of both hemispheres. Therefore, our findings suggest that children with ASD and abnormal auditory sensitivity may have atypical neural networks in the primary auditory area, as well as in brain areas associated with attention switching and inhibitory control processing. This is the first report of an MEG study demonstrating altered MMFs to an auditory oddball paradigm in patients with ASD and abnormal auditory sensitivity. These findings contribute to knowledge of the mechanisms for abnormal auditory sensitivity in ASD, and may therefore facilitate development of novel clinical interventions.

  20. Delayed Mismatch Field Latencies in Autism Spectrum Disorder with Abnormal Auditory Sensitivity: A Magnetoencephalographic Study

    Directory of Open Access Journals (Sweden)

    Junko Matsuzaki

    2017-09-01

    Full Text Available Although abnormal auditory sensitivity is the most common sensory impairment associated with autism spectrum disorder (ASD), the neurophysiological mechanisms remain unknown. In previous studies, we reported that this abnormal sensitivity in patients with ASD is associated with delayed and prolonged responses in the auditory cortex. In the present study, we investigated alterations in residual M100 and MMFs in children with ASD who experience abnormal auditory sensitivity. We used magnetoencephalography (MEG) to measure MMF elicited by an auditory oddball paradigm (standard tones: 300 Hz, deviant tones: 700 Hz) in 20 boys with ASD (11 with abnormal auditory sensitivity: mean age, 9.62 ± 1.82 years, 9 without: mean age, 9.07 ± 1.31 years) and 13 typically developing boys (mean age, 9.45 ± 1.51 years). We found that temporal and frontal residual M100/MMF latencies were significantly longer only in children with ASD who have abnormal auditory sensitivity. In addition, prolonged residual M100/MMF latencies were correlated with the severity of abnormal auditory sensitivity in temporal and frontal areas of both hemispheres. Therefore, our findings suggest that children with ASD and abnormal auditory sensitivity may have atypical neural networks in the primary auditory area, as well as in brain areas associated with attention switching and inhibitory control processing. This is the first report of an MEG study demonstrating altered MMFs to an auditory oddball paradigm in patients with ASD and abnormal auditory sensitivity. These findings contribute to knowledge of the mechanisms for abnormal auditory sensitivity in ASD, and may therefore facilitate development of novel clinical interventions.

  1. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  2. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    Science.gov (United States)

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. Neurofeedback (NFB) using an auditory stimulus as a reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups had signs consistent with EEG maturation, but only the Auditory Group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved the cognitive abilities more than the visual reinforcer.
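
    A minimal sketch of the quantity driving the feedback loop, the theta/alpha power ratio, computed here with Welch's method on a simulated EEG epoch; the band edges and reward threshold are illustrative assumptions rather than the published protocol.

        import numpy as np
        from scipy.signal import welch

        def band_power(freqs, psd, lo, hi):
            """Mean PSD within [lo, hi) Hz."""
            mask = (freqs >= lo) & (freqs < hi)
            return psd[mask].mean()

        def theta_alpha_ratio(eeg, fs, theta=(4.0, 8.0), alpha=(8.0, 12.0)):
            """Theta/alpha power ratio for one EEG epoch (Welch PSD).
            Band edges are common textbook values; the cited protocol may differ."""
            freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
            return band_power(freqs, psd, *theta) / band_power(freqs, psd, *alpha)

        # In the NFB loop, the reinforcer (tone or white square) would be delivered
        # whenever the ratio for the current epoch falls below a personalized threshold.
        fs = 256
        rng = np.random.default_rng(0)
        t = np.arange(4 * fs) / fs
        epoch = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.2, t.size)
        ratio = theta_alpha_ratio(epoch, fs)
        reward = ratio < 2.0   # hypothetical threshold
        print(f"theta/alpha ratio: {ratio:.2f}, reward: {reward}")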

  3. Association between language development and auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha-Muniz

    2014-06-01

    Full Text Available INTRODUCTION: It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. AIM: To investigate the performance and lateralization effects on auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). MATERIAL AND METHODS: Prospective study. Seventy-five children, aged 6-12 years, were separated in three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All went through the following tests: speech-in-noise test, Dichotic Digit test and Pitch Pattern Sequencing test. RESULTS: The effects of lateralization were observed only in the SLI group, with the left ear presenting much lower scores than those presented to the right ear. The inter-group analysis has shown that in all tests children from APD and SLI groups had significantly poorer performance compared to TD group. Moreover, SLI group presented worse results than APD group. CONCLUSION: This study has shown, in children with SLI, an inefficient processing of essential sound components and an effect of lateralization. These findings may indicate that neural processes (required for auditory processing) are different between auditory processing and speech disorders.

  4. Insult-induced adaptive plasticity of the auditory system

    Directory of Open Access Journals (Sweden)

    Joshua R Gold

    2014-05-01

    Full Text Available The brain displays a remarkable capacity for both widespread and region-specific modifications in response to environmental challenges, with adaptive processes bringing about the reweighting of connections in neural networks putatively required for optimising performance and behaviour. As an avenue for investigation, studies centred around changes in the mammalian auditory system, extending from the brainstem to the cortex, have revealed a plethora of mechanisms that operate in the context of sensory disruption after insult, be it lesion-, noise trauma, drug-, or age-related. Of particular interest in recent work are those aspects of auditory processing which, after sensory disruption, change at multiple – if not all – levels of the auditory hierarchy. These include changes in excitatory, inhibitory and neuromodulatory networks, consistent with theories of homeostatic plasticity; functional alterations in gene expression and in protein levels; as well as broader network processing effects with cognitive and behavioural implications. Nevertheless, there abounds substantial debate regarding which of these processes may only be sequelae of the original insult, and which may, in fact, be maladaptively compelling further degradation of the organism’s competence to cope with its disrupted sensory context. In this review, we aim to examine how the mammalian auditory system responds in the wake of particular insults, and to disambiguate how the changes that develop might underlie a correlated class of phantom disorders, including tinnitus and hyperacusis, which putatively are brought about through maladaptive neuroplastic disruptions to auditory networks governing the spatial and temporal processing of acoustic sensory information.

  5. Auditory interfaces: The human perceiver

    Science.gov (United States)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  6. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant…
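
    Frequency discrimination thresholds (DLF) of the kind compared here are typically measured with an adaptive staircase. The sketch below implements a standard 2-down/1-up rule converging on the ~70.7%-correct point (Levitt, 1971); the simulated listener and starting values are hypothetical, and the record does not state which adaptive procedure was actually used.

        import random

        def simulated_listener(delta_hz, true_dlf_hz, rng):
            """Hypothetical listener: more likely correct for larger frequency differences."""
            p_correct = 0.5 + 0.5 * min(delta_hz / (2.0 * true_dlf_hz), 1.0)
            return rng.random() < p_correct

        def two_down_one_up(start_delta_hz=20.0, step=1.5, n_reversals=8, true_dlf_hz=5.0, seed=0):
            """2-down/1-up staircase for a DLF task; tracks the ~70.7%-correct point."""
            rng = random.Random(seed)
            delta, streak, last_dir, reversals = start_delta_hz, 0, None, []
            while len(reversals) < n_reversals:
                if simulated_listener(delta, true_dlf_hz, rng):
                    streak += 1
                    if streak == 2:                      # two correct in a row -> harder
                        if last_dir == "up":
                            reversals.append(delta)      # direction change = reversal
                        delta, streak, last_dir = delta / step, 0, "down"
                else:                                    # one wrong -> easier
                    if last_dir == "down":
                        reversals.append(delta)
                    delta, streak, last_dir = delta * step, 0, "up"
            return sum(reversals[2:]) / len(reversals[2:])   # average the later reversals

        print(f"estimated DLF: {two_down_one_up():.1f} Hz")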

  7. ORGANIZATIONAL PROJECT MANAGEMENT MATURITY

    Directory of Open Access Journals (Sweden)

    Yana Derenskaya

    2017-11-01

    Full Text Available The present article is aimed at developing a set of recommendations for achieving a higher level of organizational project maturity at a given enterprise. Methodology. For the purposes of the current research, the available information sources on the components of project management system are analysed; the essence of “organizational maturity” and the existing models of organizational maturity are studied. The method of systemic and structural analysis, as well as the method of logical generalization, are employed in order to study the existing models of organizational maturity, to describe levels of organizational maturity, and finally to develop a set of methodological recommendations for achieving a higher level of organizational project maturity at a given enterprise. The results of the research showed that the core elements of project management system are methodological, organizational, program-technical, and motivational components. Project management encompasses a wide range of issues connected with organizational structure, project team, communication management, project participants, etc. However, the fundamental basis for developing project management concept within a given enterprise starts with defining its level of organizational maturity. The present paper describes various models of organizational maturity (staged, continuous, petal-shaped) and their common types (H. Kerzner Organizational Maturity Model, Berkeley PM Maturity Model, Organizational Project Management Maturity Model, Portfolio, Program & Project Management Maturity Model). The analysis of available theoretic works showed that the notion “organizational project maturity” refers to the capability of an enterprise to select projects and manage them with the intention of achieving its strategic goals in the most effective way. Importantly, the level of maturity can be improved by means of formalizing the acquired knowledge, regulating project-related activities…

  8. Development of a wireless system for auditory neuroscience.

    Science.gov (United States)

    Lukes, A J; Lear, A T; Snider, R K

    2001-01-01

    In order to study how the auditory cortex extracts communication sounds in a realistic acoustic environment, a wireless system is being developed that will transmit acoustic as well as neural signals. The miniature transmitter will be capable of transmitting two acoustic signals with 37.5 kHz bandwidths (75 kHz sample rate) and 56 neural signals with bandwidths of 9.375 kHz (18.75 kHz sample rate). These signals will be time-division multiplexed into one high bandwidth signal with a 1.2 MHz sample rate. This high bandwidth signal will then be frequency modulated onto a 2.4 GHz carrier, which resides in the industrial, scientific, and medical (ISM) band that is designed for low-power short-range wireless applications. On the receiver side, the signal will be demodulated from the 2.4 GHz carrier and then digitized by an analog-to-digital (A/D) converter. The acoustic and neural signals will be digitally demultiplexed from the multiplexed signal into their respective channels. Oversampling (20 MHz) will allow the reconstruction of the multiplexing clock by a digital signal processor (DSP) that will perform frame and bit synchronization. A frame is a subset of the signal that contains all the channels; several channels tied high and low signal the start of a frame. This technological development will bring two benefits to auditory neuroscience. It will allow simultaneous recording of many neurons that will permit studies of population codes. It will also allow neural functions to be determined in higher auditory areas by correlating neural and acoustic signals without a priori knowledge of the necessary stimuli.
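
    The channel plan in the record fixes the frame arithmetic: with one frame per neural sample period, each frame must carry four samples from each of the two acoustic channels plus one sample from each of the 56 neural channels, giving 64 slots and the stated 1.2 MHz aggregate rate. A minimal sketch follows; the slot ordering and frame-marker details are assumptions, not the published design.

        import numpy as np

        # Channel plan from the record: 2 acoustic channels at 75 kHz, 56 neural at 18.75 kHz.
        FRAME_RATE_HZ = 18_750
        ACOUSTIC_CH, ACOUSTIC_SAMPLES_PER_FRAME = 2, 75_000 // 18_750            # 2 channels x 4 samples
        NEURAL_CH = 56
        SLOTS_PER_FRAME = ACOUSTIC_CH * ACOUSTIC_SAMPLES_PER_FRAME + NEURAL_CH   # 64 slots
        AGGREGATE_RATE_HZ = SLOTS_PER_FRAME * FRAME_RATE_HZ                      # 1,200,000 samples/s

        def build_frame(acoustic_block, neural_block):
            """Interleave one frame; slot order (acoustic first, then neural) is an assumption."""
            assert acoustic_block.shape == (ACOUSTIC_CH, ACOUSTIC_SAMPLES_PER_FRAME)
            assert neural_block.shape == (NEURAL_CH,)
            return np.concatenate([acoustic_block.reshape(-1), neural_block])

        rng = np.random.default_rng(0)
        frame = build_frame(rng.normal(size=(2, 4)), rng.normal(size=56))
        print(frame.size, AGGREGATE_RATE_HZ)   # 64 slots per frame, 1.2 MHz aggregate rate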

  9. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    Science.gov (United States)

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  11. Auditory N1 reveals planning and monitoring processes during music performance.

    Science.gov (United States)

    Mathias, Brian; Gehring, William J; Palmer, Caroline

    2017-02-01

    The current study investigated the relationship between planning processes and feedback monitoring during music performance, a complex task in which performers prepare upcoming events while monitoring their sensory outcomes. Theories of action planning in auditory-motor production tasks propose that the planning of future events co-occurs with the perception of auditory feedback. This study investigated the neural correlates of planning and feedback monitoring by manipulating the contents of auditory feedback during music performance. Pianists memorized and performed melodies at a cued tempo in a synchronization-continuation task while the EEG was recorded. During performance, auditory feedback associated with single melody tones was occasionally substituted with tones corresponding to future (next), present (current), or past (previous) melody tones. Only future-oriented altered feedback disrupted behavior: Future-oriented feedback caused pianists to slow down on the subsequent tone more than past-oriented feedback, and amplitudes of the auditory N1 potential elicited by the tone immediately following the altered feedback were larger for future-oriented than for past-oriented or noncontextual (unrelated) altered feedback; larger N1 amplitudes were associated with greater slowing following altered feedback in the future condition only. Feedback-related negativities were elicited in all altered feedback conditions. In sum, behavioral and neural evidence suggests that future-oriented feedback disrupts performance more than past-oriented feedback, consistent with planning theories that posit similarity-based interference between feedback and planning contents. Neural sensory processing of auditory feedback, reflected in the N1 ERP, may serve as a marker for temporal disruption caused by altered auditory feedback in auditory-motor production tasks. © 2016 Society for Psychophysiological Research.

  12. Salicylate-Induced Auditory Perceptual Disorders and Plastic Changes in Nonclassical Auditory Centers in Rats

    Directory of Open Access Journals (Sweden)

    Guang-Di Chen

    2014-01-01

    Full Text Available Previous studies have shown that sodium salicylate (SS) activates not only central auditory structures, but also nonauditory regions associated with emotion and memory. To identify electrophysiological changes in the nonauditory regions, we recorded sound-evoked local field potentials and multiunit discharges from the striatum, amygdala, hippocampus, and cingulate cortex after SS-treatment. The SS-treatment produced behavioral evidence of tinnitus and hyperacusis. Physiologically, the treatment significantly enhanced sound-evoked neural activity in the striatum, amygdala, and hippocampus, but not in the cingulate. The enhanced sound-evoked response could be linked to the hyperacusis-like behavior. Further analysis showed that the enhancement of sound-evoked activity occurred predominantly at the midfrequencies, likely reflecting shifts of neurons towards the midfrequency range after SS-treatment as observed in our previous studies in the auditory cortex and amygdala. The increased number of midfrequency neurons would lead to a relatively higher number of total spontaneous discharges in the midfrequency region, even though the mean discharge rate of each neuron may not increase. The tonotopical overactivity in the midfrequency region in quiet may potentially lead to tonal sensation of midfrequency (the tinnitus). The neural changes in the amygdala and hippocampus may also contribute to the negative effect that patients associate with their tinnitus.

  13. Auditory Reserve and the Legacy of Auditory Experience

    Directory of Open Access Journals (Sweden)

    Erika Skoe

    2014-11-01

    Full Text Available Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function.

  14. Motion processing after sight restoration: No competition between visual recovery and auditory compensation.

    Science.gov (United States)

    Bottari, Davide; Kekunnaya, Ramesh; Hense, Marlene; Troje, Nikolaus F; Sourav, Suddha; Röder, Brigitte

    2018-02-15

    … contrast, beta oscillatory activity in the auditory task, which varied as a function of SNR in all groups, was overall enhanced in congenital cataract reversal individuals. These results suggest that intramodal plasticity elicited by a transient phase of blindness was maintained and might mediate the prevailing auditory processing advantages in congenital cataract reversal individuals. By contrast, auditory and visual motion processing do not seem to compete for the same neural resources. We speculate that incomplete visual recovery is due to impaired neural network tuning, which seems to depend on early visual input. The present results demonstrate a privilege of the first-arriving input for shaping neural circuits mediating both auditory and visual functions. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Auditory changes in acromegaly.

    Science.gov (United States)

    Tabur, S; Korkmaz, H; Baysal, E; Hatipoglu, E; Aytac, I; Akarsu, E

    2017-06-01

    The aim of this study is to determine the changes involving the auditory system in cases with acromegaly. Otological examinations of 41 cases with acromegaly (uncontrolled n = 22, controlled n = 19) were compared with those of 24 age- and gender-matched healthy subjects. Whereas the cases with acromegaly underwent examination with pure tone audiometry (PTA), speech audiometry for speech discrimination (SD), tympanometry, stapedius reflex evaluation and otoacoustic emission tests, the control group only had otological examination and PTA. Additionally, previously performed paranasal sinus-computed tomography of all cases with acromegaly and control subjects were obtained to measure the length of internal acoustic canal (IAC). PTA values were higher (p […]), and the IAC in the acromegaly group was narrower compared to that in the control group (p = 0.03 for right ears and p = 0.02 for left ears). When only cases with acromegaly were taken into consideration, PTA values in left ears had positive correlation with growth hormone and insulin-like growth factor-1 levels (r = 0.4, p = 0.02 and r = 0.3, p = 0.03). Of all cases with acromegaly, 13 (32%) had hearing loss in at least one ear, 7 (54%) had sensorineural type and 6 (46%) had conductive type hearing loss. Acromegaly may cause certain changes in the auditory system. The changes in the auditory system may be multifactorial, causing both conductive and sensorineural defects.

  16. The auditory enhancement effect is not reflected in the 80-Hz auditory steady-state response.

    Science.gov (United States)

    Carcagno, Samuele; Plack, Christopher J; Portron, Arthur; Semal, Catherine; Demany, Laurent

    2014-08-01

    The perceptual salience of a target tone presented in a multitone background is increased by the presentation of a precursor sound consisting of the multitone background alone. It has been proposed that this "enhancement" phenomenon results from an effective amplification of the neural response to the target tone. In this study, we tested this hypothesis in humans, by comparing the auditory steady-state response (ASSR) to a target tone that was enhanced by a precursor sound with the ASSR to a target tone that was not enhanced. In order to record neural responses originating in the brainstem, the ASSR was elicited by amplitude modulating the target tone at a frequency close to 80 Hz. The results did not show evidence of an amplified neural response to enhanced tones. In a control condition, we measured the ASSR to a target tone that, instead of being perceptually enhanced by a precursor sound, was acoustically increased in level. This level increase matched the magnitude of enhancement estimated psychophysically with a forward masking paradigm in a previous experimental phase. We found that the ASSR to the tone acoustically increased in level was significantly greater than the ASSR to the tone enhanced by the precursor sound. Overall, our results suggest that the enhancement effect cannot be explained by an amplified neural response at the level of the brainstem. However, an alternative possibility is that brainstem neurons with enhanced responses do not contribute to the scalp-recorded ASSR.
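
    A minimal sketch of the stimulus and readout underlying an 80-Hz ASSR measurement as described: an amplitude-modulated target tone, and the spectral amplitude of the averaged EEG epochs at the modulation frequency. The carrier frequency, modulation depth, and simulated data are illustrative, not the study's parameters.

        import numpy as np

        F_CARRIER_HZ, F_MOD_HZ = 500.0, 80.0       # illustrative carrier; ~80-Hz AM rate

        def am_tone(duration_s, fs_audio=48_000, depth=1.0):
            """Sinusoidally amplitude-modulated tone used to drive the ~80-Hz ASSR."""
            t = np.arange(int(duration_s * fs_audio)) / fs_audio
            return (1.0 + depth * np.sin(2 * np.pi * F_MOD_HZ * t)) * np.sin(2 * np.pi * F_CARRIER_HZ * t)

        def assr_amplitude(epochs, fs_eeg, f_target):
            """Spectral amplitude of the across-epoch average at the modulation frequency."""
            avg = epochs.mean(axis=0)
            spectrum = np.abs(np.fft.rfft(avg)) / avg.size
            freqs = np.fft.rfftfreq(avg.size, d=1.0 / fs_eeg)
            return spectrum[np.argmin(np.abs(freqs - f_target))]

        stimulus = am_tone(duration_s=1.0)   # waveform that would be presented to the listener

        # Simulated EEG: a small 80-Hz steady-state component buried in noise (200 x 1-s epochs).
        fs_eeg = 1000
        t = np.arange(fs_eeg) / fs_eeg
        rng = np.random.default_rng(0)
        epochs = 0.2 * np.sin(2 * np.pi * F_MOD_HZ * t) + rng.normal(0, 5.0, (200, t.size))
        print(f"ASSR amplitude at {F_MOD_HZ:.0f} Hz: {assr_amplitude(epochs, fs_eeg, F_MOD_HZ):.3f} (a.u.)")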

  17. Leftward lateralization of auditory cortex underlies holistic sound perception in Williams syndrome.

    Directory of Open Access Journals (Sweden)

    Martina Wengenroth

    Full Text Available BACKGROUND: Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. METHODOLOGY/PRINCIPAL FINDINGS: Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. CONCLUSIONS/SIGNIFICANCE: There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.

  18. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  19. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  20. Engagement with the auditory processing system during targeted auditory cognitive training mediates changes in cognitive outcomes in individuals with schizophrenia.

    Science.gov (United States)

    Biagianti, Bruno; Fisher, Melissa; Neilands, Torsten B; Loewy, Rachel; Vinogradov, Sophia

    2016-11-01

    Individuals with schizophrenia who engage in targeted cognitive training (TCT) of the auditory system show generalized cognitive improvements. The high degree of variability in cognitive gains may be due to individual differences in the level of engagement of the underlying neural system target. 131 individuals with schizophrenia underwent 40 hours of TCT. We identified target engagement of auditory system processing efficiency by modeling subject-specific trajectories of auditory processing speed (APS) over time. Lowess analysis, mixed models repeated measures analysis, and latent growth curve modeling were used to examine whether APS trajectories were moderated by age and illness duration, and mediated improvements in cognitive outcome measures. We observed significant improvements in APS from baseline to 20 hours of training (initial change), followed by a flat APS trajectory (plateau) at subsequent time-points. Participants showed interindividual variability in the steepness of the initial APS change and in the APS plateau achieved and sustained between 20 and 40 hours. We found that participants who achieved the fastest APS plateau showed the greatest transfer effects to untrained cognitive domains. There is a significant association between an individual's ability to generate and sustain auditory processing efficiency and their degree of cognitive improvement after TCT, independent of baseline neurocognition. APS plateau may therefore represent a behavioral measure of target engagement mediating treatment response. Future studies should examine the optimal plateau of auditory processing efficiency required to induce significant cognitive improvements, in the context of interindividual differences in neural plasticity and sensory system efficiency that characterize schizophrenia. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
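
    The trajectory the record describes (rapid initial APS gains followed by a plateau) can be illustrated with a saturating-exponential fit to one participant's data. This is a simplified stand-in for the lowess, mixed-model, and latent growth curve analyses actually used, applied here to simulated values.

        import numpy as np
        from scipy.optimize import curve_fit

        def learning_curve(hours, plateau, gain, rate):
            """Saturating exponential: fast initial improvement, then a flat plateau."""
            return plateau + gain * np.exp(-rate * hours)

        # Simulated APS (ms) for one participant, assessed every 5 hours of training:
        # improvement over the first ~20 h, then a stable plateau, as in the record.
        hours = np.arange(0, 45, 5, dtype=float)
        rng = np.random.default_rng(3)
        aps = learning_curve(hours, plateau=85.0, gain=40.0, rate=0.15) + rng.normal(0, 2.0, hours.size)

        params, _ = curve_fit(learning_curve, hours, aps, p0=(80.0, 30.0, 0.1))
        plateau, gain, rate = params
        print(f"estimated plateau: {plateau:.1f} ms, approached at rate {rate:.2f}/h")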

  1. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  2. Bottom-up influences of voice continuity in focusing selective auditory attention

    OpenAIRE

    Bressler, Scott; Masud, Salwa; Bharadwaj, Hari; Shinn-Cunningham, Barbara

    2014-01-01

    Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and in audition, the “unit” on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the...

  3. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    Science.gov (United States)

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.

  5. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    Science.gov (United States)

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
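
    A common way to quantify the envelope tracking described in the record is to correlate the low-pass-filtered Hilbert envelope of the attended speech with the neural signal. The sketch below illustrates that approach on toy data; the cutoff frequency and crude resampling step are assumptions, not the study's MEG pipeline.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def speech_envelope(audio, fs_audio, fs_neural, cutoff_hz=8.0):
            """Broadband temporal envelope of speech, low-pass filtered and resampled
            to the neural sampling rate (the cutoff is a typical choice, not the study's)."""
            env = np.abs(hilbert(audio))
            b, a = butter(3, cutoff_hz / (fs_audio / 2.0), btype="low")
            env = filtfilt(b, a, env)
            idx = np.linspace(0, env.size - 1, int(env.size * fs_neural / fs_audio)).astype(int)
            return env[idx]

        def tracking_score(envelope, neural):
            """Pearson correlation between the speech envelope and the neural signal."""
            return float(np.corrcoef(envelope, neural)[0, 1])

        # Toy example: "neural" data that partially follows the attended envelope.
        fs_audio, fs_neural, dur = 16_000, 200, 10
        rng = np.random.default_rng(0)
        carrier = rng.normal(0, 1, fs_audio * dur)
        audio = carrier * (1 + np.sin(2 * np.pi * 3 * np.arange(fs_audio * dur) / fs_audio))
        env = speech_envelope(audio, fs_audio, fs_neural)
        neural = 0.5 * env + rng.normal(0, env.std(), env.size)
        print(f"envelope-tracking r = {tracking_score(env, neural):.2f}")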

  6. Auditory brainstem activity and development evoked by apical versus basal cochlear implant electrode stimulation in children.

    Science.gov (United States)

    Gordon, K A; Papsin, B C; Harrison, R V

    2007-08-01

    The role of apical versus basal cochlear implant electrode stimulation in central auditory development was examined. We hypothesized that, in children with early onset deafness, auditory development evoked by basal electrode stimulation would differ from that evoked more apically. Responses of the auditory nerve and brainstem, evoked by an apical and a basal implant electrode, were measured over the first year of cochlear implant use in 50 children with early onset severe to profound deafness who used hearing aids prior to implantation. Responses at initial stimulation were of larger amplitude and shorter latency when evoked by the apical electrode. No significant effects of residual hearing or age were found on initial response amplitudes or latencies. With implant use, responses evoked by both electrodes showed decreases in wave and interwave latencies reflecting decreased neural conduction time through the brainstem. Apical versus basal differences persisted with implant experience, with one exception: eIII-eV interwave latency differences decreased with implant use. Acute stimulation shows prolongation of basally versus apically evoked auditory nerve and brainstem responses in children with severe to profound deafness. Interwave latencies reflecting neural conduction along the caudal and rostral portions of the brainstem decreased over the first year of implant use. Differences in neural conduction times evoked by apical versus basal electrode stimulation persisted in the caudal but not rostral brainstem. Activity-dependent changes of the auditory brainstem occur in response to both apical and basal cochlear implant electrode stimulation.

  7. Size and synchronization of auditory cortex promotes musical, literacy, and attentional skills in children.

    Science.gov (United States)

    Seither-Preisler, Annemarie; Parncutt, Richard; Schneider, Peter

    2014-08-13

    Playing a musical instrument is associated with numerous neural processes that continuously modify the human brain and may facilitate characteristic auditory skills. In a longitudinal study, we investigated the auditory and neural plasticity of musical learning in 111 young children (aged 7-9 y) as a function of the intensity of instrumental practice and musical aptitude. Because of the frequent co-occurrence of central auditory processing disorders and attentional deficits, we also tested 21 children with attention deficit (hyperactivity) disorder [AD(H)D]. Magnetic resonance imaging and magnetoencephalography revealed enlarged Heschl's gyri and enhanced right-left hemispheric synchronization of the primary evoked response (P1) to harmonic complex sounds in children who spent more time practicing a musical instrument. The anatomical characteristics were positively correlated with frequency discrimination, reading, and spelling skills. Conversely, AD(H)D children showed reduced volumes of Heschl's gyri and enhanced volumes of the plana temporalia that were associated with a distinct bilateral P1 asynchrony. This may indicate a risk for central auditory processing disorders that are often associated with attentional and literacy problems. The longitudinal comparisons revealed a very high stability of auditory cortex morphology and gray matter volumes, suggesting that the combined anatomical and functional parameters are neural markers of musicality and attention deficits. Educational and clinical implications are considered. Copyright © 2014 the authors.

  8. Discussion: Changes in Vocal Production and Auditory Perception after Hair Cell Regeneration.

    Science.gov (United States)

    Ryals, Brenda M.; Dooling, Robert J.

    2000-01-01

    A bird study found that, given sufficient time and training after hair cell loss, the accompanying hearing loss, and subsequent hair cell regeneration, the mature avian auditory system can accommodate input from a newly regenerated periphery well enough to allow recognition of previously familiar vocalizations and the learning of new complex acoustic classifications.…

  9. Influence of memory, attention, IQ and age on auditory temporal processing tests: preliminary study.

    Science.gov (United States)

    Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane

    2014-01-01

    To investigate the existence of correlations between the performance of children in auditory temporal tests (Frequency Pattern and Gaps in Noise--GIN) and IQ, attention, memory, and age measurements. Fifteen typically developing individuals between the ages of 7 and 12 years, with normal hearing, participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern), a memory test (Digit Span), attention tests (auditory and visual modalities), and an intelligence test (Raven's Progressive Matrices) were applied. A significant positive correlation, considered good, was found between Frequency Pattern test performance and age (p<0.01; 75.6%). There were no significant correlations between the GIN test and the variables tested. Auditory temporal skills seem to be influenced by different factors: while performance in temporal ordering seems to be influenced by maturational processes, performance in temporal resolution was not influenced by any of the aspects investigated.

  10. An Initial Investigation of the Neural Correlates of Word Processing in Preschoolers With Specific Language Impairment.

    Science.gov (United States)

    Haebig, Eileen; Leonard, Laurence; Usler, Evan; Deevy, Patricia; Weber, Christine

    2018-03-15

    Previous behavioral studies have found deficits in lexical-semantic abilities in children with specific language impairment (SLI), including reduced depth and breadth of word knowledge. This study explored the neural correlates of early emerging familiar word processing in preschoolers with SLI and typical development. Fifteen preschoolers with typical development and 15 preschoolers with SLI were presented with pictures followed after a brief delay by an auditory label that did or did not match. Event-related brain potentials were time locked to the onset of the auditory labels. Children provided verbal judgments of whether the label matched the picture. There were no group differences in the accuracy of identifying when pictures and labels matched or mismatched. Event-related brain potential data revealed that mismatch trials elicited a robust N400 in both groups, with no group differences in mean amplitude or peak latency. However, the typically developing group demonstrated a more robust late positive component, elicited by mismatch trials. These initial findings indicate that lexical-semantic access of early acquired words, indexed by the N400, does not differ between preschoolers with SLI and typical development when highly familiar words are presented in isolation. However, the typically developing group demonstrated a more mature profile of postlexical reanalysis and integration, indexed by an emerging late positive component. The findings lay the necessary groundwork for better understanding processing of newly learned words in children with SLI.

  11. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    Full Text Available The basic structures involved in the development of auditory function, and consequently in language acquisition, are directed by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors: if favorable, these orient development in the proper direction, leading it towards normality; if unfavorable, they deviate it from its physiological course. Early sensory experience during the foetal period (i.e. the intrauterine noise floor; sounds coming from the outside, attenuated by the uterine filter, particularly the mother's voice; and the modifications induced by it at the cochlear level) represents the first example of programming in one of the earliest critical periods in the development of the auditory system. This review will examine the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular, it focuses on the following points: the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system, from the basilar membrane to the cortex; the involvement of these phenomena in language acquisition and in the perception of communicative intention after birth; and the consequences of auditory deprivation during critical periods of auditory development (i.e. premature interruption of foetal life).

  12. The impact of auditory working memory training on the fronto-parietal working memory network.

    Science.gov (United States)

    Schneiders, Julia A; Opitz, Bertram; Tang, Huijun; Deng, Yuan; Xie, Chaoxiang; Li, Hong; Mecklinger, Axel

    2012-01-01

    Working memory training has been widely used to investigate working memory processes. We have shown previously that visual working memory benefits only from intra-modal visual but not from across-modal auditory working memory training. In the present functional magnetic resonance imaging study we examined whether auditory working memory processes can also be trained specifically and which training-induced activation changes accompany these effects. It was investigated whether working memory training with strongly distinct auditory materials transfers exclusively to an auditory (intra-modal) working memory task or whether it generalizes to a (across-modal) visual working memory task. We used adaptive n-back training with tonal sequences and a passive control condition. The memory training led to a reliable training gain. Transfer effects were found for the (intra-modal) auditory but not for the (across-modal) visual transfer task. Training-induced activation decreases in the auditory transfer task were found in two regions in the right inferior frontal gyrus. These effects confirm our previous findings in the visual modality and extend intra-modal effects in the prefrontal cortex to the auditory modality. As the right inferior frontal gyrus is frequently found in maintaining modality-specific auditory information, these results might reflect increased neural efficiency in auditory working memory processes. Furthermore, task-unspecific (amodal) activation decreases in the visual and auditory transfer task were found in the right inferior parietal lobule and the superior portion of the right middle frontal gyrus reflecting less demand on general attentional control processes. These data are in good agreement with amodal activation decreases within the same brain regions on a visual transfer task reported previously.
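
    The adaptive n-back training mentioned above adjusts memory load from block to block based on performance. The Python sketch below shows the generic logic of such a procedure; the accuracy thresholds and level limits are assumptions for illustration, not parameters reported by the authors.

    def update_n_back_level(n, accuracy, up_threshold=0.9, down_threshold=0.7, n_min=1, n_max=9):
        """Return the n-back level for the next block given accuracy on the last block."""
        if accuracy >= up_threshold and n < n_max:
            return n + 1                      # task mastered: raise the memory load
        if accuracy < down_threshold and n > n_min:
            return n - 1                      # task too hard: lower the memory load
        return n

    def is_n_back_target(sequence, index, n):
        """True if the tone at `index` matches the tone presented n positions earlier."""
        return index >= n and sequence[index] == sequence[index - n]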

  13. The impact of auditory working memory training on the fronto-parietal working memory network

    Science.gov (United States)

    Schneiders, Julia A.; Opitz, Bertram; Tang, Huijun; Deng, Yuan; Xie, Chaoxiang; Li, Hong; Mecklinger, Axel

    2012-01-01

    Working memory training has been widely used to investigate working memory processes. We have shown previously that visual working memory benefits only from intra-modal visual but not from across-modal auditory working memory training. In the present functional magnetic resonance imaging study we examined whether auditory working memory processes can also be trained specifically and which training-induced activation changes accompany these effects. It was investigated whether working memory training with strongly distinct auditory materials transfers exclusively to an auditory (intra-modal) working memory task or whether it generalizes to a (across-modal) visual working memory task. We used adaptive n-back training with tonal sequences and a passive control condition. The memory training led to a reliable training gain. Transfer effects were found for the (intra-modal) auditory but not for the (across-modal) visual transfer task. Training-induced activation decreases in the auditory transfer task were found in two regions in the right inferior frontal gyrus. These effects confirm our previous findings in the visual modality and extend intra-modal effects in the prefrontal cortex to the auditory modality. As the right inferior frontal gyrus is frequently found in maintaining modality-specific auditory information, these results might reflect increased neural efficiency in auditory working memory processes. Furthermore, task-unspecific (amodal) activation decreases in the visual and auditory transfer task were found in the right inferior parietal lobule and the superior portion of the right middle frontal gyrus reflecting less demand on general attentional control processes. These data are in good agreement with amodal activation decreases within the same brain regions on a visual transfer task reported previously. PMID:22701418

  14. The Impact of Auditory Working Memory Training on the Fronto-Parietal Working Memory Network

    Directory of Open Access Journals (Sweden)

    Julia eSchneiders

    2012-06-01

    Full Text Available Working memory training has been widely used to investigate working memory processes. We have shown previously that visual working memory benefits only from intra-modal visual but not from across-modal auditory working memory training. In the present functional magnetic resonance imaging study we examined whether auditory working memory processes can also be trained specifically and which training-induced activation changes accompany these effects. It was investigated whether working memory training with strongly distinct auditory materials transfers exclusively to an auditory (intra-modal) working memory task or whether it generalizes to an (across-modal) visual working memory task. We used an adaptive n-back training with tonal sequences and a passive control condition. The memory training led to a reliable training gain. Transfer effects were found for the (intra-modal) auditory but not for the (across-modal) visual 2-back task. Training-induced activation changes in the auditory 2-back task were found in two regions in the right inferior frontal gyrus. These effects confirm our previous findings in the visual modality and extend intra-modal effects to the auditory modality. These results might reflect increased neural efficiency in auditory working memory processes, as the right inferior frontal gyrus is frequently found in maintaining modality-specific auditory information. In this respect, these effects are analogous to the activation decreases in the right middle frontal gyrus for the visual modality in our previous study. Furthermore, task-unspecific (across-modal) activation decreases in the visual and auditory 2-back task were found in the right inferior parietal lobule and the superior portion of the right middle frontal gyrus reflecting reduced demands on general attentional control processes. These data are in good agreement with across-modal activation decreases within the same brain regions on a visual 2-back task reported previously.

  15. Spontaneous high-gamma band activity reflects functional organization of auditory cortex in the awake macaque.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2012-06-07

    In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here, we used chronic microelectrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions, we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Changes in otoacoustic emissions during selective auditory and visual attention.

    Science.gov (United States)

    Walsh, Kyle P; Pasanen, Edward G; McFadden, Dennis

    2015-05-01

    Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing-the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2-3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater.

  17. Thalamic and parietal brain morphology predicts auditory category learning.

    Science.gov (United States)

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.
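
    The abstract notes that listening strategies were assessed by logistic regression. A hedged sketch of that kind of readout is shown below: regressing trial-by-trial category responses on a spectral cue and a duration cue and comparing the fitted weights. Variable names and the use of scikit-learn are assumptions for illustration, not the authors' actual code.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def cue_weights(spectral_cue, duration_cue, responses):
        """Trial-wise cue values and binary category responses -> standardized cue weights."""
        X = np.column_stack([
            (spectral_cue - spectral_cue.mean()) / spectral_cue.std(),
            (duration_cue - duration_cue.mean()) / duration_cue.std(),
        ])
        model = LogisticRegression().fit(X, responses)
        w_spec, w_dur = model.coef_[0]
        # A strategy switch would show up as weight shifting from w_spec toward w_dur
        # after the spectral cue is degraded.
        return w_spec, w_dur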

  18. Nonverbal auditory agnosia with lesion to Wernicke's area.

    Science.gov (United States)

    Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic

    2010-01-01

    We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.

  19. Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions

    Science.gov (United States)

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.

    2014-01-01

    Fast reaction times and a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in RFD and peak velocity and a reduction in movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967
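
    Peak rate of force development (RFD) is the maximum slope of the force-time curve. The Python sketch below shows one common way to compute it from a sampled force signal; the smoothing window is an assumption, not a value taken from the study.

    import numpy as np

    def peak_rfd(force, srate, smooth_ms=20):
        """force: 1-D force signal (N); returns peak RFD in N/s after light smoothing."""
        win = max(1, int(smooth_ms / 1000 * srate))
        kernel = np.ones(win) / win
        smoothed = np.convolve(force, kernel, mode="same")   # simple moving-average smoothing
        rfd = np.gradient(smoothed, 1.0 / srate)             # dF/dt in N/s
        return rfd.max()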

  20. Startle auditory stimuli enhance the performance of fast dynamic contractions.

    Directory of Open Access Journals (Sweden)

    Miguel Fernandez-Del-Olmo

    Full Text Available Fast reaction times and a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in RFD and peak velocity and a reduction in movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training.

  1. Changes in otoacoustic emissions during selective auditory and visual attention

    Science.gov (United States)

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2015-01-01

    Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing—the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2–3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater. PMID:25994703

  2. Synchronization and phonological skills: precise auditory timing hypothesis (PATH)

    Directory of Open Access Journals (Sweden)

    Adam eTierney

    2014-11-01

    Full Text Available Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH), whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The precise auditory timing hypothesis predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  3. Medial Auditory Thalamic Stimulation as a Conditioned Stimulus for Eyeblink Conditioning in Rats

    Science.gov (United States)

    Campolattaro, Matthew M.; Halverson, Hunter E.; Freeman, John H.

    2007-01-01

    The neural pathways that convey conditioned stimulus (CS) information to the cerebellum during eyeblink conditioning have not been fully delineated. It is well established that pontine mossy fiber inputs to the cerebellum convey CS-related stimulation for different sensory modalities (e.g., auditory, visual, tactile). Less is known about the…

  4. Hearing illusory sounds in noise: sensory-perceptual transformations in primary auditory cortex.

    NARCIS (Netherlands)

    Riecke, L.; Opstal, A.J. van; Goebel, R.; Formisano, E.

    2007-01-01

    A sound that is interrupted by silence is perceived as discontinuous. However, when the silence is replaced by noise, the target sound may be heard as uninterrupted. Understanding the neural basis of this continuity illusion may elucidate the ability to track sounds of interest in noisy auditory

  5. The Central Role of Recognition in Auditory Perception: A Neurobiological Model

    Science.gov (United States)

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…

  6. Effect of Auditory Training on Reading Comprehension of Children with Hearing Impairment in Enugu State

    Science.gov (United States)

    Ugwuanyi, L. T.; Adaka, T. A.

    2015-01-01

    The paper focused on the effect of auditory training on reading comprehension of children with hearing impairment in Enugu State. A total of 33 children with conductive, sensorineural and mixed hearing loss were sampled for the study in the two schools for the Deaf in Enugu State. The design employed for the study was a quasi-experiment (pre-test…

  7. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations...... within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while...... human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during...

  8. The singular nature of auditory and visual scene analysis in autism.

    Science.gov (United States)

    Lin, I-Fan; Shirama, Aya; Kato, Nobumasa; Kashino, Makio

    2017-02-19

    Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  9. Maturity of the PWR

    International Nuclear Information System (INIS)

    Bacher, P.; Rapin, M.; Aboudarham, L.; Bitsch, D.

    1983-03-01

    Figures illustrating the predominant position of the PWR system are presented. The question is whether on the basis of these figures the PWR can be considered to have reached maturity. The following analysis, based on the French program experience, is an attempt to pinpoint those areas in which industrial maturity of the PWR has been attained, and in which areas a certain evolution can still be expected to take place

  10. Sonar discrimination of cylinders from different angles using neural networks

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whitlow; Larsen, Jan

    1999-01-01

    This paper describes an underwater object discrimination system applied to recognize cylinders of various compositions from different angles. The system is based on a new combination of simulated dolphin clicks, simulated auditory filters and artificial neural networks. The model demonstrates its...

  11. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active "working memory" bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  12. Acoustic Trauma Changes the Parvalbumin-Positive Neurons in Rat Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Congli Liu

    2018-01-01

    Full Text Available Acoustic trauma has been reported to damage the auditory periphery and central auditory system, and compromised cortical inhibition is involved in auditory disorders such as hyperacusis and tinnitus. Parvalbumin-containing neurons (PV neurons), a subset of GABAergic neurons, greatly shape and synchronize neural network activities. However, how PV neurons change following acoustic trauma remains to be elucidated. The present study investigated how auditory cortical PV neurons change following unilateral 1-hour noise exposure (left ear, one-octave band noise centered at 16 kHz, 116 dB SPL). Noise exposure elevated the auditory brainstem response threshold of the exposed ear when examined 7 days later. More detectable PV neurons were observed in both sides of the auditory cortex of noise-exposed rats compared to controls. The detectable PV neurons of the left auditory cortex (ipsilateral to the exposed ear) outnumbered those of the right auditory cortex (contralateral to the exposed ear). Quantification of Western blotted bands revealed a higher expression level of PV protein in the left cortex. These findings of more active PV neurons in noise-exposed rats suggest that a compensatory mechanism might be initiated to maintain a stable state of the brain.

  13. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    OpenAIRE

    Scott, Gregory D.; Karns, Christina M.; Dow, Mark W.; Stevens, Courtney; Neville, Helen J.

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants wer...

  14. Sustained Firing of Model Central Auditory Neurons Yields a Discriminative Spectro-temporal Representation for Natural Sounds

    OpenAIRE

    Carlin, Michael A.; Elhilali, Mounya

    2013-01-01

    The processing characteristics of neurons in the central auditory system are directly shaped by and reflect the statistics of natural acoustic environments, but the principles that govern the relationship between natural sound ensembles and observed responses in neurophysiological studies remain unclear. In particular, accumulating evidence suggests the presence of a code based on sustained neural firing rates, where central auditory neurons exhibit strong, persistent responses to their prefe...

  15. Neural correlates of rhythmic expectancy

    Directory of Open Access Journals (Sweden)

    Theodore P. Zanto

    2006-01-01

    Full Text Available Temporal expectancy is thought to play a fundamental role in the perception of rhythm. This review summarizes recent studies that investigated rhythmic expectancy by recording neuroelectric activity with high temporal resolution during the presentation of rhythmic patterns. Prior event-related brain potential (ERP) studies have uncovered auditory evoked responses that reflect detection of onsets, offsets, sustains, and abrupt changes in acoustic properties such as frequency, intensity, and spectrum, in addition to indexing higher-order processes such as auditory sensory memory and the violation of expectancy. In our studies of rhythmic expectancy, we measured emitted responses - a type of ERP that occurs when an expected event is omitted from a regular series of stimulus events - in simple rhythms with temporal structures typical of music. Our observations suggest that middle-latency gamma band (20-60 Hz) activity (GBA) plays an essential role in auditory rhythm processing. Evoked (phase-locked) GBA occurs in the presence of physically presented auditory events and reflects the degree of accent. Induced (non-phase-locked) GBA reflects temporally precise expectancies for strongly and weakly accented events in sound patterns. Thus far, these findings support theories of rhythm perception that posit temporal expectancies generated by active neural processes.
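
    The evoked (phase-locked) versus induced (non-phase-locked) distinction drawn above is commonly operationalized as follows: evoked activity is whatever survives trial averaging, while induced activity is estimated from single trials after the trial average has been removed. The Python sketch below illustrates this separation for a 20-60 Hz band; the filter order and band edges are assumptions, not the authors' exact settings.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def gamma_envelope(x, srate, band=(20.0, 60.0)):
        b, a = butter(4, [band[0] / (srate / 2), band[1] / (srate / 2)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, x)))

    def evoked_and_induced_gamma(epochs, srate):
        """epochs: (n_trials, n_samples). Returns (evoked, induced) gamma power over time."""
        erp = epochs.mean(axis=0)                      # the phase-locked part survives averaging
        evoked = gamma_envelope(erp, srate) ** 2
        residual = epochs - erp                        # remove the phase-locked part from each trial
        induced = np.mean([gamma_envelope(tr, srate) ** 2 for tr in residual], axis=0)
        return evoked, induced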

  16. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  17. Demodulation Processes in Auditory Perception

    National Research Council Canada - National Science Library

    Feth, Lawrence

    1997-01-01

    The long range goal of this project was the understanding of human auditory processing of information conveyed by complex, time varying signals such as speech, music or important environmental sounds...

  18. Auditory evoked responses to binaural beat illusion: stimulus generation and the derivation of the Binaural Interaction Component (BIC).

    Science.gov (United States)

    Ozdamar, Ozcan; Bohorquez, Jorge; Mihajloski, Todor; Yavuz, Erdem; Lachowska, Magdalena

    2011-01-01

    Electrophysiological indices of the auditory binaural beat illusion are studied using late latency evoked responses. Binaural beats are generated by continuous monaural FM tones with slightly different ascending and descending frequencies, lasting about 25 ms and presented at 1-s intervals. Frequency changes are carefully adjusted to avoid creating abrupt waveform changes. Binaural Interaction Component (BIC) analysis is used to separate the neural responses due to binaural involvement. The results show that transient auditory evoked responses can be obtained from the auditory illusion of binaural beats.
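
    The binaural interaction component (BIC) is conventionally derived as the binaurally evoked response minus the sum of the two monaurally evoked responses. The Python sketch below implements that conventional definition; whether the authors used exactly this convention is not stated in the abstract and is assumed here.

    import numpy as np

    def binaural_interaction_component(binaural, left_monaural, right_monaural):
        """All inputs are averaged evoked-response waveforms on a common time base.
        BIC = binaural response - (left monaural + right monaural)."""
        return np.asarray(binaural) - (np.asarray(left_monaural) + np.asarray(right_monaural))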

  19. Large-scale network dynamics of beta-band oscillations underlie auditory perceptual decision-making

    Directory of Open Access Journals (Sweden)

    Mohsen Alavash

    2017-06-01

    Full Text Available Perceptual decisions vary in the speed at which we make them. Evidence suggests that translating sensory information into perceptual decisions relies on distributed interacting neural populations, with decision speed hinging on power modulations of the neural oscillations. Yet the dependence of perceptual decisions on the large-scale network organization of coupled neural oscillations has remained elusive. We measured magnetoencephalographic signals in human listeners who judged acoustic stimuli composed of carefully titrated clouds of tone sweeps. These stimuli were used in two task contexts, in which the participants judged the overall pitch or direction of the tone sweeps. We traced the large-scale network dynamics of the source-projected neural oscillations on a trial-by-trial basis using power-envelope correlations and graph-theoretical network discovery. In both tasks, faster decisions were predicted by higher segregation and lower integration of coupled beta-band (∼16–28 Hz) oscillations. We also uncovered the brain network states that promoted faster decisions in either lower-order auditory or higher-order control brain areas. Specifically, decision speed in judging the tone sweep direction critically relied on the nodal network configurations of anterior temporal, cingulate, and middle frontal cortices. Our findings suggest that global network communication during perceptual decision-making is implemented in the human brain by large-scale couplings between beta-band neural oscillations. The speed at which we make perceptual decisions varies. This translation of sensory information into perceptual decisions hinges on dynamic changes in neural oscillatory activity. However, the large-scale neural-network embodiment supporting perceptual decision-making is unclear. We addressed this question by examining two auditory perceptual decision-making situations. Using graph-theoretical network discovery, we traced the large-scale network
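
    A strongly simplified stand-in for the pipeline described above (power-envelope correlations of beta-band oscillations followed by graph-theoretical summaries) is sketched below: band-limited envelopes, a correlation matrix, and clustering/efficiency as proxies for segregation and integration. Band limits, filter order, and the edge-density threshold are assumptions, and the orthogonalization and source-projection steps used in MEG work are omitted.

    import numpy as np
    import networkx as nx
    from scipy.signal import butter, filtfilt, hilbert

    def beta_envelopes(data, srate, band=(16.0, 28.0)):
        """data: (n_regions, n_samples) source time courses -> beta power envelopes."""
        b, a = butter(4, [band[0] / (srate / 2), band[1] / (srate / 2)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, data, axis=1), axis=1))

    def network_metrics(data, srate, edge_density=0.10):
        env = beta_envelopes(data, srate)
        corr = np.corrcoef(env)                       # power-envelope correlation matrix
        np.fill_diagonal(corr, 0.0)
        thresh = np.quantile(np.abs(corr), 1.0 - edge_density)
        graph = nx.from_numpy_array((np.abs(corr) >= thresh).astype(int))
        segregation = nx.average_clustering(graph)    # higher = more locally clustered
        integration = nx.global_efficiency(graph)     # higher = shorter average paths
        return segregation, integration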

  20. Automatic phoneme category selectivity in the dorsal auditory stream.

    Science.gov (United States)

    Chevillet, Mark A; Jiang, Xiong; Rauschecker, Josef P; Riesenhuber, Maximilian

    2013-03-20

    Debates about motor theories of speech perception have recently been reignited by a burst of reports implicating premotor cortex (PMC) in speech perception. Often, however, these debates conflate perceptual and decision processes. Evidence that PMC activity correlates with task difficulty and subject performance suggests that PMC might be recruited, in certain cases, to facilitate category judgments about speech sounds (rather than speech perception, which involves decoding of sounds). However, it remains unclear whether PMC does, indeed, exhibit neural selectivity that is relevant for speech decisions. Further, it is unknown whether PMC activity in such cases reflects input via the dorsal or ventral auditory pathway, and whether PMC processing of speech is automatic or task-dependent. In a novel modified categorization paradigm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted their attention from phoneme category using a challenging dichotic listening task. Using fMRI rapid adaptation to probe neural selectivity, we observed acoustic-phonetic selectivity in left anterior and left posterior auditory cortical regions. Conversely, we observed phoneme-category selectivity in left PMC that correlated with explicit phoneme-categorization performance measured after scanning, suggesting that PMC recruitment can account for performance on phoneme-categorization tasks. Structural equation modeling revealed connectivity from posterior, but not anterior, auditory cortex to PMC, suggesting a dorsal route for auditory input to PMC. Our results provide evidence for an account of speech processing in which the dorsal stream mediates automatic sensorimotor integration of speech and may be recruited to support speech decision tasks.

  1. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses.

    Science.gov (United States)

    Molloy, Katharine; Griffiths, Timothy D; Chait, Maria; Lavie, Nilli

    2015-12-09

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying "inattentional deafness"--the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼ 100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 "awareness" response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory

  2. Hair cell regeneration in the avian auditory epithelium.

    Science.gov (United States)

    Stone, Jennifer S; Cotanche, Douglas A

    2007-01-01

    Regeneration of sensory hair cells in the mature avian inner ear was first described just over 20 years ago. Since then, it has been shown that many other non-mammalian species either continually produce new hair cells or regenerate them in response to trauma. However, mammals exhibit limited hair cell regeneration, particularly in the auditory epithelium. In birds and other non-mammals, regenerated hair cells arise from adjacent non-sensory (supporting) cells. Hair cell regeneration was initially described as a proliferative response whereby supporting cells re-enter the mitotic cycle, forming daughter cells that differentiate into either hair cells or supporting cells and thereby restore cytoarchitecture and function in the sensory epithelium. However, further analyses of the avian auditory epithelium (and amphibian vestibular epithelium) revealed a second regenerative mechanism, direct transdifferentiation, during which supporting cells change their gene expression and convert into hair cells without dividing. In the chicken auditory epithelium, these two distinct mechanisms show unique spatial and temporal patterns, suggesting they are differentially regulated. Current efforts are aimed at identifying signals that maintain supporting cells in a quiescent state or direct them to undergo direct transdifferentiation or cell division. Here, we review current knowledge about supporting cell properties and discuss candidate signaling molecules for regulating supporting cell behavior, in quiescence and after damage. While significant advances have been made in understanding regeneration in non-mammals over the last 20 years, we have yet to determine why the mammalian auditory epithelium lacks the ability to regenerate hair cells spontaneously and whether it is even capable of significant regeneration under additional circumstances. The continued study of mechanisms controlling regeneration in the avian auditory epithelium may lead to strategies for inducing

  3. Impaired theta phase-resetting underlying auditory N1 suppression in chronic alcoholism.

    Science.gov (United States)

    Fuentemilla, Lluis; Marco-Pallarés, Josep; Gual, Antoni; Escera, Carles; Polo, Maria Dolores; Grau, Carles

    2009-02-18

    It has been suggested that chronic alcoholism may lead to altered neural mechanisms related to inhibitory processes. Here, we studied auditory N1 suppression phenomena (i.e. amplitude reduction with repetitive stimuli) in chronic alcoholic patients as an early-stage information-processing brain function involving inhibition by the analysis of the N1 event-related potential and time-frequency computation (spectral power and phase-resetting). Our results showed enhanced neural theta oscillatory phase-resetting underlying N1 generation in suppressed N1 event-related potential. The present findings suggest that chronic alcoholism alters neural oscillatory synchrony dynamics at very early stages of information processing.

  4. Diminished auditory sensory gating during active auditory verbal hallucinations.

    Science.gov (United States)

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream; resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
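
    The gating ratio described above has a direct computational form: the S2 peak amplitude divided by the S1 peak amplitude within a component-specific latency window. The Python sketch below shows that computation; the example window is illustrative and not taken from the study.

    import numpy as np

    def peak_amplitude(erp, srate, window_ms):
        """Largest absolute deflection of an averaged ERP within window_ms=(start, stop)."""
        start, stop = (int(t / 1000 * srate) for t in window_ms)
        segment = erp[start:stop]
        return segment[np.argmax(np.abs(segment))]

    def gating_ratio(erp_s1, erp_s2, srate, window_ms=(40, 80)):   # e.g. a P50-like window
        """S2/S1 ratio; values closer to 1 indicate weaker suppression of the second click."""
        return peak_amplitude(erp_s2, srate, window_ms) / peak_amplitude(erp_s1, srate, window_ms)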

  5. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    Science.gov (United States)

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  6. Prototype to product—developing a commercially viable neural prosthesis

    Science.gov (United States)

    Seligman, Peter

    2009-12-01

    The Cochlear implant or 'Bionic ear' is a device that enables people who do not get sufficient benefit from a hearing aid to communicate with the hearing world. The Cochlear implant is not an amplifier, but a device that electrically stimulates the auditory nerve in a way that crudely mimics normal hearing, thus providing a hearing percept. Many recipients are able to understand running speech without the help of lipreading. Cochlear implants have reached a stage of maturity where there are now 170 000 recipients implanted worldwide. The commercial development of these devices has occurred over the last 30 years. This development has been multidisciplinary, including audiologists, engineers, both mechanical and electrical, histologists, materials scientists, physiologists, surgeons and speech pathologists. This paper will trace the development of the device we have today, from the engineering perspective. The special challenges of designing an active device that will work in the human body for a lifetime will be outlined. These challenges include biocompatibility, extreme reliability, safety, patient fitting and surgical issues. It is emphasized that the successful development of a neural prosthesis requires the partnership of academia and industry.

  7. Behavioral and EEG evidence for auditory memory suppression

    Directory of Open Access Journals (Sweden)

    Maya Elizabeth Cano

    2016-03-01

    Full Text Available The neural basis of motivated forgetting using the Think/No-Think (TNT) paradigm is receiving increased attention with a particular focus on the mechanisms that enable memory suppression. However, most TNT studies have been limited to the visual domain. To assess whether and to what extent direct memory suppression extends across sensory modalities, we examined behavioral and electroencephalographic (EEG) effects of auditory Think/No-Think in healthy young adults by adapting the TNT paradigm to the auditory modality. Behaviorally, suppression of memory strength was indexed by prolonged response times during the retrieval of subsequently remembered No-Think words. We examined task-related EEG activity of both attempted memory retrieval and inhibition of a previously learned target word during the presentation of its paired associate. Event-related EEG responses revealed two main findings: (1) a centralized Think > No-Think positivity during auditory word presentation (from approximately 0-500 ms), and (2) a sustained Think positivity over parietal electrodes beginning at approximately 600 ms reflecting the memory retrieval effect which was significantly reduced for No-Think words. In addition, word-locked theta (4-8 Hz) power was initially greater for No-Think compared to Think during auditory word presentation over fronto-central electrodes. This was followed by a posterior theta increase indexing successful memory retrieval in the Think condition. The observed event-related potential pattern and theta power analysis are similar to those reported in visual Think/No-Think studies and support a modality non-specific mechanism for memory inhibition. The EEG data also provide evidence supporting differing roles and time courses of frontal and parietal regions in the flexible control of auditory memory.

  8. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound's motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual …

  9. Behavioral and EEG Evidence for Auditory Memory Suppression.

    Science.gov (United States)

    Cano, Maya E; Knight, Robert T

    2016-01-01

    The neural basis of motivated forgetting using the Think/No-Think (TNT) paradigm is receiving increased attention with a particular focus on the mechanisms that enable memory suppression. However, most TNT studies have been limited to the visual domain. To assess whether and to what extent direct memory suppression extends across sensory modalities, we examined behavioral and electroencephalographic (EEG) effects of auditory TNT in healthy young adults by adapting the TNT paradigm to the auditory modality. Behaviorally, suppression of memory strength was indexed by prolonged response times (RTs) during the retrieval of subsequently remembered No-Think words. We examined task-related EEG activity of both attempted memory retrieval and inhibition of a previously learned target word during the presentation of its paired associate. Event-related EEG responses revealed two main findings: (1) a centralized Think > No-Think positivity during auditory word presentation (from approximately 0-500 ms); and (2) a sustained Think positivity over parietal electrodes beginning at approximately 600 ms reflecting the memory retrieval effect which was significantly reduced for No-Think words. In addition, word-locked theta (4-8 Hz) power was initially greater for No-Think compared to Think during auditory word presentation over fronto-central electrodes. This was followed by a posterior theta increase indexing successful memory retrieval in the Think condition. The observed event-related potential pattern and theta power analysis are similar to that reported in visual TNT studies and support a modality non-specific mechanism for memory inhibition. The EEG data also provide evidence supporting differing roles and time courses of frontal and parietal regions in the flexible control of auditory memory.
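    A minimal sketch of how word-locked theta (4-8 Hz) power of the kind analyzed above might be computed from epoched EEG, assuming hypothetical trial matrices, a 250 Hz sampling rate, and a Hilbert-envelope estimate; the authors' actual pipeline is not specified here.

```python
# Sketch: trial-averaged theta (4-8 Hz) power from epoched EEG at one electrode.
# epochs has shape (n_trials, n_samples); all data below are synthetic placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250  # sampling rate (Hz), assumed

def theta_power(epochs, fs, band=(4.0, 8.0)):
    """Trial-averaged theta power envelope (squared Hilbert amplitude)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=-1)
    power = np.abs(hilbert(filtered, axis=-1)) ** 2
    return power.mean(axis=0)          # average over trials -> (n_samples,)

# toy data standing in for Think / No-Think epochs at a fronto-central electrode
rng = np.random.default_rng(1)
think = rng.standard_normal((60, fs * 2))
no_think = rng.standard_normal((60, fs * 2)) * 1.2   # e.g., larger early theta
diff = theta_power(no_think, fs) - theta_power(think, fs)
print("max No-Think minus Think theta power:", diff.max().round(3))
```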

  10. Three-dimensional sound localisation with a lizard peripheral auditory model

    DEFF Research Database (Denmark)

    Kjær Schmidt, Michael; Shaikh, Danish

    ... location of an acoustic target in three dimensions. Our approach utilises a model of the peripheral auditory system of lizards [Christensen-Dalsgaard and Manley 2005] coupled with a multi-layer perceptron neural network. The peripheral auditory model's response to sound input encodes sound direction information in a single plane, which by itself is insufficient to localise the acoustic target in three dimensions. A multi-layer perceptron neural network is used to combine two independent responses of the model, corresponding to two rotational movements, into an estimate of the sound direction in terms ... The networks learned a transfer function that translated the three-dimensional non-linear mapping into estimated azimuth and elevation values for the acoustic target. The neural network with two hidden layers, as expected, performed better than that with only one hidden layer. Our approach assumes that for any ...
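    The sketch below illustrates the general idea of the second stage, training a small multi-layer perceptron to map two direction-dependent responses onto azimuth and elevation. The peripheral lizard-ear model is replaced by a synthetic stand-in function, and all parameters (network size, data ranges) are assumptions, not the authors' configuration.

```python
# Sketch: regressing azimuth/elevation from two directional response values with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
az = rng.uniform(-90, 90, n)       # azimuth in degrees
el = rng.uniform(-45, 45, n)       # elevation in degrees

# stand-in for the two peripheral-model responses obtained under two head rotations
r1 = np.sin(np.radians(az)) * np.cos(np.radians(el)) + 0.01 * rng.standard_normal(n)
r2 = np.sin(np.radians(el)) + 0.3 * np.sin(np.radians(az)) + 0.01 * rng.standard_normal(n)
X = np.column_stack([r1, r2])
y = np.column_stack([az, el])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
err = np.abs(net.predict(X_te) - y_te).mean(axis=0)
print(f"mean absolute error: azimuth {err[0]:.1f}°, elevation {err[1]:.1f}°")
```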

  11. Association of Concurrent fNIRS and EEG Signatures in Response to Auditory and Visual Stimuli.

    Science.gov (United States)

    Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Herrmann, Christoph S; Debener, Stefan

    2015-09-01

    Functional near-infrared spectroscopy (fNIRS) has been proven reliable for investigation of low-level visual processing in both infants and adults. Similar investigation of fundamental auditory processes with fNIRS, however, remains only partially complete. Here we employed a systematic three-level validation approach to investigate whether fNIRS could capture fundamental aspects of bottom-up acoustic processing. We performed a simultaneous fNIRS-EEG experiment with visual and auditory stimulation in 24 participants, which allowed the relationship between changes in neural activity and hemoglobin concentrations to be studied. In the first level, the fNIRS results showed a clear distinction between visual and auditory sensory modalities. Specifically, the results demonstrated area specificity, that is, maximal fNIRS responses in visual and auditory areas for the visual and auditory stimuli respectively, and stimulus selectivity, whereby the visual and auditory areas responded mainly toward their respective stimuli. In the second level, a stimulus-dependent modulation of the fNIRS signal was observed in the visual area, as well as a loudness modulation in the auditory area. Finally in the last level, we observed significant correlations between simultaneously-recorded visual evoked potentials and deoxygenated hemoglobin (DeoxyHb) concentration, and between late auditory evoked potentials and oxygenated hemoglobin (OxyHb) concentration. In sum, these results suggest good sensitivity of fNIRS to low-level sensory processing in both the visual and the auditory domain, and provide further evidence of the neurovascular coupling between hemoglobin concentration changes and non-invasive brain electrical activity.
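    A minimal sketch of the kind of cross-measure correlation reported in the final validation level, assuming hypothetical per-participant ERP amplitudes and hemoglobin concentration changes; variable names and effect sizes are illustrative only.

```python
# Sketch: correlating per-participant late AEP amplitude with OxyHb concentration change.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_participants = 24
late_aep_amplitude = rng.standard_normal(n_participants)       # e.g., late auditory ERP amplitude
oxyhb_change = 0.6 * late_aep_amplitude + 0.8 * rng.standard_normal(n_participants)

r, p = pearsonr(late_aep_amplitude, oxyhb_change)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```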

  12. Category-specific responses to faces and objects in primate auditory cortex

    Directory of Open Access Journals (Sweden)

    Kari L Hoffman

    2008-03-01

    Auditory and visual signals often occur together, and the two sensory channels are known to influence each other to facilitate perception. The neural basis of this integration is not well understood, although other forms of multisensory influences have been shown to occur at surprisingly early stages of processing in cortex. Primary visual cortex neurons can show frequency-tuning to auditory stimuli, and auditory cortex responds selectively to certain somatosensory stimuli, supporting the possibility that complex visual signals may modulate early stages of auditory processing. To elucidate which auditory regions, if any, are responsive to complex visual stimuli, we recorded from auditory cortex and the superior temporal sulcus while presenting visual stimuli consisting of various objects, neutral faces, and facial expressions generated during vocalization. Both objects and conspecific faces elicited robust field potential responses in auditory cortex sites, but the responses varied by category: both neutral and vocalizing faces had a highly consistent negative component (N100), followed by a broader positive component (P180), whereas object responses were more variable in time and shape, but could be discriminated consistently from the responses to faces. The face response did not vary within the face category, i.e., for expressive vs. neutral face stimuli. The presence of responses for both objects and neutral faces suggests that auditory cortex receives highly informative visual input that is not restricted to those stimuli associated with auditory components. These results reveal selectivity for complex visual stimuli in a brain region conventionally described as non-visual unisensory cortex.

  13. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    Directory of Open Access Journals (Sweden)

    Jason A Miranda

    Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggests plasticity in the inferior colliculus. Hormone manipulations revealed that these cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.

  14. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    Science.gov (United States)

    Miranda, Jason A; Shepard, Kathryn N; McClintock, Shannon K; Liu, Robert C

    2014-01-01

    Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggests plasticity in the inferior colliculus. Hormone manipulations revealed that these cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.

  15. Event-related potentials to visual, auditory, and bimodal (combined auditory-visual) stimuli.

    Science.gov (United States)

    Isoğlu-Alkaç, Ummühan; Kedzior, Karina; Keskindemirci, Gonca; Ermutlu, Numan; Karamursel, Sacit

    2007-02-01

    The purpose of this study was to investigate the response properties of event-related potentials to unimodal and bimodal stimulation. The amplitudes of N1 and P2 were larger during bimodal evoked potentials (BEPs) than auditory evoked potentials (AEPs) in the anterior sites and the amplitudes of P1 were larger during BEPs than VEPs especially at the parieto-occipital locations. Responses to bimodal stimulation had longer latencies than responses to unimodal stimulation. The N1 and P2 components were larger in amplitude and longer in latency during the bimodal paradigm and predominantly occurred at the anterior sites. Therefore, the current bimodal paradigm can be used to investigate the involvement and location of specific neural generators that contribute to higher processing of sensory information. Moreover, this paradigm may be a useful tool to investigate the level of sensory dysfunctions in clinical samples.

  16. Long Maturity Forward Rates

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2001-01-01

    The paper aims to improve the knowledge of the empirical properties of the long maturity region of the forward rate curve. Firstly, the theoretical negative correlation between the slope at the long end of the forward rate curve and the term structure variance is recovered empirically and found to be statistically significant. Secondly, the expectations hypothesis is analyzed for the long maturity region of the forward rate curve using "forward rate" regressions. The expectations hypothesis is numerically close to being accepted but is statistically rejected. The findings provide mixed support for the affine term structure model.

  17. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
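    The sketch below shows one way a regularized multivariate classifier could be applied to evoked-potential features to estimate stimulus decoding accuracy, as described above. The feature layout, stimulus count, and the choice of an L2-penalized logistic regression are assumptions standing in for the authors' unspecified classifier.

```python
# Sketch: decoding which vocalization was presented from evoked-potential features
# with an L2-regularized multinomial classifier and cross-validation (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_trials, n_features, n_stimuli = 200, 64 * 50, 10     # 64 channels x 50 time points, assumed
X = rng.standard_normal((n_trials, n_features))        # trial-by-feature evoked potentials
y = rng.integers(0, n_stimuli, n_trials)               # vocalization labels

clf = make_pipeline(StandardScaler(),
                    LogisticRegression(penalty="l2", C=1.0, max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_stimuli:.2f})")
```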

  18. Method for Dissecting the Auditory Epithelium (Basilar Papilla) in Developing Chick Embryos.

    Science.gov (United States)

    Levic, Snezana; Yamoah, Ebenezer N

    2016-01-01

    Chickens are an invaluable model for exploring auditory physiology. Similar to humans, the chicken inner ear is morphologically and functionally close to maturity at the time of hatching. In contrast, chicks can regenerate hearing, an ability lost in all mammals, including humans. The extensive morphological, physiological, behavioral, and pharmacological data available, regarding normal development in the chicken auditory system, has driven the progress of the field. The basilar papilla is an attractive model system to study the developmental mechanisms of hearing. Here, we describe the dissection technique for isolating the basilar papilla in developing chick inner ear. We also provide detailed examples of physiological (patch clamping) experiments using this preparation.

  19. A neural network model of ventriloquism effect and aftereffect.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro

    2012-01-01

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
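    A drastically simplified sketch of the positive-feedback idea described above: a sharply tuned visual input boosts residual auditory activity at its own location, pulling an auditory centroid read-out toward the visual stimulus. Tuning widths, gains, and the single feedback step are illustrative assumptions, not the published model equations.

```python
# Sketch: a one-step caricature of visual->auditory feedback producing a ventriloquist shift.
import numpy as np

space = np.arange(0, 181)                      # 1° spatial grid

def gaussian(center, sigma):
    return np.exp(-0.5 * ((space - center) / sigma) ** 2)

aud_loc, vis_loc = 80, 100                     # spatially discrepant auditory and visual stimuli
aud_in = gaussian(aud_loc, sigma=20)           # audition: broad spatial tuning
vis_in = gaussian(vis_loc, sigma=4)            # vision: sharp spatial tuning

w_va = 0.8                                     # visual -> auditory inter-layer gain, assumed
aud_act = aud_in + w_va * vis_in * aud_in      # feedback acts only where residual auditory activity exists

perceived = (space * aud_act).sum() / aud_act.sum()   # centroid read-out of the auditory layer
print(f"auditory stimulus at {aud_loc}°, perceived near {perceived:.1f}° (shifted toward vision)")
```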

  20. A neural network model of ventriloquism effect and aftereffect.

    Directory of Open Access Journals (Sweden)

    Elisa Magosso

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: (i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; (ii) amount of the ventriloquism effect changes with visual-auditory spatial disparity; (iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.

  1. Grammar Maturity Model

    NARCIS (Netherlands)

    Zaytsev, V.; Pierantonio, A.; Schätz, B.; Tamzalit, D.

    2014-01-01

    The evolution of a software language (whether modelled by a grammar or a schema or a metamodel) is not limited to development of new versions and dialects. An important dimension of a software language evolution is maturing in the sense of improving the quality of its definition. In this paper, we

  2. Maturing interorganisational information systems

    NARCIS (Netherlands)

    Plomp, M.G.A.|info:eu-repo/dai/nl/313946809

    2012-01-01

    This thesis consists of nine chapters, divided over five parts. PART I is an introduction and the last part contains the conclusions. The remaining, intermediate parts are: PART II: Developing a maturity model for chain digitisation. This part contains two related studies concerning the development

  3. Jealousy and Moral Maturity.

    Science.gov (United States)

    Mathes, Eugene W.; Deuger, Donna J.

    Jealousy may be perceived as either good or bad depending upon the moral maturity of the individual. To investigate this conclusion, a study was conducted testing two hypothesis: a positive relationship exists between conventional moral reasoning (reference to norms and laws) and the endorsement and level of jealousy; and a negative relationship…

  4. An unusual mature thyroid teratoma on CT and ⁹⁹Tcm scintigraphy imaging in a child

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yu-Zhen; Li, Wen-Hua; Li, Yu-Hua; Gao, Yu [Xin Hua Hospital, Shanghai Jiaotong University School of Medicine, Department of Radiology, Shanghai (China); Zhu, Ming-Jie [Xin Hua Hospital, Shanghai Jiaotong University School of Medicine, Department of Radiology, Shanghai (China); Xin Hua Hospital, Shanghai Jiaotong University School of Medicine, Department of Pathology, Shanghai (China)

    2010-11-15

    We report the imaging findings of a mature thyroid teratoma in a 5-year-old girl. Nuclear imaging showed a decrease in ⁹⁹Tcm uptake in the right lobe of the thyroid gland. CT scan showed a slightly lobulated soft-tissue mass without calcification, fat or cystic components. Histological analysis showed that the tumor was composed of mature neural tissue, cartilaginous, and epithelial elements. This case study provides new insights into the CT appearance of mature thyroid teratomas. (orig.)

  5. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    Science.gov (United States)

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.
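    A minimal sketch of how a steady-state response such as the ASSR or SSVEP discussed above is typically quantified: spectral amplitude at the tagged stimulation frequency, with neighboring bins serving as a noise estimate. The sampling rate, epoch length, and 40 Hz modulation rate are assumptions, not the study's parameters.

```python
# Sketch: steady-state response amplitude and SNR at the stimulation frequency (synthetic data).
import numpy as np

fs = 1000                       # sampling rate (Hz), assumed
mod_freq = 40.0                 # assumed modulation/flicker frequency
t = np.arange(0, 2.0, 1 / fs)   # 2-s trial-averaged epoch

rng = np.random.default_rng(5)
avg_epoch = 0.5 * np.sin(2 * np.pi * mod_freq * t) + rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(avg_epoch)) / t.size * 2     # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
idx = np.argmin(np.abs(freqs - mod_freq))                  # bin closest to the tagged frequency
neighbors = np.r_[idx - 5:idx - 1, idx + 2:idx + 6]        # surrounding bins as noise estimate
snr = spectrum[idx] / spectrum[neighbors].mean()
print(f"amplitude at {mod_freq:.0f} Hz: {spectrum[idx]:.3f} µV (SNR ≈ {snr:.1f})")
```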

  6. The Neurophysiology of Auditory Hallucinations – A Historic and Contemporary Review

    Directory of Open Access Journals (Sweden)

    Remko van Lutterveld

    2011-05-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) are two techniques that distinguish themselves from other neuroimaging methodologies through their ability to directly measure brain-related activity and their high temporal resolution. A large body of research has applied these techniques to study auditory hallucinations. Across a variety of approaches, the left superior temporal cortex is consistently reported to be involved in this symptom. Moreover, there is increasing evidence that a failure in corollary discharge, i.e. a neural signal originating in frontal speech areas that indicates to sensory areas that forthcoming thought is self-generated, may underlie the experience of auditory hallucinations.

  7. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Directory of Open Access Journals (Sweden)

    Pegah Afra

    2009-01-01

    Pegah Afra, Michael Funke, Fumisuke Matsuo (Department of Neurology, University of Utah, Salt Lake City, UT, USA). Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as temporal lobe pathology with intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as with intake of hallucinogens or psychedelics. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for the presence of early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on the acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms. Keywords: synesthesia, auditory-visual, cross modal

  8. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen Stekelenburg

    2012-05-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
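    The subadditivity criterion mentioned above (AV – V < A) can be tested with a simple paired comparison of mean amplitudes; the sketch below uses hypothetical N1 values and a paired t-test purely for illustration.

```python
# Sketch: testing the subadditive audiovisual interaction (AV - V < A) on N1 mean amplitudes.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
n_subjects = 20
a_n1 = rng.normal(-4.0, 1.0, n_subjects)      # auditory-only N1 mean amplitude (µV), hypothetical
v_n1 = rng.normal(-1.0, 0.5, n_subjects)      # visual-only activity in the same window
av_n1 = rng.normal(-4.3, 1.0, n_subjects)     # audiovisual N1 mean amplitude

# subadditivity: the audiovisual response minus the visual contribution is smaller
# (less negative) than the auditory-only response; compare (AV - V) against A per subject
t, p = ttest_rel(av_n1 - v_n1, a_n1)
print(f"mean (AV - V) - A = {((av_n1 - v_n1) - a_n1).mean():.2f} µV, t = {t:.2f}, p = {p:.3f}")
```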

  9. Assessment of auditory cortical function in cochlear implant patients using 15O PET

    International Nuclear Information System (INIS)

    Young, J.P.; O'Sullivan, B.T.; Gibson, W.P.; Sefton, A.E.; Mitchell, T.E.; Sanli, H.; Cervantes, R.; Withall, A.; Royal Prince Alfred Hospital, Sydney,

    1998-01-01

    Full text: Cochlear implantation has been an extraordinarily successful method of restoring hearing and the potential for full language development in pre-lingually and post-lingually deaf individuals (Gibson 1996). Post-lingually deaf patients, who develop their hearing loss later in life, respond best to cochlear implantation within the first few years of their deafness, but are less responsive to implantation after several years of deafness (Gibson 1996). In pre-lingually deaf children, cochlear implantation is most effective in allowing the full development of language skills when performed within a critical period, in the first 8 years of life. These clinical observations suggest considerable neural plasticity of the human auditory cortex in acquiring and retaining language skills (Gibson 1996, Buchwald 1990). Currently, electrocochleography is used to determine the integrity of the auditory pathways to the auditory cortex. However, the functional integrity of the auditory cortex cannot be determined by this method. We have defined the extent of activation of the auditory cortex and auditory association cortex in 6 normal controls and 6 cochlear implant patients using 15O PET functional brain imaging methods. Preliminary results have indicated the potential clinical utility of 15O PET cortical mapping in the pre-surgical assessment and post-surgical follow up of cochlear implant patients. Copyright (1998) Australian Neuroscience Society

  10. Blocking estradiol synthesis affects memory for songs in auditory forebrain of male zebra finches.

    Science.gov (United States)

    Yoder, Kathleen M; Lu, Kai; Vicario, David S

    2012-11-14

    Estradiol (E2) has recently been shown to modulate sensory processing in an auditory area of the songbird forebrain, the caudomedial nidopallium (NCM). When a bird hears conspecific song, E2 increases locally in NCM, where neurons express both the aromatase enzyme that synthesizes E2 from precursors and estrogen receptors. Auditory responses in NCM show a form of neuronal memory: repeated playback of the unique learned vocalizations of conspecific individuals induces long-lasting stimulus-specific adaptation of neural responses to each vocalization. To test the role of E2 in this auditory memory, we treated adult male zebra finches (n=16) with either the aromatase inhibitor fadrozole (FAD) or saline for 8 days. We then exposed them to 'training' songs and, 6 h later, recorded multiunit auditory responses with an array of 16 microelectrodes in NCM. Adaptation rates (a measure of stimulus-specific adaptation) to playbacks of training and novel songs were computed, using established methods, to provide a measure of neuronal memory. Recordings from the FAD-treated birds showed a significantly reduced memory for the training songs compared with saline-treated controls, whereas auditory processing for novel songs did not differ between treatment groups. In addition, FAD did not change the response bias in favor of conspecific over heterospecific song stimuli. Our results show that E2 depletion affects the neuronal memory for vocalizations in songbird NCM, and suggest that E2 plays a necessary role in auditory processing and memory for communication signals.
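    The sketch below illustrates one common way an adaptation rate of the kind mentioned above is computed: the slope of the normalized response across repeated presentations. The trial range and normalization are assumptions patterned on typical practice, not the paper's exact procedure.

```python
# Sketch: stimulus-specific adaptation rate as the slope of normalized responses over trials.
import numpy as np

def adaptation_rate(responses, trial_range=(5, 25)):
    """Linear slope of responses (normalized to their mean) over the given trial range."""
    lo, hi = trial_range
    r = np.asarray(responses, dtype=float)[lo:hi]
    r = r / r.mean()
    trials = np.arange(lo, hi)
    slope, _ = np.polyfit(trials, r, 1)
    return slope

# toy response series: a familiar (trained) song adapts less steeply than a novel song
rng = np.random.default_rng(6)
trials = np.arange(30)
novel = 10 * np.exp(-trials / 15) + rng.normal(0, 0.3, 30)
trained = 10 * np.exp(-trials / 40) + rng.normal(0, 0.3, 30)
print("novel song adaptation rate:  ", round(adaptation_rate(novel), 4))
print("trained song adaptation rate:", round(adaptation_rate(trained), 4))
```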

  11. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke, mostly described after lesions of the brain stem, very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period 1996–2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have post cortical stroke auditory hallucinations. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  12. Neural Correlates of Threat Perception: Neural Equivalence of Conspecific and Heterospecific Mobbing Calls Is Learned

    Science.gov (United States)

    Avey, Marc T.; Hoeschele, Marisa; Moscicki, Michele K.; Bloomfield, Laurie L.; Sturdy, Christopher B.

    2011-01-01

    Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise [1]–[2]. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators [3]. Mobbing calls produced in response to smaller, higher-threat predators contain more "D" notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators [4]. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls or heterospecific predator calls that were the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas and that threat can be conveyed by different species signals and that these signals must be learned. PMID:21909363

  13. Neural correlates of threat perception: neural equivalence of conspecific and heterospecific mobbing calls is learned.

    Science.gov (United States)

    Avey, Marc T; Hoeschele, Marisa; Moscicki, Michele K; Bloomfield, Laurie L; Sturdy, Christopher B

    2011-01-01

    Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators. Mobbing calls produced in response to smaller, higher-threat predators contain more "D" notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls or heterospecific predator calls that were the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas and that threat can be conveyed by different species signals and that these signals must be learned.

  14. Neural correlates of threat perception: neural equivalence of conspecific and heterospecific mobbing calls is learned.

    Directory of Open Access Journals (Sweden)

    Marc T Avey

    Songbird auditory areas (i.e., CMM and NCM) are preferentially activated to playback of conspecific vocalizations relative to heterospecific and arbitrary noise. Here, we asked if the neural response to auditory stimulation is not simply preferential for conspecific vocalizations but also for the information conveyed by the vocalization. Black-capped chickadees use their chick-a-dee mobbing call to recruit conspecifics and other avian species to mob perched predators. Mobbing calls produced in response to smaller, higher-threat predators contain more "D" notes compared to those produced in response to larger, lower-threat predators and thus convey the degree of threat of predators. We specifically asked whether the neural response varies with the degree of threat conveyed by the mobbing calls of chickadees and whether the neural response is the same for actual predator calls that correspond to the degree of threat of the chickadee mobbing calls. Our results demonstrate that, as degree of threat increases in conspecific chickadee mobbing calls, there is a corresponding increase in immediate early gene (IEG) expression in telencephalic auditory areas. We also demonstrate that as the degree of threat increases for the heterospecific predator, there is a corresponding increase in IEG expression in the auditory areas. Furthermore, there was no significant difference in the amount of IEG expression between conspecific mobbing calls or heterospecific predator calls that were the same degree of threat. In a second experiment, using hand-reared chickadees without predator experience, we found more IEG expression in response to mobbing calls than corresponding predator calls, indicating that degree of threat is learned. Our results demonstrate that degree of threat corresponds to neural activity in the auditory areas and that threat can be conveyed by different species signals and that these signals must be learned.

  15. Auditory cues increase the hippocampal response to unimodal virtual reality.

    Science.gov (United States)

    Andreano, Joseph; Liang, Kevin; Kong, Lingjun; Hubbard, David; Wiederhold, Brenda K; Wiederhold, Mark D

    2009-06-01

    Previous research suggests that the effectiveness of virtual reality exposure therapy should increase as the experience becomes more immersive. However, the neural mechanisms underlying the experience of immersion are not yet well understood. To address this question, neural activity during exposure to two virtual worlds was measured by functional magnetic resonance imaging (fMRI). Two levels of immersion were used: unimodal (video only) and multimodal (video plus audio). The results indicated increased activity in both auditory and visual sensory cortices during multimodal presentation. Additionally, multimodal presentation elicited increased activity in the hippocampus, a region well known to be involved in learning and memory. The implications of this finding for exposure therapy are discussed.

  16. A Computational Model of the SC Multisensory Neurons: Integrative Capabilities, Maturation, and Plasticity

    Directory of Open Access Journals (Sweden)

    Cristiano Cuppini

    2011-10-01

    Different cortical and subcortical structures present neurons able to integrate stimuli of different sensory modalities. Among the others, one of the most investigated integrative regions is the Superior Colliculus (SC), a midbrain structure whose aim is to guide attentive behaviour and motor responses toward external events. Despite the large amount of experimental data in the literature, the neural mechanisms underlying the SC response are not completely understood. Moreover, recent data indicate that multisensory integration ability is the result of maturation after birth, depending on sensory experience. Mathematical models and computer simulations can be of value to investigate and clarify these phenomena. In the last few years, several models have been implemented to shed light on these mechanisms and to gain a deeper comprehension of the SC capabilities. Here, a neural network model (Cuppini et al., 2010) is extensively discussed. The model considers visual-auditory interaction, and is able to reproduce and explain the main physiological features of multisensory integration in SC neurons, and their acquisition during postnatal life. To reproduce a neonatal condition, the model assumes that during early life: (1) cortical-SC synapses are present but not active; (2) in this phase, responses are driven by non-cortical inputs with very large receptive fields (RFs) and little spatial tuning; (3) a slight spatial preference for the visual inputs is present. Sensory experience is modeled by a "training phase" in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. As a result, cortical-SC synapses are crafted during this period thanks to the Hebbian rules of potentiation and depression, RFs are reduced in size, and neurons exhibit integrative capabilities to cross-modal stimuli, such as multisensory enhancement, inverse effectiveness, and multisensory depression. The utility of the modelling ...
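    As a small companion to the model description above, the sketch below computes the conventional multisensory enhancement index often used to quantify SC integration; the firing-rate values are hypothetical.

```python
# Sketch: the conventional multisensory enhancement index applied to hypothetical firing rates.
def multisensory_enhancement(cross_modal, best_unimodal):
    """Percent gain of the cross-modal response over the best unimodal response."""
    return 100.0 * (cross_modal - best_unimodal) / best_unimodal

visual_alone, auditory_alone, audiovisual = 8.0, 6.0, 18.0   # spikes/s, illustrative values
me = multisensory_enhancement(audiovisual, max(visual_alone, auditory_alone))
print(f"multisensory enhancement: {me:.0f}%")                # > 0 indicates enhancement
```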

  17. Pre-Attentive Auditory Processing of Lexicality

    Science.gov (United States)

    Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan

    2004-01-01

    The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…

  18. Feature Assignment in Perception of Auditory Figure

    Science.gov (United States)

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  19. Biomimetic Sonar for Electrical Activation of the Auditory Pathway

    Directory of Open Access Journals (Sweden)

    D. Menniti

    2017-01-01

    Relying on the mechanism of the bat's echolocation system, a bioinspired electronic device has been developed to investigate the cortical activity of mammals in response to auditory sensorial stimuli. By means of implanted electrodes, acoustical information about the external environment generated by a biomimetic system and converted into electrical signals was delivered to anatomically selected structures of the auditory pathway. Electrocorticographic recordings showed that the cerebral activity response is highly dependent on the information carried by ultrasounds and is frequency-locked with the signal repetition rate. Frequency analysis reveals that delta and beta rhythm content increases, suggesting that sensorial information is successfully transferred and integrated. In addition, principal component analysis highlights how all the stimuli generate patterns of neural activity which can be clearly classified. The results show that brain response is modulated by echo signal features, suggesting that spatial information sent by the biomimetic sonar is efficiently interpreted and encoded by the auditory system. Consequently, these results give a new perspective on artificial environmental perception, which could be used for developing new techniques useful in treating pathological conditions or influencing our perception of the surroundings.

  20. Cochlear injury and adaptive plasticity of the auditory cortex

    Directory of Open Access Journals (Sweden)

    Anna R. Fetoni

    2015-02-01

    Growing evidence suggests that cochlear stressors such as noise exposure and aging can induce homeostatic/maladaptive changes in the central auditory system from the brainstem to the cortex. Studies centered on such changes have revealed several mechanisms that operate in the context of sensory disruption after insult (noise trauma, drug- or age-related injury). Oxidative stress is central to current theories of induced sensorineural hearing loss and aging, and interventions to attenuate the hearing loss are based on antioxidant agents. The present review addresses the recent literature on the alterations in hair cells and spiral ganglion neurons due to noise-induced oxidative stress in the cochlea, as well as on the impact of cochlear damage on auditory cortex neurons. The emerging picture emphasizes that noise-induced deafferentation and the upward spread of cochlear damage are associated with altered dendritic architecture of auditory pyramidal neurons. The cortical modifications may be reversed by treatment with antioxidants counteracting the cochlear redox imbalance. These findings open new therapeutic approaches to treat the functional consequences of the cortical reorganization following cochlear damage.

  1. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  2. Neural plasticity of development and learning.

    Science.gov (United States)

    Galván, Adriana

    2010-06-01

    Development and learning are powerful agents of change across the lifespan that induce robust structural and functional plasticity in neural systems. An unresolved question in developmental cognitive neuroscience is whether development and learning share the same neural mechanisms associated with experience-related neural plasticity. In this article, I outline the conceptual and practical challenges of this question, review insights gleaned from adult studies, and describe recent strides toward examining this topic across development using neuroimaging methods. I suggest that development and learning are not two completely separate constructs and instead, that they exist on a continuum. While progressive and regressive changes are central to both, the behavioral consequences associated with these changes are closely tied to the existing neural architecture of maturity of the system. Eventually, a deeper, more mechanistic understanding of neural plasticity will shed light on behavioral changes across development and, more broadly, about the underlying neural basis of cognition. (c) 2010 Wiley-Liss, Inc.

  3. Sex differences in the representation of call stimuli in a songbird secondary auditory area.

    Science.gov (United States)

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differ between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of information about the

  4. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    Directory of Open Access Journals (Sweden)

    Nicolas Giret

    2015-10-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differ between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of

  5. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. No Need for Templates in the Auditory Enhancement Effect.

    Science.gov (United States)

    Carcagno, Samuele; Semal, Catherine; Demany, Laurent

    2013-01-01

    The audibility of a target tone in a multitone background masker is enhanced by the presentation of a precursor sound consisting of the masker alone. There is evidence that precursor-induced neural adaptation plays a role in this perceptual enhancement. However, the precursor may also be strategically used by listeners as a spectral template of the following masker to better segregate it from the target. In the present study, we tested this hypothesis by measuring the audibility of a target tone in a multitone masker after the presentation of precursors which, in some conditions, were made dissimilar to the masker by gating their components asynchronously. The precursor and the following sound were presented either to the same ear or to opposite ears. In either case, we found no significant difference in the amount of enhancement produced by synchronous and asynchronous precursors. In a second experiment, listeners had to judge whether a synchronous multitone complex contained exactly the same tones as a preceding precursor complex or had one tone less. In this experiment, listeners performed significantly better with synchronous than with asynchronous precursors, showing that asynchronous precursors were poorer perceptual templates of the synchronous multitone complexes. Overall, our findings indicate that precursor-induced auditory enhancement cannot be fully explained by the strategic use of the precursor as a template of the following masker. Our results are consistent with an explanation of enhancement based on selective neural adaptation taking place at a central locus of the auditory system.

  7. No Need for Templates in the Auditory Enhancement Effect.

    Directory of Open Access Journals (Sweden)

    Samuele Carcagno

    Full Text Available The audibility of a target tone in a multitone background masker is enhanced by the presentation of a precursor sound consisting of the masker alone. There is evidence that precursor-induced neural adaptation plays a role in this perceptual enhancement. However, the precursor may also be strategically used by listeners as a spectral template of the following masker to better segregate it from the target. In the present study, we tested this hypothesis by measuring the audibility of a target tone in a multitone masker after the presentation of precursors which, in some conditions, were made dissimilar to the masker by gating their components asynchronously. The precursor and the following sound were presented either to the same ear or to opposite ears. In either case, we found no significant difference in the amount of enhancement produced by synchronous and asynchronous precursors. In a second experiment, listeners had to judge whether a synchronous multitone complex contained exactly the same tones as a preceding precursor complex or had one tone less. In this experiment, listeners performed significantly better with synchronous than with asynchronous precursors, showing that asynchronous precursors were poorer perceptual templates of the synchronous multitone complexes. Overall, our findings indicate that precursor-induced auditory enhancement cannot be fully explained by the strategic use of the precursor as a template of the following masker. Our results are consistent with an explanation of enhancement based on selective neural adaptation taking place at a central locus of the auditory system.
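
    As a concrete illustration of the stimulus manipulation described above, the sketch below generates a multitone precursor whose components are gated either synchronously or with staggered onsets. This is a minimal Python sketch assuming NumPy; the component frequencies, duration, and onset asynchronies are illustrative choices, not the values used in the study.

    ```python
    import numpy as np

    FS = 44100  # sampling rate (Hz), assumed for illustration

    def multitone(freqs_hz, dur_s, onsets_s=None, fs=FS):
        """Sum of pure tones; nonzero `onsets_s` delay individual components (asynchronous gating)."""
        n = int(dur_s * fs)
        t = np.arange(n) / fs
        if onsets_s is None:
            onsets_s = [0.0] * len(freqs_hz)
        out = np.zeros(n)
        for f, onset in zip(freqs_hz, onsets_s):
            start = int(onset * fs)
            out[start:] += np.sin(2 * np.pi * f * t[: n - start])
        return out / len(freqs_hz)

    masker_freqs = [400, 630, 1000, 1600, 2500]              # illustrative component set
    sync_precursor = multitone(masker_freqs, dur_s=0.5)      # all components gated together
    async_precursor = multitone(masker_freqs, dur_s=0.5,
                                onsets_s=[0.0, 0.04, 0.08, 0.12, 0.16])  # staggered onsets
    ```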

  8. Do informal musical activities shape auditory skill development in preschool-age children?

    Science.gov (United States)

    Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari

    2013-08-29

    The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children.

  9. People Capability Maturity Model. SM.

    Science.gov (United States)

    1995-09-01

    People Capability Maturity Model, CMU/SEI-95-MM-02, Carnegie Mellon University Software Engineering Institute, 1995.

  10. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  11. Delayed Auditory Feedback and Movement

    Science.gov (United States)

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…

  12. Molecular approach of auditory neuropathy.

    Science.gov (United States)

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

    Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene sites were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases and nine (56%) had the wild-type allele (CC). Of these mutants, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals had mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  13. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen eKaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  14. Internal auditory canal (IAC) stenosis: imaging Findings

    International Nuclear Information System (INIS)

    Ortiz Jimenez, Johanna; Roa, Jose Luis; Figueroa, Ramon E

    2011-01-01

    Objectives: To describe the computed tomography (CT) and magnetic resonance (MR) findings in a patient with a diagnosis of internal auditory canal (IAC) stenosis. To describe the embryological development of the IAC structures and the natural history of IAC stenosis. Methods: A 4 year old girl presents with sensorineural hearing loss and bilateral recurrent otitis media. The temporal bone CT shows diminished left IAC diameter (less than 2 mm), right IAC absence and normal inner ear structures. These findings are pathognomonic for left IAC stenosis. The MR findings include left IAC stenosis and IAC neural structures absence secondary to aplasia of the vestibulocochlear nerve on each IAC. Results: Hypoplasia/aplasia of the vestibulocochlear nerve in association with IAC stenosis is an important consideration in the differential diagnosis of sensorineural hearing loss, as it is a relative contraindication for cochlear implant placement. Conclusions: IAC stenosis and vestibulocochlear nerve hypoplasia/aplasia must be excluded as an etiology of sensorineural hearing loss. The diagnosis can be made by CT and MR.

  15. Internal auditory canal (IAC) stenosis: Imaging findings

    International Nuclear Information System (INIS)

    Ortiz J, Johanna; Roa, Jose L; Figueroa, Ramon E

    2011-01-01

    Objectives: To describe the computed tomography (CT) and magnetic resonance (MR) findings in a patient with a diagnosis of internal auditory canal (IAC) stenosis. To describe the embryological development of the IAC structures and the natural history of IAC stenosis. Methods: A 4 year old girl presents with sensorineural hearing loss and bilateral recurrent otitis media. The temporal bone CT shows diminished left IAC diameter (less than 2 mm), right IAC absence and normal inner ear structures. These findings are pathognomonic for left IAC stenosis. The MR findings include left IAC stenosis and IAC neural structures absence secondary to aplasia of the vestibulocochlear nerve on each IAC. Results: Hypoplasia/aplasia of the vestibulocochlear nerve in association with IAC stenosis is an important consideration in the differential diagnosis of sensorineural hearing loss, as it is a relative contraindication for cochlear implant placement. Conclusions: IAC stenosis and vestibulocochlear nerve hypoplasia/aplasia must be excluded as an etiology of sensorineural hearing loss. The diagnosis can be made by CT and MR.

  16. Maturity effects in energy futures

    Energy Technology Data Exchange (ETDEWEB)

    Serletis, Apostolos (Calgary Univ., AB (CA). Dept. of Economics)

    1992-04-01

    This paper examines the effects of maturity on futures price volatility and trading volume for 129 energy futures contracts recently traded on the NYMEX. The results provide support for the maturity effect hypothesis, that is, energy futures prices become more volatile and trading volume increases as futures contracts approach maturity. (author).
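
    To make the maturity effect testable in code, the sketch below regresses a contract's daily volatility on its days to maturity; a negative slope is consistent with the hypothesis. This is a simplified Python (NumPy/SciPy) illustration on synthetic data with an assumed effect, not the paper's econometric specification or its NYMEX dataset.

    ```python
    import numpy as np
    from scipy.stats import linregress

    # Synthetic daily data for one hypothetical contract; the study itself covers
    # 129 NYMEX energy contracts, whose data are not reproduced here.
    rng = np.random.default_rng(8)
    days_to_maturity = np.arange(120, 0, -1)
    # Assumed pattern: volatility rises as maturity approaches.
    daily_volatility = 0.010 + 0.0001 * (120 - days_to_maturity) + rng.normal(0, 0.001, 120)

    # The maturity-effect hypothesis predicts a negative slope of volatility on
    # days-to-maturity (closer to maturity -> higher volatility).
    fit = linregress(days_to_maturity, daily_volatility)
    print(f"slope = {fit.slope:.6f} per day, p = {fit.pvalue:.3g}")
    ```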

  17. Cortical evoked potentials to an auditory illusion: binaural beats.

    Science.gov (United States)

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi

    2009-08-01

    To define brain activity corresponding to an auditory illusion of 3 and 6Hz binaural beats in 250Hz or 1000Hz base frequencies, and compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000Hz to one ear and 3 or 6Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3Hz and 6Hz, in base frequencies of 250Hz and 1000Hz. Tones were 2000ms in duration and presented with approximately 1s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P50, N100, and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and to the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from left and right ear converges and interacts in the central auditory brainstem pathways to generate beats of neural activity to modulate activities in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low frequency beats can be recorded from the scalp.
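
    For readers who want to reproduce the kind of dichotic stimulus described above, the sketch below synthesizes a stereo tone pair whose ears differ by the beat frequency (e.g., 250 Hz left, 253 Hz right, yielding a 3 Hz beat). It is a minimal Python/NumPy sketch; the duration, ramp length, and amplitudes are illustrative assumptions rather than the exact stimulus parameters of the study.

    ```python
    import numpy as np

    def binaural_beat(base_hz=250.0, beat_hz=3.0, dur_s=2.0, fs=44100):
        """Return a stereo array: base tone in the left ear, base+beat in the right.

        Each ear receives an unmodulated pure tone; the beat percept arises
        centrally, not in the waveform delivered to either ear.
        """
        t = np.arange(int(dur_s * fs)) / fs
        left = np.sin(2 * np.pi * base_hz * t)
        right = np.sin(2 * np.pi * (base_hz + beat_hz) * t)
        # 10 ms raised-cosine ramps to avoid onset/offset clicks
        ramp = int(0.01 * fs)
        env = np.ones_like(t)
        env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
        env[-ramp:] = env[:ramp][::-1]
        return np.column_stack([left * env, right * env])

    stimulus = binaural_beat(base_hz=250.0, beat_hz=3.0)  # 3 Hz beat on a 250 Hz base
    ```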

  18. Auditory and visual connectivity gradients in frontoparietal cortex.

    Science.gov (United States)

    Braga, Rodrigo M; Hellyer, Peter J; Wise, Richard J S; Leech, Robert

    2017-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long-range connections to sensory cortices. Hum Brain Mapp 38:255-270, 2017. © 2016 Wiley Periodicals, Inc. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  19. Brain networks underlying mental imagery of auditory and visual information.

    Science.gov (United States)

    Zvyagintsev, Mikhail; Clemens, Benjamin; Chechko, Natalya; Mathiak, Krystyna A; Sack, Alexander T; Mathiak, Klaus

    2013-05-01

    Mental imagery is a complex cognitive process that resembles the experience of perceiving an object when this object is not physically present to the senses. It has been shown that, depending on the sensory nature of the object, mental imagery also involves corresponding sensory neural mechanisms. However, it remains unclear which areas of the brain subserve supramodal imagery processes that are independent of the object modality, and which brain areas are involved in modality-specific imagery processes. Here, we conducted a functional magnetic resonance imaging study to reveal supramodal and modality-specific networks of mental imagery for auditory and visual information. A common supramodal brain network independent of imagery modality, two separate modality-specific networks for imagery of auditory and visual information, and a common deactivation network were identified. The supramodal network included brain areas related to attention, memory retrieval, motor preparation and semantic processing, as well as areas considered to be part of the default-mode network and multisensory integration areas. The modality-specific networks comprised brain areas involved in processing of respective modality-specific sensory information. Interestingly, we found that imagery of auditory information led to a relative deactivation within the modality-specific areas for visual imagery, and vice versa. In addition, mental imagery of both auditory and visual information widely suppressed the activity of primary sensory and motor areas, i.e., the common deactivation network. These findings have important implications for understanding the mechanisms that are involved in the generation of mental imagery. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Sparse representation of sounds in the unanesthetized auditory cortex.

    Directory of Open Access Journals (Sweden)

    Tomás Hromádka

    2008-01-01

    Full Text Available How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
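
    The contrast drawn above between lognormal and exponential descriptions of population firing rates can be illustrated with a simple model comparison. The sketch below (Python with SciPy) fits both distributions to a synthetic set of firing rates that merely stands in for the recorded data, which is not reproduced here; parameter values are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic stand-in for per-neuron firing rates (spikes/s).
    rng = np.random.default_rng(1)
    rates = rng.lognormal(mean=0.0, sigma=1.5, size=200)

    # Fit both candidate distributions and compare log-likelihoods.
    ln_shape, ln_loc, ln_scale = stats.lognorm.fit(rates, floc=0)
    ex_loc, ex_scale = stats.expon.fit(rates, floc=0)

    ll_lognorm = stats.lognorm.logpdf(rates, ln_shape, ln_loc, ln_scale).sum()
    ll_expon = stats.expon.logpdf(rates, ex_loc, ex_scale).sum()
    print(f"log-likelihood lognormal: {ll_lognorm:.1f}, exponential: {ll_expon:.1f}")

    # Sparseness in the sense used above: fraction of neurons exceeding 20 spikes/s.
    print("fraction > 20 spikes/s:", np.mean(rates > 20))
    ```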

  1. Time course of dynamic range adaptation in the auditory nerve

    Science.gov (United States)

    Wang, Grace I.; Dean, Isabel; Delgutte, Bertrand

    2012-01-01

    Auditory adaptation to sound-level statistics occurs as early as in the auditory nerve (AN), the first stage of neural auditory processing. In addition to firing rate adaptation characterized by a rate decrement dependent on previous spike activity, AN fibers show dynamic range adaptation, which is characterized by a shift of the rate-level function or dynamic range toward the most frequently occurring levels in a dynamic stimulus, thereby improving the precision of coding of the most common sound levels (Wen B, Wang GI, Dean I, Delgutte B. J Neurosci 29: 13797–13808, 2009). We investigated the time course of dynamic range adaptation by recording from AN fibers with a stimulus in which the sound levels periodically switch from one nonuniform level distribution to another (Dean I, Robinson BL, Harper NS, McAlpine D. J Neurosci 28: 6430–6438, 2008). Dynamic range adaptation occurred rapidly, but its exact time course was difficult to determine directly from the data because of the concomitant firing rate adaptation. To characterize the time course of dynamic range adaptation without the confound of firing rate adaptation, we developed a phenomenological “dual adaptation” model that accounts for both forms of AN adaptation. When fitted to the data, the model predicts that dynamic range adaptation occurs as rapidly as firing rate adaptation, over 100–400 ms, and the time constants of the two forms of adaptation are correlated. These findings suggest that adaptive processing in the auditory periphery in response to changes in mean sound level occurs rapidly enough to have significant impact on the coding of natural sounds. PMID:22457465
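
    The idea of a rate-level function whose operating point tracks the prevailing sound level can be sketched with a first-order update rule. The Python/NumPy sketch below is an illustrative toy, not the published dual-adaptation model; the sigmoid parameters, segment duration, and time constant (chosen within the 100-400 ms range quoted above) are assumptions.

    ```python
    import numpy as np

    def rate_level(level_db, midpoint_db, max_rate=200.0, slope=0.2):
        """Sigmoidal rate-level function; the midpoint sets the dynamic range."""
        return max_rate / (1.0 + np.exp(-slope * (level_db - midpoint_db)))

    def adapt_midpoint(levels_db, tau_ms=250.0, dt_ms=50.0, start_db=40.0):
        """First-order shift of the rate-level midpoint toward the ongoing sound level.

        dt_ms is the assumed duration of each level segment; tau_ms is an assumed
        adaptation time constant within the range reported above.
        """
        midpoints = np.empty_like(levels_db, dtype=float)
        m = start_db
        alpha = dt_ms / tau_ms
        for i, lvl in enumerate(levels_db):
            m += alpha * (lvl - m)          # exponential approach to the current level
            midpoints[i] = m
        return midpoints

    # Levels switching between two distributions, loosely mimicking the switching paradigm cited.
    rng = np.random.default_rng(2)
    levels = np.concatenate([rng.normal(45, 5, 100), rng.normal(75, 5, 100)])
    mid = adapt_midpoint(levels)
    rates = rate_level(levels, mid)
    ```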

  2. Neural correlates of successful semantic processing during propofol sedation

    NARCIS (Netherlands)

    Adapa, Ram M.; Davis, Matthew H.; Stamatakis, Emmanuel A.; Absalom, Anthony R.; Menon, David K.

    Sedation has a graded effect on brain responses to auditory stimuli: perceptual processing persists at sedation levels that attenuate more complex processing. We used fMRI in healthy volunteers sedated with propofol to assess changes in neural responses to spoken stimuli. Volunteers were scanned

  3. Predictive Acoustic Tracking with an Adaptive Neural Mechanism

    DEFF Research Database (Denmark)

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    model of the lizard peripheral auditory system to extract information regarding sound direction. This information is utilised by a neural machinery to learn the acoustic signal’s velocity through fast and unsupervised correlation-based learning adapted from differential Hebbian learning. This approach...

  4. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  5. Prenatal Nicotine Exposure Disrupts Infant Neural Markers of Orienting.

    Science.gov (United States)

    King, Erin; Campbell, Alana; Belger, Aysenil; Grewen, Karen

    2017-08-17

    Prenatal nicotine exposure (PNE) from maternal cigarette-smoking is linked to developmental deficits, including impaired auditory processing, language, generalized intelligence, attention and sleep. Fetal brain undergoes massive growth, organization and connectivity during gestation, making it particularly vulnerable to neurotoxic insult. Nicotine binds to nicotinic acetylcholine receptors, which are extensively involved in growth, connectivity and function of developing neural circuitry and neurotransmitter systems. Thus, PNE may have long-term impact on neurobehavioral development. The purpose of this study was to compare the auditory K-complex, an event-related potential reflective of auditory gating, sleep preservation and memory consolidation during sleep, in infants with and without PNE and to relate these neural correlates to neurobehavioral development. We compared brain responses to an auditory paired-click paradigm in 3 to 5-month-old infants during Stage 2 sleep, when the K-complex is best observed. We measured component amplitude and delta activity during the K-complex. PNE may impair auditory sensory gating, which may contribute to disrupted sleep and to reduced auditory discrimination and learning, attention re-orienting and/or arousal during wakefulness reported in other studies. Links between PNE and reduced K-complex amplitude and delta power may represent altered cholinergic and GABAergic synaptic programming, and possibly reflect early neural bases for PNE-linked disruptions in sleep quality and auditory processing. These may pose significant disadvantage for language acquisition, attention, and social interaction necessary for academic and social success. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
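
    One simple way to quantify delta activity of the kind measured above is to integrate a Welch power spectral density estimate over the delta band. The sketch below (Python, SciPy) does this on a synthetic segment; the band edges, window length, and sampling rate are illustrative assumptions, not the study's analysis pipeline.

    ```python
    import numpy as np
    from scipy.signal import welch

    def delta_power(eeg, fs, band=(0.5, 4.0)):
        """Approximate band power in the delta range from Welch's PSD estimate."""
        freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.sum(psd[mask]) * (freqs[1] - freqs[0])

    # Illustrative use on a synthetic 30 s segment sampled at 250 Hz.
    fs = 250
    segment = np.random.default_rng(4).standard_normal(30 * fs)
    print("delta-band power:", delta_power(segment, fs))
    ```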

  6. A comparative study of event-related coupling patterns during an auditory oddball task in schizophrenia

    Science.gov (United States)

    Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto

    2015-02-01

    Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
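
    Of the coupling measures named above, the phase-locking value is the most compact to write down: PLV = |mean over trials of exp(i(φx − φy))|. The sketch below (Python, SciPy) computes a band-limited PLV between two channels across trials; the filter order, band edges, and synthetic data are assumptions for illustration only, not the study's preprocessing.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def plv(x_trials, y_trials, fs, band=(4.0, 8.0)):
        """Phase-locking value between two channels across trials.

        x_trials, y_trials: arrays of shape (n_trials, n_samples).
        Returns PLV per sample (0 = random phase differences, 1 = perfect locking).
        """
        b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
        phx = np.angle(hilbert(filtfilt(b, a, x_trials, axis=1), axis=1))
        phy = np.angle(hilbert(filtfilt(b, a, y_trials, axis=1), axis=1))
        return np.abs(np.mean(np.exp(1j * (phx - phy)), axis=0))

    # Illustrative use: theta-band PLV between two channels, 40 trials of 1 s at 500 Hz.
    fs = 500
    rng = np.random.default_rng(0)
    x = rng.standard_normal((40, fs))
    y = rng.standard_normal((40, fs))
    theta_plv = plv(x, y, fs, band=(4.0, 8.0))
    ```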

  7. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention.

    Science.gov (United States)

    Forte, Antonio Elia; Etard, Octave; Reichenbach, Tobias

    2017-10-10

    Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.

  8. The Effect of Delayed Auditory Feedback on Activity in the Temporal Lobe while Speaking: A Positron Emission Tomography Study

    Science.gov (United States)

    Takaso, Hideki; Eisner, Frank; Wise, Richard J. S.; Scott, Sophie K.

    2010-01-01

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many nonstuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission…

  9. Antibody affinity maturation

    DEFF Research Database (Denmark)

    Skjødt, Mette Louise

    Yeast surface display is an effective tool for antibody affinity maturation because yeast can be used as an all-in-one workhorse to assemble, display and screen diversified antibody libraries. By employing the natural ability of yeast Saccharomyces cerevisiae to efficiently recombine multiple DNA...

  10. Differences in neurogenesis differentiate between core and shell regions of auditory nuclei in the turtle (Pelodiscus sinensis): evolutionary implications.

    Science.gov (United States)

    Zeng, Shao-Ju; Xi, Chao; Zhang, Xin-Wen; Zuo, Ming-Xue

    2007-01-01

    There is a clear core-versus-shell distinction in cytoarchitecture, electrophysiological properties and neural connections in the mesencephalic and diencephalic auditory nuclei of amniotes. Determining whether the embryogenesis of auditory nuclei shows a similar organization is helpful for further understanding the constituent organization and evolution of auditory nuclei. Therefore in the present study, we injected [(3)H]-thymidine into turtle embryos (Pelodiscus sinensis) at various stages of development. Upon hatching, [(3)H]-thymidine labeling was examined in both the core and shell auditory regions in the midbrain, diencephalon and dorsal ventricular ridge. Met-enkephalin and substance P immunohistochemistry was used to distinguish the core and shell regions. In the mesencephalic auditory nucleus, the occurrence of heavily labeled neurons in the nucleus centralis of the torus semicircularis reached its peak at embryonic day 9, one day later than the surrounding shell. In the diencephalic auditory nucleus, the production of heavily labeled neurons in the central region of the reuniens (Re) was highest at embryonic day (E) 8, one day later than that in the shell region of reuniens. In the region of the dorsal ventricular ridge that received inputs from the central region of Re, the appearance of heavily labeled neurons also reached a peak one day later than that in the area receiving inputs from the shell region of reuniens. Thus, there is a core-versus-shell organization of neuronal generation in reptilian auditory areas. Copyright (c) 2007 S. Karger AG, Basel.

  11. Is the auditory evoked P2 response a biomarker of learning?

    Directory of Open Access Journals (Sweden)

    Kelly eTremblay

    2014-02-01

    Full Text Available Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography and magnetoencephalography have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP, as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences; including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time, and stimulus exposure, that did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What’s more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre

  12. Comparison of perceptual properties of auditory streaming between spectral and amplitude modulation domains.

    Science.gov (United States)

    Yamagishi, Shimpei; Otsuka, Sho; Furukawa, Shigeto; Kashino, Makio

    2017-07-01

    The two-tone sequence (ABA_), which comprises two different sounds (A and B) and a silent gap, has been used to investigate how the auditory system organizes sequential sounds depending on various stimulus conditions or brain states. Auditory streaming can be evoked by differences not only in the tone frequency ("spectral cue": ΔF TONE , TONE condition) but also in the amplitude modulation rate ("AM cue": ΔF AM , AM condition). The aim of the present study was to explore the relationship between the perceptual properties of auditory streaming for the TONE and AM conditions. A sequence with a long duration (400 repetitions of ABA_) was used to examine the property of the bistability of streaming. The ratio of feature differences that evoked an equivalent probability of the segregated percept was close to the ratio of the Q-values of the auditory and modulation filters, consistent with a "channeling theory" of auditory streaming. On the other hand, for values of ΔF AM and ΔF TONE evoking equal probabilities of the segregated percept, the number of perceptual switches was larger for the TONE condition than for the AM condition, indicating that the mechanism(s) that determine the bistability of auditory streaming are different between or sensitive to the two domains. Nevertheless, the number of switches for individual listeners was positively correlated between the spectral and AM domains. The results suggest a possibility that the neural substrates for spectral and AM processes share a common switching mechanism but differ in location and/or in the properties of neural activity or the strength of internal noise at each level. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
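
    To make the two cue types concrete, the sketch below builds an ABA_ sequence in which A and B differ either in carrier frequency (the spectral cue) or in amplitude-modulation rate (the AM cue). It is a minimal Python/NumPy sketch; tone durations, frequencies, and modulation rates are illustrative assumptions, not the study's parameters.

    ```python
    import numpy as np

    FS = 44100  # assumed sampling rate (Hz)

    def tone(freq_hz, dur_s, am_hz=None, fs=FS):
        """Pure tone, optionally 100%-depth sinusoidally amplitude-modulated."""
        t = np.arange(int(dur_s * fs)) / fs
        x = np.sin(2 * np.pi * freq_hz * t)
        if am_hz is not None:
            x *= 0.5 * (1 + np.sin(2 * np.pi * am_hz * t))
        return x

    def aba_sequence(a_kwargs, b_kwargs, tone_dur=0.1, n_triplets=10, fs=FS):
        """Concatenate ABA_ triplets; '_' is a silent gap of one tone duration."""
        silence = np.zeros(int(tone_dur * fs))
        triplet = np.concatenate([tone(dur_s=tone_dur, **a_kwargs),
                                  tone(dur_s=tone_dur, **b_kwargs),
                                  tone(dur_s=tone_dur, **a_kwargs),
                                  silence])
        return np.tile(triplet, n_triplets)

    # Spectral-cue condition: A and B differ in carrier frequency.
    seq_tone = aba_sequence({"freq_hz": 500.0}, {"freq_hz": 700.0})
    # AM-cue condition: same carrier, A and B differ in modulation rate.
    seq_am = aba_sequence({"freq_hz": 1000.0, "am_hz": 40.0},
                          {"freq_hz": 1000.0, "am_hz": 80.0})
    ```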

  13. Segregation and integration of auditory streams when listening to multi-part music.

    Science.gov (United States)

    Ragert, Marie; Fairhurst, Merle T; Keller, Peter E

    2014-01-01

    In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams

  14. Segregation and integration of auditory streams when listening to multi-part music.

    Directory of Open Access Journals (Sweden)

    Marie Ragert

    Full Text Available In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment or temporally (asynchronies vs. no asynchronies between parts, and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of

  15. Modulatory Effects of Attention on Lateral Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Alva Engell

    Full Text Available Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the study at hand aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we present two types of masking sounds (white noise vs. white noise passing through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center-frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds while on the other half of the trials subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory compared to visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention on the modulation of this mechanism in more evaluative processing stages.

  16. Modulatory Effects of Attention on Lateral Inhibition in the Human Auditory Cortex.

    Science.gov (United States)

    Engell, Alva; Junghöfer, Markus; Stein, Alwina; Lau, Pia; Wunderlich, Robert; Wollbrink, Andreas; Pantev, Christo

    2016-01-01

    Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the study at hand aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we present two types of masking sounds (white noise vs. white noise passing through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center-frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds while on the other half of the trials subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory compared to visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention on the modulation of this mechanism in more evaluative processing stages.
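
    A masking sound of the type used above (white noise with a spectral notch framing the test-tone frequency) can be approximated with a band-stop filter. The sketch below (Python, SciPy) is illustrative only; the notch width, filter order, and sampling rate are assumptions rather than the study's stimulus specification.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 44100  # assumed sampling rate (Hz)

    def notched_noise(dur_s, center_hz, notch_octaves=0.5, order=4, fs=FS):
        """White noise with a band-stop 'notch' centered on the test-tone frequency.

        The notch width is given in octaves around `center_hz`; values are illustrative.
        """
        noise = np.random.default_rng(6).standard_normal(int(dur_s * fs))
        lo = center_hz * 2 ** (-notch_octaves / 2)
        hi = center_hz * 2 ** (notch_octaves / 2)
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandstop")
        return filtfilt(b, a, noise)

    masker_notched = notched_noise(dur_s=0.5, center_hz=1000.0)  # notch framing a 1 kHz test tone
    masker_white = np.random.default_rng(7).standard_normal(int(0.5 * FS))
    ```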

  17. Non-Monotonic Relation Between Noise Exposure Severity and Neuronal Hyperactivity in the Auditory Midbrain

    Directory of Open Access Journals (Sweden)

    Lara Li Hesse

    2016-08-01

    Full Text Available The occurrence of tinnitus can be linked to hearing loss in the majority of cases, but there is nevertheless a large degree of unexplained heterogeneity in the relation between hearing loss and tinnitus. Part of the problem might be that hearing loss is usually quantified in terms of increased hearing thresholds, which only provides limited information about the underlying cochlear damage. Moreover, noise exposure that does not cause hearing threshold loss can still lead to hidden hearing loss (HHL), i.e. functional deafferentation of auditory nerve fibres (ANFs) through loss of synaptic ribbons in inner hair cells. Whilst it is known that increased hearing thresholds can trigger increases in spontaneous neural activity in the central auditory system, i.e. a putative neural correlate of tinnitus, the central effects of HHL have not yet been investigated. Here, we exposed mice to octave-band noise at 100 and 105 dB SPL, to generate HHL and permanent increases of hearing thresholds, respectively. Deafferentation of ANFs was confirmed through measurement of auditory brainstem responses and cochlear immunohistochemistry. Acute extracellular recordings from the auditory midbrain (inferior colliculus) demonstrated increases in spontaneous neuronal activity (a putative neural correlate of tinnitus) in both groups. Surprisingly, the increase in spontaneous activity was most pronounced in the mice with HHL, suggesting that the relation between hearing loss and neuronal hyperactivity might be more complex than currently understood. Our computational model indicated that these differences in neuronal hyperactivity could arise from different degrees of deafferentation of low-threshold ANFs in the two exposure groups. Our results demonstrate that HHL is sufficient to induce changes in central auditory processing, and they also indicate a non-monotonic relationship between cochlear damage and neuronal hyperactivity, suggesting an explanation for why tinnitus might

  18. Prenatal exposure to multiple pesticides is associated with auditory brainstem response at 9 months in a cohort study of Chinese infants.

    Science.gov (United States)

    Sturza, Julie; Silver, Monica K; Xu, Lin; Li, Mingyan; Mai, Xiaoqin; Xia, Yankai; Shao, Jie; Lozoff, Betsy; Meeker, John

    2016-01-01

    Pesticides are associated with poorer neurodevelopmental outcomes, but little is known about the effects on sensory functioning. Auditory brainstem response (ABR) and pesticide data were available for 27 healthy, full-term 9-month-old infants participating in a larger study of early iron deficiency and neurodevelopment. Cord blood was analyzed by gas chromatography-mass spectrometry for levels of 20 common pesticides. The ABR forward-masking condition consisted of a click stimulus (masker) delivered via ear canal transducers followed by an identical stimulus delayed by 8, 16, or 64 milliseconds (ms). ABR peak latencies were evaluated as a function of masker-stimulus time interval. Shorter wave latencies reflect faster neural conduction, more mature auditory pathways, and a greater degree of myelination. Linear regression models were used to evaluate associations between total number of pesticides detected and ABR outcomes. We considered an additive or synergistic effect of poor iron status by stratifying our analysis by newborn ferritin (based on median split). Infants in the sample were highly exposed to pesticides; a mean of 4.1 pesticides were detected (range 0-9). ABR Wave V latency and central conduction time (CCT) were associated with the number of pesticides detected in cord blood for the 64ms and non-masker conditions. A similar pattern was seen for CCT from the 8ms and 16ms conditions, although statistical significance was not reached. Increased pesticide exposure was associated with longer latency. The relation between number of pesticides detected in cord blood and CCT depended on the infant's cord blood ferritin level. Specifically, the relation was present in the lower cord blood ferritin group but not the higher cord blood ferritin group. ABR processing was slower in infants with greater prenatal pesticide exposure, indicating impaired neuromaturation. Infants with lower cord blood ferritin appeared to be more sensitive to the effects of prenatal pesticide
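
    The analysis strategy described above, linear regression of an ABR outcome on the number of pesticides detected, stratified by a median split on cord-blood ferritin, can be sketched as follows. The data below are synthetic stand-ins with assumed effect sizes; the actual cohort data and any covariate adjustments are not reproduced, so the code demonstrates the analysis, not the finding.

    ```python
    import numpy as np
    from scipy.stats import linregress

    # Synthetic stand-in for the cohort (n = 27 in the study); values are assumptions.
    rng = np.random.default_rng(5)
    n = 27
    n_pesticides = rng.integers(0, 10, size=n)                  # pesticides detected in cord blood
    ferritin = rng.lognormal(mean=4.5, sigma=0.5, size=n)       # cord-blood ferritin (ng/mL)
    cct_ms = 5.0 + 0.05 * n_pesticides + rng.normal(0, 0.2, n)  # central conduction time (ms)

    # Stratify by a median split on ferritin, as described above, and fit each stratum.
    low = ferritin < np.median(ferritin)
    for label, mask in [("lower ferritin", low), ("higher ferritin", ~low)]:
        fit = linregress(n_pesticides[mask], cct_ms[mask])
        print(f"{label}: slope = {fit.slope:.3f} ms/pesticide, p = {fit.pvalue:.3f}")
    ```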

  19. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Full Text Available Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data was obtained from 44 typically developing (TD) children (mean age 10.4±2.4 years) and 95 children with ASD (mean age 10.2±2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  20. Auditory neuropathy/auditory dyssynchrony in children with cochlear implants Neuropatia auditiva/dessincronia auditiva em crianças usuárias de implante coclear

    Directory of Open Access Journals (Sweden)

    Ana Claudia Martinho de Carvalho

    2011-08-01

    Full Text Available The electrical stimulation generated by the Cochlear Implant (CI) may improve the neural synchrony and hence contribute to the development of auditory skills in patients with Auditory Neuropathy/Auditory Dyssynchrony (AN/AD). AIM: Prospective cohort cross-sectional study to evaluate the auditory performance and the characteristics of the electrically evoked compound action potential (ECAP) in 18 children with AN/AD and cochlear implants. MATERIAL AND METHODS: The auditory perception was evaluated by sound field thresholds and speech perception tests. To evaluate ECAP's characteristics, the threshold and amplitude of neural response were evaluated at 80Hz and 35Hz. RESULTS: No significant statistical difference was found concerning the development of auditory skills. Differences in ECAP characteristics at the 80 and 35Hz stimulation rates were also not statistically significant. CONCLUSIONS: The CI was seen as an efficient resource to develop auditory skills in 94% of the AN/AD patients studied. The auditory perception benefits and the possibility to measure ECAP showed that the electrical stimulation could compensate for the neural dyssynchrony caused by the AN/AD. However, a unique clinical procedure cannot be proposed at this point. Therefore, a careful and complete evaluation of each AN/AD patient before recommending a Cochlear Implant is advised. Clinical Trials: NCT01023932

  1. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    Science.gov (United States)

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  2. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    Science.gov (United States)

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  3. Neuroscience illuminating the influence of auditory or phonological intervention on language-related deficits

    Directory of Open Access Journals (Sweden)

    Sari Ylinen

    2015-02-01

    Full Text Available Remediation programs for language-related learning deficits are urgently needed to enable equal opportunities in education. To meet this need, different training and intervention programs have been developed. Here we review, from an educational perspective, studies that have explored the neural basis of behavioral changes induced by auditory or phonological training in dyslexia, specific language impairment (SLI), and language-learning impairment (LLI). Training has been shown to induce plastic changes in deficient neural networks. In dyslexia, these include, most consistently, increased or normalized activation of previously hypoactive inferior frontal and occipito-temporal areas. In SLI and LLI, studies have shown the strengthening of previously weak auditory brain responses as a result of training. The combination of behavioral and brain measures of remedial gains has potential to increase the understanding of the causes of language-related deficits, which may help to target remedial interventions more accurately to the core problem.

  4. Rapid Auditory System Adaptation Using a Virtual Auditory Environment

    Directory of Open Access Journals (Sweden)

    Gaëtan Parseihian

    2011-10-01

    Full Text Available Various studies have highlighted plasticity of the auditory system induced by visual stimuli, an approach that limits training to the visual field of perception. The aim of the present study is to investigate auditory system adaptation using an audio-kinesthetic platform. Participants were placed in a Virtual Auditory Environment allowing the association of the physical position of a virtual sound source with an alternate set of acoustic spectral cues, or Head-Related Transfer Functions (HRTFs), through the use of a tracked ball manipulated by the subject. This set-up has the advantage of not being limited to the visual field while also offering a natural perception-action coupling through the constant awareness of one's hand position. Adaptation to non-individualized HRTFs was realized through a spatial search game application. A total of 25 subjects participated: a group presented with modified cues using non-individualized HRTFs and a control group using individually measured HRTFs to account for any learning effect due to the game itself. The training game lasted 12 minutes and was repeated over 3 consecutive days. Adaptation effects were measured with repeated localization tests. Results showed a significant performance improvement for vertical localization and a significant reduction in the front/back confusion rate after 3 sessions.
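
    As an illustration of the signal processing behind such a virtual auditory environment, the sketch below renders a mono source binaurally by convolving it with left- and right-ear head-related impulse responses. The HRIR arrays and source signal are placeholders, not the HRTF set used in the study.

        # Minimal sketch of binaural rendering with a (possibly non-individualized) HRTF.
        # The HRIRs here are random placeholders; in practice they would be selected from
        # a measured HRTF set for the current source direction (updated as the ball moves).
        import numpy as np
        from scipy.signal import fftconvolve

        fs = 44100
        mono_source = np.random.randn(fs)          # 1 s of noise as a stand-in source signal
        hrir_left = np.random.randn(256) * 0.01    # placeholder head-related impulse responses
        hrir_right = np.random.randn(256) * 0.01

        # Convolve the mono source with each ear's HRIR to obtain the binaural signal.
        binaural = np.stack([
            fftconvolve(mono_source, hrir_left, mode="full"),
            fftconvolve(mono_source, hrir_right, mode="full"),
        ], axis=1)
        print(binaural.shape)  # (samples, 2): left and right ear signals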

  5. Neural networks in signal processing

    International Nuclear Information System (INIS)

    Govil, R.

    2000-01-01

    Nuclear Engineering has matured during the last decade. In research and design, control, supervision, maintenance and production, mathematical models and theories are used extensively. In all such applications signal processing is embedded in the process. Artificial Neural Networks (ANNs), because of their nonlinear, adaptive nature, are well suited to such applications where the classical assumptions of linearity and second-order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques that can model an underlying process from example data. They can also adapt their model parameters to statistical changes over time. Algorithms in the framework of neural networks in signal processing have found new application potential in the field of Nuclear Engineering. This paper reviews the fundamentals of neural networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model-based ANNs, statistical learning, eigenstructure-based processing and generalization structures. (orig.)
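
    For readers unfamiliar with the approach, the following minimal sketch (not tied to any application in the paper) trains a small feedforward ANN by gradient descent to model a noisy nonlinear signal, the kind of nonparametric modeling described above.

        # Toy sketch: a one-hidden-layer network fit to a noisy nonlinear target by
        # full-batch gradient descent on mean squared error. All sizes are arbitrary.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-1, 1, 200).reshape(-1, 1)
        y = np.sin(3 * x) + 0.1 * rng.standard_normal(x.shape)   # noisy nonlinear target

        n_hidden = 16
        W1 = rng.standard_normal((1, n_hidden)) * 0.5
        b1 = np.zeros(n_hidden)
        W2 = rng.standard_normal((n_hidden, 1)) * 0.5
        b2 = np.zeros(1)
        lr = 0.05

        for epoch in range(2000):
            h = np.tanh(x @ W1 + b1)          # hidden layer
            y_hat = h @ W2 + b2               # linear output layer
            err = y_hat - y
            # Backpropagate the mean-squared-error gradient.
            grad_W2 = h.T @ err / len(x)
            grad_b2 = err.mean(axis=0)
            dh = (err @ W2.T) * (1 - h ** 2)
            grad_W1 = x.T @ dh / len(x)
            grad_b1 = dh.mean(axis=0)
            W1 -= lr * grad_W1; b1 -= lr * grad_b1
            W2 -= lr * grad_W2; b2 -= lr * grad_b2

        print("final MSE:", float((err ** 2).mean()))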

  6. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can have profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  7. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David Pérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  8. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, from 7 to 17 years old, divided into two groups: Stuttering Group with Auditory Processing Disorders (SGAPD): 10 individuals with central auditory processing disorders, and Stuttering Group (SG): 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to introduce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies on the Stuttering Severity Instrument, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not have statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups, because there was an improvement in fluency only in the individuals without auditory processing disorder.
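
    The statistical comparisons described above can be reproduced in outline with SciPy; the sketch below applies a Wilcoxon signed-rank test to a paired NAF/DAF comparison and a Mann-Whitney U test to a between-group comparison. The disfluency counts are invented purely for illustration.

        # Hedged sketch of the within-group (paired) and between-group (independent)
        # nonparametric tests named above, on made-up disfluency counts.
        import numpy as np
        from scipy.stats import wilcoxon, mannwhitneyu

        sg_naf = np.array([12, 15, 9, 20, 14, 11, 17, 13, 16, 10])    # hypothetical counts
        sg_daf = np.array([8, 11, 7, 15, 10, 9, 12, 10, 11, 7])
        sgapd_daf = np.array([13, 14, 10, 18, 15, 12, 16, 12, 15, 11])

        stat, p_within = wilcoxon(sg_naf, sg_daf)          # paired, intragroup comparison
        u, p_between = mannwhitneyu(sg_daf, sgapd_daf)     # independent, intergroup comparison
        print(f"Wilcoxon p={p_within:.3f}, Mann-Whitney p={p_between:.3f}")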

  9. The Role of Age and Executive Function in Auditory Category Learning

    Science.gov (United States)

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
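
    As a toy illustration of the conjunctive rule-based strategy fit by such computational models, the sketch below classifies stimuli by requiring two hypothetical ripple dimensions to exceed separate criteria; the dimension values and criteria are invented, not taken from the study.

        # Conjunctive rule sketch: category membership requires BOTH dimensions
        # (e.g., spectral and temporal modulation of a ripple sound) to exceed criteria.
        import numpy as np

        rng = np.random.default_rng(1)
        spectral = rng.uniform(0, 1, 100)    # hypothetical spectral-modulation values
        temporal = rng.uniform(0, 1, 100)    # hypothetical temporal-modulation values
        labels = (spectral > 0.5) & (temporal > 0.5)    # ground-truth conjunctive category

        crit_s, crit_t = 0.45, 0.55          # a learner's current decision criteria
        predicted = (spectral > crit_s) & (temporal > crit_t)
        accuracy = np.mean(predicted == labels)
        print(f"conjunctive-rule accuracy: {accuracy:.2f}")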

  10. Discrimination of communication vocalizations by single neurons and groups of neurons in the auditory midbrain.

    Science.gov (United States)

    Schneider, David M; Woolley, Sarah M N

    2010-06-01

    Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with a wide range of abilities, ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than the average of the two inputs and the most discriminating input. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains. These findings suggest that pooling correlated neural responses with the levels of precision observed in the
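
    A highly simplified sketch of the pooling idea follows: Poisson spike counts are simulated for two neurons responding to two songs, and a nearest-mean decoder is applied to each neuron alone and to their summed (pooled) response. All firing rates are invented; this is not the authors' model.

        # Pooling sketch: two similarly tuned simulated neurons, decoded singly and pooled.
        import numpy as np

        rng = np.random.default_rng(2)
        n_trials = 200
        rates = {"songA": (10, 10), "songB": (8, 9)}   # hypothetical mean spike counts

        def simulate(song):
            r1, r2 = rates[song]
            return np.column_stack([rng.poisson(r1, n_trials), rng.poisson(r2, n_trials)])

        a, b = simulate("songA"), simulate("songB")

        def decode_accuracy(feat_a, feat_b):
            # Nearest-mean decoder: assign each trial to the closer class mean.
            mu_a, mu_b = feat_a.mean(), feat_b.mean()
            correct = np.sum(np.abs(feat_a - mu_a) < np.abs(feat_a - mu_b)) \
                    + np.sum(np.abs(feat_b - mu_b) < np.abs(feat_b - mu_a))
            return correct / (2 * n_trials)

        print("neuron 1:", decode_accuracy(a[:, 0], b[:, 0]))
        print("neuron 2:", decode_accuracy(a[:, 1], b[:, 1]))
        # Summing similarly tuned inputs typically beats either input alone.
        print("pooled  :", decode_accuracy(a.sum(axis=1), b.sum(axis=1)))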

  11. An auditory-neuroscience perspective on the development of selective mutism

    Directory of Open Access Journals (Sweden)

    Yael Henkin

    2015-04-01

    Full Text Available Selective mutism (SM is a relatively rare psychiatric disorder of childhood characterized by consistent inability to speak in specific social situations despite the ability to speak normally in others. SM typically involves severe impairments in social and academic functioning. Common complications include school failure, social difficulties in the peer group, and aggravated intra-familial relationships. Although SM has been described in the medical and psychological literatures for many years, the potential underlying neural basis of the disorder has only recently been explored. Here we explore the potential role of specific auditory neural mechanisms in the psychopathology of SM and discuss possible implications for treatment.

  12. Differentiation state determines neural effects on microvascular endothelial cells

    International Nuclear Information System (INIS)

    Muffley, Lara A.; Pan, Shin-Chen; Smith, Andria N.; Ga, Maricar; Hocking, Anne M.; Gibran, Nicole S.

    2012-01-01

    Growing evidence indicates that nerves and capillaries interact paracrinely in uninjured skin and cutaneous wounds. Although mature neurons are the predominant neural cell in the skin, neural progenitor cells have also been detected in uninjured adult skin. The aim of this study was to characterize differential paracrine effects of neural progenitor cells and mature sensory neurons on dermal microvascular endothelial cells. Our results suggest that neural progenitor cells and mature sensory neurons have unique secretory profiles and distinct effects on dermal microvascular endothelial cell proliferation, migration, and nitric oxide production. Neural progenitor cells and dorsal root ganglion neurons secrete different proteins related to angiogenesis. Specific to neural progenitor cells were dipeptidyl peptidase-4, IGFBP-2, pentraxin-3, serpin f1, TIMP-1, TIMP-4 and VEGF. In contrast, endostatin, FGF-1, MCP-1 and thrombospondin-2 were specific to dorsal root ganglion neurons. Microvascular endothelial cell proliferation was inhibited by dorsal root ganglion neurons but unaffected by neural progenitor cells. In contrast, microvascular endothelial cell migration in a scratch wound assay was inhibited by neural progenitor cells and unaffected by dorsal root ganglion neurons. In addition, nitric oxide production by microvascular endothelial cells was increased by dorsal root ganglion neurons but unaffected by neural progenitor cells. -- Highlights: ► Dorsal root ganglion neurons, not neural progenitor cells, regulate microvascular endothelial cell proliferation. ► Neural progenitor cells, not dorsal root ganglion neurons, regulate microvascular endothelial cell migration. ► Neural progenitor cells and dorsal root ganglion neurons do not effect microvascular endothelial tube formation. ► Dorsal root ganglion neurons, not neural progenitor cells, regulate microvascular endothelial cell production of nitric oxide. ► Neural progenitor cells and dorsal root

  13. Reality of auditory verbal hallucinations.

    Science.gov (United States)

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the hallucination felt to the subjects depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  14. Correlation between dental maturity and cervical vertebral maturity.

    Science.gov (United States)

    Chen, Jianwei; Hu, Haikun; Guo, Jing; Liu, Zeping; Liu, Renkai; Li, Fan; Zou, Shujuan

    2010-12-01

    The aim of this study was to investigate the association between dental and skeletal maturity. Digital panoramic radiographs and lateral skull cephalograms of 302 patients (134 boys and 168 girls, ranging from 8 to 16 years of age) were examined. Dental maturity was assessed by calcification stages of the mandibular canines, first and second premolars, and second molars, whereas skeletal maturity was estimated by the cervical vertebral maturation (CVM) stages. The Spearman rank-order correlation coefficient was used to measure the association between CVM stage and dental calcification stage of individual teeth. The mean chronologic age of girls was significantly lower than that of boys in each CVM stage. The Spearman rank-order correlation coefficients between dental maturity and cervical vertebral maturity ranged from 0.391 to 0.582 for girls and from 0.464 to 0.496 for boys, with all correlations statistically significant, indicating that the calcification stage of individual teeth was associated with cervical vertebral maturation stage. The development of the mandibular second molar in females and that of the mandibular canine in males had the strongest correlations with cervical vertebral maturity. Therefore, it is practical to consider the relationship between dental and skeletal maturity when planning orthodontic treatment. Copyright © 2010 Mosby, Inc. All rights reserved.
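
    A minimal example of the Spearman rank-order correlation used above, applied to invented dental calcification and CVM stages, is shown below.

        # Spearman rank-order correlation on hypothetical paired stage data.
        from scipy.stats import spearmanr

        dental_stage = [4, 5, 6, 6, 7, 8, 8, 9, 9, 10]   # invented calcification stages
        cvm_stage    = [1, 2, 2, 3, 3, 4, 4, 5, 5, 6]    # invented CVM stages

        rho, p = spearmanr(dental_stage, cvm_stage)
        print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")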

  15. Self-initiated actions result in suppressed auditory but amplified visual evoked components in healthy participants.

    Science.gov (United States)

    Mifsud, Nathan G; Oestreich, Lena K L; Jack, Bradley N; Ford, Judith M; Roach, Brian J; Mathalon, Daniel H; Whitford, Thomas J

    2016-05-01

    Self-suppression refers to the phenomenon that sensations initiated by our own movements are typically less salient, and elicit an attenuated neural response, compared to sensations resulting from changes in the external world. Evidence for self-suppression is provided by previous ERP studies in the auditory modality, which have found that healthy participants typically exhibit a reduced auditory N1 component when auditory stimuli are self-initiated as opposed to externally initiated. However, the literature investigating self-suppression in the visual modality is sparse, with mixed findings and experimental protocols. An EEG study was conducted to expand our understanding of self-suppression across different sensory modalities. Healthy participants experienced either an auditory (tone) or visual (pattern-reversal) stimulus following a willed button press (self-initiated), a random interval (externally initiated, unpredictable onset), or a visual countdown (externally initiated, predictable onset, to match the intrinsic predictability of self-initiated stimuli), while EEG was continuously recorded. Reduced N1 amplitudes for self- versus externally initiated tones indicated that self-suppression occurred in the auditory domain. In contrast, the visual N145 component was amplified for self- versus externally initiated pattern reversals. Externally initiated conditions did not differ as a function of their predictability. These findings highlight a difference in sensory processing of self-initiated stimuli across modalities, and may have implications for clinical disorders that are ostensibly associated with abnormal self-suppression. © 2016 Society for Psychophysiological Research.

  16. Large-scale synchronized activity during vocal deviance detection in the zebra finch auditory forebrain.

    Science.gov (United States)

    Beckers, Gabriël J L; Gahr, Manfred

    2012-08-01

    Auditory systems bias responses to sounds that are unexpected on the basis of recent stimulus history, a phenomenon that has been widely studied using sequences of unmodulated tones (mismatch negativity; stimulus-specific adaptation). Such a paradigm, however, does not directly reflect problems that neural systems normally solve for adaptive behavior. We recorded multiunit responses in the caudomedial auditory forebrain of anesthetized zebra finches (Taeniopygia guttata) at 32 sites simultaneously, to contact calls that recur probabilistically at a rate that is used in communication. Neurons in secondary, but not primary, auditory areas respond preferentially to calls when they are unexpected (deviant) compared with the same calls when they are expected (standard). This response bias is predominantly due to sites more often not responding to standard events than to deviant events. When two call stimuli alternate between standard and deviant roles, most sites exhibit a response bias to deviant events of both stimuli. This suggests that biases are not based on a use-dependent decrease in response strength but involve a more complex mechanism that is sensitive to auditory deviance per se. Furthermore, between many secondary sites, responses are tightly synchronized, a phenomenon that is driven by internal neuronal interactions rather than by the timing of stimulus acoustic features. We hypothesize that this deviance-sensitive, internally synchronized network of neurons is involved in the involuntary capturing of attention by unexpected and behaviorally potentially relevant events in natural auditory scenes.

  17. Auditory event-related potentials associated with perceptual reversals of bistable pitch motion.

    Science.gov (United States)

    Davidson, Gray D; Pitts, Michael A

    2014-01-01

    Previous event-related potential (ERP) experiments have consistently identified two components associated with perceptual transitions of bistable visual stimuli, the "reversal negativity" (RN) and the "late positive complex" (LPC). The RN (~200 ms post-stimulus, bilateral occipital-parietal distribution) is thought to reflect transitions between neural representations that form the moment-to-moment contents of conscious perception, while the LPC (~400 ms, central-parietal) is considered an index of post-perceptual processing related to accessing and reporting one's percept. To explore the generality of these components across sensory modalities, the present experiment utilized a novel bistable auditory stimulus. Pairs of complex tones with ambiguous pitch relationships were presented sequentially while subjects reported whether they perceived the tone pairs as ascending or descending in pitch. ERPs elicited by the tones were compared according to whether perceived pitch motion changed direction or remained the same across successive trials. An auditory reversal negativity (aRN) component was evident at ~170 ms post-stimulus over bilateral fronto-central scalp locations. An auditory LPC component (aLPC) was evident at subsequent latencies (~350 ms, fronto-central distribution). These two components may be auditory analogs of the visual RN and LPC, suggesting functionally equivalent but anatomically distinct processes in auditory vs. visual bistable perception.

  18. Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation

    Directory of Open Access Journals (Sweden)

    Enrique A Lopez-Poveda

    2013-07-01

    Full Text Available Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.
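
    The sketch below gives a rough, single-band illustration of the stochastic-undersampling idea; it is not the authors' vocoder. Each simulated afferent keeps samples with a probability that grows with instantaneous intensity, and several copies are averaged. Band edges, sample rate and the intensity-to-probability mapping are all assumptions.

        # Single-band stochastic-undersampling sketch inspired by the description above.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 16000
        t = np.arange(0, 0.5, 1 / fs)
        signal = np.sin(2 * np.pi * 500 * t) + 0.3 * np.random.randn(t.size)

        sos = butter(4, [300, 700], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)                  # one of the analysis bands

        def afferent_copy(x, gain=5.0):
            # Keep each sample with probability proportional to its intensity (clipped to 1).
            p = np.clip(gain * np.abs(x) / np.max(np.abs(x)), 0, 1)
            keep = np.random.rand(x.size) < p
            return np.where(keep, x, 0.0)

        n_afferents = 10                                 # fewer copies ~ more deafferentation
        reconstructed = np.mean([afferent_copy(band) for _ in range(n_afferents)], axis=0)
        print("correlation with original band:", np.corrcoef(band, reconstructed)[0, 1])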

  19. Interconnected growing self-organizing maps for auditory and semantic acquisition modeling

    Directory of Open Access Journals (Sweden)

    Mengxue Cao

    2014-03-01

    Full Text Available Based on the incremental nature of knowledge acquisition, in this study we propose a growing self-organizing neural network approach for modeling the acquisition of auditory and semantic categories. In this paper we introduce an Interconnected Growing Self-Organizing Maps (I-GSOM) algorithm, which takes associations between auditory information and semantic information into consideration. Direct phonetic-semantic association is simulated in order to model language acquisition in early phases, such as the babbling and imitation stages, in which no phonological representations exist. Based on the I-GSOM algorithm, we conducted experiments using paired acoustic and semantic training data. We use a cyclical reinforcing and reviewing training procedure to model the teaching and learning process between children and their communication partners; a reinforcing-by-link training procedure and a link-forgetting procedure are introduced to model the acquisition of associative relations between auditory and semantic information. Experimental results indicate that (1) I-GSOM has good ability to learn auditory and semantic categories presented within the training data; (2) clear auditory and semantic boundaries can be found in the network representation; (3) cyclical reinforcing and reviewing training leads to a detailed categorization as well as to a detailed clustering, while keeping the clusters that have already been learned and the network structure that has already been developed stable; and (4) reinforcing-by-link training leads to well-perceived auditory-semantic associations. Our I-GSOM model suggests that it is important to associate auditory information with semantic information during language acquisition. Despite its high level of abstraction, our I-GSOM approach can be interpreted as a biologically-inspired neurocomputational model.
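
    For orientation, the sketch below implements a plain (non-growing) self-organizing map in NumPy. It illustrates the family of algorithms that I-GSOM extends but does not implement node growth or the auditory-semantic linking described above.

        # Minimal SOM training loop: find the best-matching unit for each input and
        # pull it and its grid neighbors toward that input, with decaying learning
        # rate and neighborhood width. Data and grid size are arbitrary.
        import numpy as np

        rng = np.random.default_rng(3)
        data = rng.random((500, 2))                # toy 2-D "feature vectors"
        grid = rng.random((10, 10, 2))             # 10x10 map of weight vectors

        def train_som(grid, data, epochs=20, lr0=0.5, sigma0=3.0):
            rows, cols, _ = grid.shape
            ys, xs = np.mgrid[0:rows, 0:cols]
            for epoch in range(epochs):
                lr = lr0 * (1 - epoch / epochs)
                sigma = sigma0 * (1 - epoch / epochs) + 0.5
                for x in data:
                    # Best-matching unit: node whose weights are closest to the input.
                    d = np.linalg.norm(grid - x, axis=2)
                    bmu = np.unravel_index(np.argmin(d), d.shape)
                    # Gaussian neighborhood around the BMU pulls nearby nodes toward x.
                    dist2 = (ys - bmu[0]) ** 2 + (xs - bmu[1]) ** 2
                    h = np.exp(-dist2 / (2 * sigma ** 2))
                    grid += lr * h[..., None] * (x - grid)
            return grid

        trained = train_som(grid, data)
        print("weight range after training:", trained.min(), trained.max())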

  20. Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention

    Directory of Open Access Journals (Sweden)

    Hsin-I Liao

    2016-02-01

    Full Text Available A unique sound that deviates from a repetitive background sound induces signature neural responses, such as the mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants’ pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention.
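
    A typical pupillary-response analysis consistent with this description is sketched below: epochs are cut around oddball onsets, baseline-corrected, and averaged. The pupil trace, sampling rate and onset times are fabricated for illustration.

        # Epoching and baseline correction of a pupil-diameter trace around oddball onsets.
        import numpy as np

        fs = 60                                             # assumed eye-tracker rate (Hz)
        pupil = 3.0 + 0.05 * np.random.randn(60 * fs)       # 60 s of fake pupil diameter (mm)
        oddball_onsets = np.array([5, 12, 20, 31, 44, 52])  # onset times in seconds

        pre, post = int(0.5 * fs), int(4 * fs)              # 0.5 s baseline, 4 s response window
        epochs = []
        for onset in (oddball_onsets * fs).astype(int):
            seg = pupil[onset - pre: onset + post]
            epochs.append(seg - seg[:pre].mean())           # baseline-correct each epoch
        pdr = np.mean(epochs, axis=0)                       # average pupillary response
        print("peak dilation (mm):", pdr.max())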

  1. Human Pupillary Dilation Response to Deviant Auditory Stimuli: Effects of Stimulus Properties and Voluntary Attention.

    Science.gov (United States)

    Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto

    2016-01-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties and irrespective whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiments 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to the concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention.

  2. Functional MR imaging of cerebral auditory cortex with linguistic and non-linguistic stimulation: preliminary study

    International Nuclear Information System (INIS)

    Kang, Su Jin; Kim, Jae Hyoung; Shin, Tae Min

    1999-01-01

    To obtain preliminary data for understanding the central auditory neural pathway by means of functional MR imaging (fMRI) of the cerebral auditory cortex during linguistic and non-linguistic auditory stimulation. In three right-handed volunteers we conducted fMRI of auditory cortex stimulation at 1.5 T using a conventional gradient-echo technique (TR/TE/flip angle: 80/60/40 deg). Using a pulsed tone of 1000 Hz and speech as non-linguistic and linguistic auditory stimuli, respectively, images, including those of the superior temporal gyrus of both hemispheres, were obtained in sagittal planes. Both stimuli were separately delivered binaurally or monoaurally through a plastic earphone. Activation maps were generated by processing the images with homemade software. In order to analyze patterns of auditory cortex activation according to type of stimulus and which side of the ear was stimulated, the number and extent of activated pixels were compared between both temporal lobes. Binaural stimulation led to bilateral activation of the superior temporal gyrus, while monoaural stimulation led to more activation in the contralateral temporal lobe than in the ipsilateral. A trend toward slight activation of the left (dominant) temporal lobe in ipsilateral stimulation, particularly with a linguistic stimulus, was observed. During both binaural and monoaural stimulation, a linguistic stimulus produced more widespread activation than did a non-linguistic one. The superior temporal gyri of both temporal lobes are associated with acoustic-phonetic analysis, and the left (dominant) superior temporal gyrus is likely to play a dominant role in this processing. For better understanding of physiological and pathological central auditory pathways, further investigation is needed.
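
    Hemispheric comparisons of activated-pixel counts are often summarized with a laterality index; the sketch below shows one common formulation with invented counts (the study itself may have used a different summary).

        # Laterality index LI = (L - R) / (L + R) on hypothetical activated-pixel counts.
        left_pixels, right_pixels = 148, 95
        li = (left_pixels - right_pixels) / (left_pixels + right_pixels)
        print(f"laterality index: {li:+.2f}  (positive = more left-hemisphere activation)")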

  3. Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation

    Science.gov (United States)

    Lopez-Poveda, Enrique A.; Barrios, Pablo

    2013-01-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176

  4. Mature Cystic Renal Teratoma

    International Nuclear Information System (INIS)

    Yavuz, Alpaslan; Ceken, Kagan; Alimoglu, Emel; Akkaya, Bahar

    2014-01-01

    Teratomas are rare germline tumors that originate from one or more embryonic germ cell layers. Teratoma of the kidney is extremely rare, and fewer than 30 cases of primary intrarenal teratomas have been published to date. We report the main radiologic features of an unusual case of mature cystic teratoma arising from the left kidney in a two-year-old boy. A left-sided abdominal mass was detected on physical examination, and B-mode ultrasound (US) examination revealed a heterogeneous mass with a central cystic component. Computed tomography (CT) demonstrated a lobulated, heterogeneous, hypodense mass extending craniocaudally from the splenic hilum to the level of the left iliac fossa. Nephrectomy was performed and a large, fatty mass arising from the left kidney was excised. The final pathologic diagnosis was confirmed as cystic renal teratoma.

  5. Laterality of basic auditory perception.

    Science.gov (United States)

    Sininger, Yvonne S; Bhatara, Anjali

    2012-01-01

    Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.

  6. Random Gap Detection Test (RGDT) performance of individuals with central auditory processing disorders from 5 to 25 years of age.

    Science.gov (United States)

    Dias, Karin Ziliotto; Jutras, Benoît; Acrani, Isabela Olszanski; Pereira, Liliane Desgualdo

    2012-02-01

    The aim of the present study was to assess the auditory temporal resolution ability in individuals with central auditory processing disorders, to examine the maturation effect and to investigate the relationship between performance on a temporal resolution test and performance on other central auditory tests. Participants were divided into two groups: 131 with Central Auditory Processing Disorder and 94 with normal auditory processing. They had pure-tone air-conduction thresholds no poorer than 15 dB HL bilaterally, normal admittance measures and presence of acoustic reflexes. Also, they were assessed with a central auditory test battery. Participants who failed one or more tests were included in the Central Auditory Processing Disorder group and those in the control group obtained normal performance on all tests. Following the auditory processing assessment, the Random Gap Detection Test was administered to the participants. A three-way ANOVA was performed. Correlation analyses were also done among the four Random Gap Detection Test subtests as well as between Random Gap Detection Test data and the other auditory processing test results. There was a significant difference between the age-group performances in children with and without Central Auditory Processing Disorder. Also, 48% of children with Central Auditory Processing Disorder failed the Random Gap Detection Test and the percentage decreased as a function of age. The highest percentage (86%) was found in the 5-6 year-old children. Furthermore, results revealed a strong significant correlation between the four Random Gap Detection Test subtests. There was a modest correlation between the Random Gap Detection Test results and the dichotic listening tests. No significant correlation was observed between the Random Gap Detection Test data and the results of the other tests in the battery. Random Gap Detection Test should not be administered to children younger than 7 years old because
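
    For illustration, the sketch below generates a generic gap-detection stimulus: two noise bursts separated by a silent gap. The durations and sample rate are assumptions, not the RGDT's actual parameters.

        # Generic gap-in-noise stimulus: burst, silent gap of variable duration, burst.
        import numpy as np

        fs = 44100
        def gap_stimulus(gap_ms, burst_ms=17, ramp_ms=1):
            burst = np.random.randn(int(fs * burst_ms / 1000))
            ramp = np.linspace(0, 1, int(fs * ramp_ms / 1000))
            burst[:ramp.size] *= ramp                 # onset/offset ramps to avoid clicks
            burst[-ramp.size:] *= ramp[::-1]
            gap = np.zeros(int(fs * gap_ms / 1000))
            return np.concatenate([burst, gap, burst])

        stim = gap_stimulus(gap_ms=5)
        print("stimulus duration (ms):", 1000 * stim.size / fs)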

  7. Developing maturity grids for assessing organisational capabilities

    DEFF Research Database (Denmark)

    Maier, Anja; Moultrie, James; Clarkson, P John

    2009-01-01

    Keywords: Maturity Model, Maturity Grid, Maturity Matrix, Organisational Capabilities, Benchmarking, New Product Development, Performance Assessment

  8. The Effect of Working Memory Training on Auditory Stream Segregation in Auditory Processing Disorders Children

    OpenAIRE

    Abdollah Moossavi; Saeideh Mehrkian; Yones Lotfi; Soghrat Faghih zadeh; Hamed Adjedi

    2015-01-01

    Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in auditory processing disorders children. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...

  9. Modeling non-maturing liabilities

    OpenAIRE

    von Feilitzen, Helena

    2011-01-01

    Non‐maturing liabilities, such as savings accounts, lack both predetermined maturity and reset dates due to the fact that the depositor is free to withdraw funds at any time and that the depository institution is free to change the rate. These attributes complicate the risk management of such products and no standardized solution exists. The problem is important however since non‐maturing liabilities typically make up a considerable part of the funding of a bank. In this report different mode...

  10. Developmental profiles of the intrinsic properties and synaptic function of auditory neurons in preterm and term baboon neonates.

    Science.gov (United States)

    Kim, Sei Eun; Lee, Seul Yi; Blanco, Cynthia L; Kim, Jun Hee

    2014-08-20

    The human fetus starts to hear and undergoes major developmental changes in the auditory system during the third trimester of pregnancy. Although there are significant data regarding development of the auditory system in rodents, changes in intrinsic properties and synaptic function of auditory neurons in the developing primate brain at hearing onset are poorly understood. We performed whole-cell patch-clamp recordings of principal neurons in the medial nucleus of trapezoid body (MNTB) in preterm and term baboon brainstem slices to study the structural and functional maturation of auditory synapses. Each MNTB principal neuron received an excitatory input from a single calyx of Held terminal, and this one-to-one pattern of innervation was already formed in preterm baboons delivered at 67% of normal gestation. There was no difference in frequency or amplitude of spontaneous excitatory postsynaptic currents between preterm and term MNTB neurons. In contrast, the frequency of spontaneous GABA(A)/glycine receptor-mediated inhibitory postsynaptic currents, which were prevalent in preterm MNTB neurons, was significantly reduced in term MNTB neurons. Preterm MNTB neurons had a higher input resistance than term neurons and fired in bursts, whereas term MNTB neurons fired a single action potential in response to suprathreshold current injection. The maturation of intrinsic properties and dominance of excitatory inputs in the primate MNTB allow it to take on its mature role as a fast and reliable relay synapse. Copyright © 2014 the authors 0270-6474/14/3411399-06$15.00/0.

  11. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Ideally, there are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit with Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to the fact that neural networks are implemented as computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  12. Neurovascular Saturation Thresholds Under High Intensity Auditory Stimulation During Wake

    Science.gov (United States)

    Schei, Jennifer L.; Van Nortwick, Amy S.; Meighan, Peter C.; Rector, David M.

    2012-01-01

    Coupling between neural activity and hemodynamic responses is important in understanding brain function, interpreting brain imaging signals, and assessing pathological conditions. Tissue state is a major factor in neurovascular coupling and may alter the relationship between neural and hemodynamic activity. However, most neurovascular coupling studies are performed under anesthetized or sedated states which may have severe consequences on coupling mechanisms. Our previous studies showed that following prolonged periods of sleep deprivation, evoked hemodynamic responses were muted despite consistent electrical responses, suggesting that sustained neural activity may decrease vascular compliance and limit blood perfusion. To investigate potential perfusion limitations during natural waking conditions, we simultaneously measured evoked response potentials (ERPs) and evoked hemodynamic responses using optical imaging techniques to increasing intensity auditory stimulation. The relationship between evoked hemodynamic responses and integrated ERPs followed a sigmoid relationship where the hemodynamic response approached saturation at lower stimulus intensities than the ERP. If limits in blood perfusion are caused by stretching of the vessel wall, then these results suggest there may be decreased vascular compliance due to sustained neural activity during wake, which could limit vascular responsiveness and local blood perfusion. Conditions that stress cerebral vasculature, such as sleep deprivation and some pathologies (e.g., epilepsy), may further decrease vascular compliance, limit metabolic delivery, and cause tissue trauma. While ERPs and evoked hemodynamic responses provide an indication of the correlated neural activity and metabolic demand, the relationship between these two responses is complex and the different measurement techniques are not directly correlated. Future studies are required to verify these findings and further explore neurovascular coupling during
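
    The sigmoid relationship described above can be quantified by fitting a logistic function; the sketch below does so with SciPy on fabricated ERP and hemodynamic data points.

        # Logistic fit of evoked hemodynamic response (y) vs. integrated ERP magnitude (x).
        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(x, top, slope, x50):
            return top / (1 + np.exp(-slope * (x - x50)))

        erp = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])            # integrated ERP (a.u.)
        hemo = np.array([0.08, 0.15, 0.35, 0.60, 0.78, 0.86, 0.90, 0.91])   # hemodynamic (a.u.)

        params, _ = curve_fit(sigmoid, erp, hemo, p0=[1.0, 2.0, 2.0])
        top, slope, x50 = params
        print(f"fitted saturation level: {top:.2f}, half-maximum at ERP = {x50:.2f}")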

  13. Selective Attention to Visual Stimuli Using Auditory Distractors Is Altered in Alpha-9 Nicotinic Receptor Subunit Knock-Out Mice.

    Science.gov (United States)

    Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H

    2016-07-06

    During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear system. It has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear

  14. Evolvable synthetic neural system

    Science.gov (United States)

    Curtis, Steven A. (Inventor)

    2009-01-01

    An evolvable synthetic neural system includes an evolvable neural interface operably coupled to at least one neural basis function. Each neural basis function includes an evolvable neural interface operably coupled to a heuristic neural system to perform high-level functions and an autonomic neural system to perform low-level functions. In some embodiments, the evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy.

  15. Functional mapping of the primate auditory system.

    Science.gov (United States)

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  16. Auditory Modeling for Noisy Speech Recognition

    National Research Council Canada - National Science Library

    2000-01-01

    ... digital filtering for noise cancellation which interfaces to speech recognition software. It uses auditory features in speech recognition training, and provides applications to multilingual spoken language translation...

  17. Auditory prediction during speaking and listening.

    Science.gov (United States)

    Sato, Marc; Shiller, Douglas M

    2018-02-02

    In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually-cued conditions, reduced N1/P2 amplitude was observed during speaking vs. listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Human Factors Military Lexicon: Auditory Displays

    National Research Council Canada - National Science Library

    Letowski, Tomasz

    2001-01-01

    .... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...

  19. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
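
    For readers unfamiliar with the Bland-Altman agreement analysis mentioned above, the sketch below shows how the bias and 95% limits of agreement are typically computed; the paired scores are invented placeholders, not the study's data.

```python
# Hedged illustration of a Bland-Altman agreement analysis on made-up paired
# scores (hypothetical auditory vs. visual memory scores per child).
import numpy as np

auditory = np.array([8, 7, 9, 6, 8, 7, 9, 8, 6, 7], dtype=float)  # hypothetical
visual   = np.array([6, 6, 8, 5, 7, 6, 8, 7, 5, 6], dtype=float)  # hypothetical

means = (auditory + visual) / 2.0          # x-axis of a Bland-Altman plot
diffs = auditory - visual                  # y-axis: per-child difference

bias = diffs.mean()                        # mean difference between modalities
sd = diffs.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"bias = {bias:.2f}, limits of agreement = [{loa_low:.2f}, {loa_high:.2f}]")
```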

  20. Hierarchical differences in population coding within auditory cortex.

    Science.gov (United States)

    Downer, Joshua D; Niwa, Mamiko; Sutter, Mitchell L

    2017-08-01

    Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (r_noise) between simultaneously recorded neurons and found that whereas engagement decreased average r_noise in A1, engagement increased average r_noise in ML. This finding surprised us, because attentive states are commonly reported to decrease average r_noise. We analyzed the effect of r_noise on AM coding in both A1 and ML and found that whereas engagement-related shifts in r_noise in A1 enhance AM coding, r_noise shifts in ML have little effect. These results imply that the effect of r_noise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing r_noise. Therefore, the hierarchical emergence of r_noise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures. An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their
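
    As a rough illustration of the noise-correlation measure (r_noise) used above, the sketch below applies the standard computation to simulated spike counts: correlate the trial-by-trial counts of two simultaneously recorded neurons within each stimulus condition, then average across conditions. This is not the authors' analysis code.

```python
# Noise correlation on simulated spike counts: Pearson r of trial-by-trial
# counts for a fixed stimulus condition, averaged over conditions.
import numpy as np

rng = np.random.default_rng(0)
n_conditions, n_trials = 5, 60

r_noise_per_condition = []
for _ in range(n_conditions):
    shared = rng.normal(0, 1, n_trials)                 # common trial-to-trial noise
    counts_a = rng.poisson(10, n_trials) + 2 * shared   # neuron A spike counts
    counts_b = rng.poisson(12, n_trials) + 2 * shared   # neuron B spike counts
    r = np.corrcoef(counts_a, counts_b)[0, 1]           # Pearson r for this condition
    r_noise_per_condition.append(r)

print("average r_noise:", np.mean(r_noise_per_condition))
```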

  1. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    Science.gov (United States)

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.

  2. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre

  3. Encoding of Sucrose's Palatability in the Nucleus Accumbens Shell and Its Modulation by Exteroceptive Auditory Cues

    Directory of Open Access Journals (Sweden)

    Miguel Villavicencio

    2018-05-01

    Full Text Available Although the palatability of sucrose is the primary reason why it is overconsumed, it is not well understood how palatability is encoded in the nucleus accumbens shell (NAcSh), a brain region involved in reward, feeding, and sensory/motor transformations. Similarly untouched are issues regarding how an external auditory stimulus affects sucrose palatability and, in the NAcSh, the neuronal correlates of this behavior. To address these questions in behaving rats, we investigated how food-related auditory cues modulate sucrose's palatability. The goals are to determine whether NAcSh neuronal responses would track sucrose's palatability (as measured by the increase in hedonically positive oromotor responses, i.e., lick rate) and sucrose concentration, and how the NAcSh processes auditory information. Using brief-access tests, we found that sucrose's palatability was enhanced by exteroceptive auditory cues that signal the start and the end of a reward epoch. With only the start cue, the rejection of water was accelerated and the sucrose/water ratio was enhanced, indicating greater palatability. However, the start cue also fragmented licking patterns and decreased caloric intake. In the presence of both start and stop cues, the animals fed continuously and increased their caloric intake. Analysis of the licking microstructure confirmed that auditory cues (either signaling the start alone or start/stop) enhanced sucrose's oromotor-palatability responses. Recordings of extracellular single-unit activity identified several distinct populations of NAcSh responses that tracked either the sucrose palatability responses or the sucrose concentrations by increasing or decreasing their activity. Another neural population fired synchronously with licking and exhibited an enhancement in their coherence with increasing sucrose concentrations. The population of NAcSh Palatability-related and Lick-Inactive neurons was the most important for decoding sucrose's palatability. Only the Lick

  4. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment.

    Science.gov (United States)

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results, we developed a novel treatment strategy for tonal tinnitus: tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over the long term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy

  5. Whose Maturity is it Anyway?

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi; Mukkamala, Raghava Rao

    2017-01-01

    This paper presents results from an ongoing empirical study that seeks to understand the influence of different quantitative methods on the design and assessment of maturity models. Although there have been many academic publications on maturity models, there exists a significant lack of understa...

  6. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  7. Development of Brainstem-Evoked Responses in Congenital Auditory Deprivation

    Directory of Open Access Journals (Sweden)

    J. Tillein

    2012-01-01

    Full Text Available To compare the development of the auditory system in hearing and completely acoustically deprived animals, naive congenitally deaf white cats (CDCs) and hearing controls (HCs) were investigated at different developmental stages from birth to adulthood. The CDCs had no hearing experience before the acute experiment. In both groups of animals, responses to cochlear implant stimulation were acutely assessed. Electrically evoked auditory brainstem responses (E-ABRs) were recorded with monopolar stimulation at different current levels. CDCs demonstrated extensive development of E-ABRs, from the first signs of responses at postnatal (p.n.) day 3, through the appearance of all waves of the brainstem response at day 8 p.n., to mature responses around day 90 p.n. Wave I of the E-ABRs could not be distinguished from the artifact in the majority of CDCs, whereas in HCs it was clearly separated from the stimulus artifact. Waves II, III, and IV demonstrated higher thresholds in CDCs, whereas this difference was not found for wave V. Amplitudes of wave III were significantly higher in HCs, whereas wave V amplitudes were significantly higher in CDCs. No differences in latencies were observed between the animal groups. These data demonstrate significant postnatal subcortical development in the absence of hearing, and also divergent effects of deafness on early waves II–IV and wave V of the E-ABR.

  8. Three-dimensional Acoustic Localisation via Directed Movements of a Two-dimensional Model of the Lizard Peripheral Auditory System

    DEFF Research Database (Denmark)

    Shaikh, Danish; Kjær Schmidt, Michael

    2017-01-01

    of the acoustic target with respect to one plane of rotation. A multi-layer perceptron neural network is trained via supervised learning to translate the combination of the two measurements into an estimate of the relative location of the acoustic target in terms of its azimuth and elevation. The acoustic...... localisation performance of the system is evaluated in simulation for noiseless as well as noisy sinusoidal auditory signals with a 20 dB signal-to-noise ratio for four different sound frequencies of 1450 Hz, 1650 Hz, 1850 Hz and 2050 Hz that span the response frequency range of the peripheral auditory model...
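
    A minimal stand-in for the supervised mapping described above (two peripheral measurements in, azimuth and elevation out) might look as follows; the synthetic "measurements" are invented for demonstration and do not model the lizard peripheral auditory system.

```python
# Small multi-layer perceptron trained (supervised) to map two measurements to
# an azimuth/elevation estimate, on synthetic data invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2000
azimuth = rng.uniform(-90, 90, n)      # degrees
elevation = rng.uniform(-45, 45, n)    # degrees

# Two hypothetical direction-dependent measurements, lightly corrupted by noise.
m1 = np.sin(np.radians(azimuth)) + 0.05 * rng.normal(size=n)
m2 = np.sin(np.radians(elevation)) + 0.05 * rng.normal(size=n)

X = np.column_stack([m1, m2])
y = np.column_stack([azimuth, elevation])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(X[:1500], y[:1500])                       # supervised training split
pred = net.predict(X[1500:])                      # held-out evaluation split
print("mean abs error (deg):", np.abs(pred - y[1500:]).mean(axis=0))
```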

  9. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  10. Adipose-derived stromal cells enhance auditory neuron survival in an animal model of sensory hearing loss.

    Science.gov (United States)

    Schendzielorz, Philipp; Vollmer, Maike; Rak, Kristen; Wiegner, Armin; Nada, Nashwa; Radeloff, Katrin; Hagen, Rudolf; Radeloff, Andreas

    2017-10-01

    A cochlear implant (CI) is an electronic prosthesis that can partially restore speech perception capabilities. Optimum information transfer from the cochlea to the central auditory system requires a proper functioning auditory nerve (AN) that is electrically stimulated by the device. In deafness, the lack of neurotrophic support, normally provided by the sensory cells of the inner ear, however, leads to gradual degeneration of auditory neurons with undesirable consequences for CI performance. We evaluated the potential of adipose-derived stromal cells (ASCs) that are known to produce neurotrophic factors to prevent neural degeneration in sensory hearing loss. For this, co-cultures of ASCs with auditory neurons have been studied, and autologous ASC transplantation has been performed in a guinea pig model of gentamicin-induced sensory hearing loss. In vitro ASCs were neuroprotective and considerably increased the neuritogenesis of auditory neurons. In vivo transplantation of ASCs into the scala tympani resulted in an enhanced survival of auditory neurons. Specifically, peripheral AN processes that are assumed to be the optimal activation site for CI stimulation and that are particularly vulnerable to hair cell loss showed a significantly higher survival rate in ASC-treated ears. ASC transplantation into the inner ear may restore neurotrophic support in sensory hearing loss and may help to improve CI performance by enhanced AN survival. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  11. Effectiveness of auditory and tactile crossmodal cues in a dual-task visual and auditory scenario.

    Science.gov (United States)

    Hopkins, Kevin; Kass, Steven J; Blalock, Lisa Durrance; Brill, J Christopher

    2017-05-01

    In this study, we examined how spatially informative auditory and tactile cues affected participants' performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual-auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality. Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

  12. Therapeutic potential of stem cells in auditory hair cell repair

    Directory of Open Access Journals (Sweden)

    Ryuji Hata

    2009-01-01

    Full Text Available The prevalence of acquired hearing loss is very high. About 10% of the total population and more than one third of the population over 65 years suffer from debilitating hearing loss. The most common type of hearing loss in adults is idiopathic sudden sensorineural hearing loss (ISSHL). In the majority of cases, ISSHL is permanent and typically associated with loss of sensory hair cells in the organ of Corti. Following the loss of sensory hair cells, the auditory neurons undergo secondary degeneration. Sensory hair cells and auditory neurons do not regenerate throughout life, and loss of these cells is irreversible and cumulative. However, recent advances in stem cell biology have raised hope that stem cell therapy is coming closer to regenerating sensory hair cells in humans. A major advance in the prospects for the use of stem cells to restore normal hearing comes with the recent discovery that hair cells can be generated ex vivo from embryonic stem (ES) cells, adult inner ear stem cells, and neural stem cells. Furthermore, there is increasing evidence that stem cells can promote damaged cell repair in part by secreting diffusible molecules such as growth factors. These results suggest that stem-cell-based treatment regimens can be applicable to the damaged inner ear in future clinical applications. Previously, we established an animal model of cochlear ischemia in gerbils and showed progressive hair cell loss up to 4 days after ischemia. Auditory brainstem response (ABR) recordings have demonstrated that this gerbil model displays severe deafness just after cochlear ischemia and gradually recovers thereafter. These pathological findings and clinical manifestations are reminiscent of ISSHL in humans. In this study, we have shown the effectiveness of stem cell therapy by using this animal model of ISSHL.

  13. Response properties of the refractory auditory nerve fiber.

    Science.gov (United States)

    Miller, C A; Abbas, P J; Robinson, B K

    2001-09-01

    The refractory characteristics of auditory nerve fibers limit their ability to accurately encode temporal information. Therefore, they are relevant to the design of cochlear prostheses. It is also possible that the refractory property could be exploited by prosthetic devices to improve information transfer, as refractoriness may enhance the nerve's stochastic properties. Furthermore, refractory data are needed for the development of accurate computational models of auditory nerve fibers. We applied a two-pulse forward-masking paradigm to a feline model of the human auditory nerve to assess refractory properties of single fibers. Each fiber was driven to refractoriness by a single (masker) current pulse delivered intracochlearly. Properties of firing efficiency, latency, jitter, spike amplitude, and relative spread (a measure of dynamic range and stochasticity) were examined by exciting fibers with a second (probe) pulse and systematically varying the masker-probe interval (MPI). Responses to monophasic cathodic current pulses were analyzed. We estimated the mean absolute refractory period to be about 330 µs and the mean recovery time constant to be about 410 µs. A significant proportion of fibers (13 of 34) responded to the probe pulse with MPIs as short as 500 µs. Spike amplitude decreased with decreasing MPI, a finding relevant to the development of computational nerve-fiber models, interpretation of gross evoked potentials, and models of more central neural processing. A small mean decrement in spike jitter was noted at small MPI values. Some trends (such as spike latency vs. MPI) varied across fibers, suggesting that sites of excitation varied across fibers. Relative spread was found to increase with decreasing MPI values, providing direct evidence that stochastic properties of fibers are altered under conditions of refractoriness.
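
    Using the mean values reported above (absolute refractory period ~330 µs, recovery time constant ~410 µs), a commonly assumed single-exponential recovery function can be evaluated as below; the functional form is a modeling convention, not a formula taken from the paper.

```python
# Threshold elevation of a probe pulse as a function of masker-probe interval,
# under a single-exponential refractory recovery assumption.
import numpy as np

T_ABS = 330e-6   # absolute refractory period (s), mean value from the abstract
TAU   = 410e-6   # recovery time constant (s), mean value from the abstract

def relative_threshold(mpi_s):
    """Probe threshold relative to the unmasked threshold vs. masker-probe interval."""
    mpi_s = np.atleast_1d(np.asarray(mpi_s, dtype=float))
    out = np.full(mpi_s.shape, np.inf)          # no response within the absolute refractory period
    ok = mpi_s > T_ABS
    out[ok] = 1.0 / (1.0 - np.exp(-(mpi_s[ok] - T_ABS) / TAU))
    return out

mpis_us = np.array([500, 700, 1000, 2000, 5000])
for mpi, thr in zip(mpis_us, relative_threshold(mpis_us * 1e-6)):
    print(f"MPI = {mpi} us -> threshold elevation factor {thr:.2f}")
```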

  14. Cortical neurogenesis in adult rats after ischemic brain injury: most new neurons fail to mature

    Directory of Open Access Journals (Sweden)

    Qing-quan Li

    2015-01-01

    Full Text Available The present study examines the hypothesis that endogenous neural progenitor cells isolated from the neocortex of ischemic brain can differentiate into neurons or glial cells and contribute to neural regeneration. We performed middle cerebral artery occlusion to establish a model of cerebral ischemia/reperfusion injury in adult rats. Immunohistochemical staining of the cortex 1, 3, 7, 14 or 28 days after injury revealed that neural progenitor cells double-positive for nestin and sox-2 appeared in the injured cortex 1 and 3 days post-injury, and were also positive for glial fibrillary acidic protein. New neurons were labeled using bromodeoxyuridine and different stages of maturity were identified using doublecortin, microtubule-associated protein 2 and neuronal nuclei antigen immunohistochemistry. Immature new neurons coexpressing doublecortin and bromodeoxyuridine were observed in the cortex at 3 and 7 days post-injury, and semi-mature and mature new neurons double-positive for microtubule-associated protein 2 and bromodeoxyuridine were found at 14 days post-injury. A few mature new neurons coexpressing neuronal nuclei antigen and bromodeoxyuridine were observed in the injured cortex 28 days post-injury. Glial fibrillary acidic protein/bromodeoxyuridine double-positive astrocytes were also found in the injured cortex. Our findings suggest that neural progenitor cells are present in the damaged cortex of adult rats with cerebral ischemic brain injury, and that they differentiate into astrocytes and immature neurons, but most neurons fail to reach the mature stage.

  15. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research ( Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M[subscript age] = 53.19 years)…

  16. Auditory and visual evoked potentials during hyperoxia

    Science.gov (United States)

    Smith, D. B. D.; Strawbridge, P. J.

    1974-01-01

    Experimental study of the auditory and visual averaged evoked potentials (AEPs) recorded during hyperoxia, and investigation of the effect of hyperoxia on the so-called contingent negative variation (CNV). No effect of hyperoxia was found on the auditory AEP, the visual AEP, or the CNV. Comparisons with previous studies are discussed.

  17. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  18. Bilateral duplication of the internal auditory canal

    International Nuclear Information System (INIS)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu; Koo, Ja-Won

    2007-01-01

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  19. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  20. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    Science.gov (United States)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
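
    As a toy illustration of the simplest classifier family mentioned above, the sketch below implements a minimum-distance classifier over per-frame feature vectors; the feature values are random stand-ins rather than the modulation, spectral, harmonicity, onset, and rhythm features actually used.

```python
# Minimum-distance classification of feature vectors into the four sound
# classes named above, using synthetic stand-in features.
import numpy as np

CLASSES = ["clean speech", "speech in noise", "noise", "music"]
rng = np.random.default_rng(2)

# Hypothetical training data: 100 labelled 5-dimensional feature vectors per class.
centroids = rng.normal(0, 3, size=(len(CLASSES), 5))
train = {c: centroids[i] + rng.normal(0, 1, size=(100, 5)) for i, c in enumerate(CLASSES)}

# "Training" a minimum-distance classifier amounts to storing each class mean.
class_means = {c: x.mean(axis=0) for c, x in train.items()}

def classify(feature_vector: np.ndarray) -> str:
    dists = {c: np.linalg.norm(feature_vector - m) for c, m in class_means.items()}
    return min(dists, key=dists.get)

test_vector = centroids[3] + rng.normal(0, 1, size=5)   # drawn near the "music" centroid
print(classify(test_vector))
```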

  1. Probing the lifetimes of auditory novelty detection processes.

    Science.gov (United States)

    Pegado, Felipe; Bekinschtein, Tristan; Chausson, Nicolas; Dehaene, Stanislas; Cohen, Laurent; Naccache, Lionel

    2010-08-01

    Auditory novelty detection can be fractionated into multiple cognitive processes associated with their respective neurophysiological signatures. In the present study we used high-density scalp event-related potentials (ERPs) during an active version of the auditory oddball paradigm to explore the lifetimes of these processes by varying the stimulus onset asynchrony (SOA). We observed that early MMN (90-160 ms) decreased when the SOA increased, confirming the evanescence of this echoic memory system. Subsequent neural events including late MMN (160-220 ms) and P3a/P3b components of the P3 complex (240-500 ms) did not decay with SOA, but showed a systematic delay effect supporting a two-stage model of accumulation of evidence. On the basis of these observations, we propose a distinction within the MMN complex of two distinct events: (1) an early, pre-attentive and fast-decaying MMN associated with generators located within superior temporal gyri (STG) and frontal cortex, and (2) a late MMN more resistant to SOA, corresponding to the activation of a distributed cortical network including fronto-parietal regions. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  2. A dominance hierarchy of auditory spatial cues in barn owls.

    Directory of Open Access Journals (Sweden)

    Ilana B Witten

    2010-04-01

    Full Text Available Barn owls integrate spatial information across frequency channels to localize sounds in space. We presented barn owls with synchronous sounds that contained different bands of frequencies (3-5 kHz and 7-9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, the superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low frequency cues dominate the representation of sound azimuth in the OT space map, whereas high frequency cues dominate the representation of sound elevation. We argue that the dominance hierarchy of localization cues reflects several factors: (1) the relative amplitude of the sound providing the cue, (2) the resolution with which the auditory system measures the value of a cue, and (3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans.

  3. Towards a neural basis of music perception.

    Science.gov (United States)

    Koelsch, Stefan; Siebel, Walter A

    2005-12-01

    Music perception involves complex brain functions underlying acoustic analysis, auditory memory, auditory scene analysis, and processing of musical syntax and semantics. Moreover, music perception potentially affects emotion, influences the autonomic nervous system, the hormonal and immune systems, and activates (pre)motor representations. During the past few years, research activities on different aspects of music processing and their neural correlates have rapidly progressed. This article provides an overview of recent developments and a framework for the perceptual side of music processing. This framework lays out a model of the cognitive modules involved in music perception, and incorporates information about the time course of activity of some of these modules, as well as research findings about where in the brain these modules might be located.

  4. Musicians' Enhanced Neural Differentiation of Speech Sounds Arises Early in Life: Developmental Evidence from Ages 3 to 30

    Science.gov (United States)

    Strait, Dana L.; O'Connell, Samantha; Parbery-Clark, Alexandra; Kraus, Nina

    2014-01-01

    The perception and neural representation of acoustically similar speech sounds underlie language development. Music training hones the perception of minute acoustic differences that distinguish sounds; this training may generalize to speech processing given that adult musicians have enhanced neural differentiation of similar speech syllables compared with nonmusicians. Here, we asked whether this neural advantage in musicians is present early in life by assessing musically trained and untrained children as young as age 3. We assessed auditory brainstem responses to the speech syllables /ba/ and /ga/ as well as auditory and visual cognitive abilities in musicians and nonmusicians across 3 developmental time-points: preschoolers, school-aged children, and adults. Cross-phase analyses objectively measured the degree to which subcortical responses differed to these speech syllables in musicians and nonmusicians for each age group. Results reveal that musicians exhibit enhanced neural differentiation of stop consonants early in life and with as little as a few years of training. Furthermore, the extent of subcortical stop consonant distinction correlates with auditory-specific cognitive abilities (i.e., auditory working memory and attention). Results are interpreted according to a corticofugal framework for auditory learning in which subcortical processing enhancements are engendered by strengthened cognitive control over auditory function in musicians. PMID:23599166
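
    A much-simplified stand-in for the cross-phase comparison described above is sketched below: the phase difference between two evoked responses is estimated from their cross-spectrum within a frequency band. The signals are synthetic, and this is not the authors' cross-phaseogram pipeline.

```python
# Band-limited phase-difference estimate between two synthetic "responses"
# via the cross-spectrum.
import numpy as np

fs = 2000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.2, 1 / fs)                 # 200 ms of "response"
rng = np.random.default_rng(3)

resp_ba = np.sin(2 * np.pi * 300 * t) + 0.2 * rng.normal(size=t.size)
resp_ga = np.sin(2 * np.pi * 300 * t + 0.6) + 0.2 * rng.normal(size=t.size)  # phase-shifted

spec_ba, spec_ga = np.fft.rfft(resp_ba), np.fft.rfft(resp_ga)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

cross = spec_ba * np.conj(spec_ga)            # cross-spectrum
band = (freqs >= 200) & (freqs <= 400)        # band of interest
phase_diff = np.angle(cross[band])

print("mean |phase difference| in 200-400 Hz band (rad):", np.abs(phase_diff).mean())
```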

  5. Temporal auditory processing in elders

    Directory of Open Access Journals (Sweden)

    Azzolini, Vanuza Conceição

    2010-03-01

    Full Text Available Introduction: In the process of aging, all the structures of the organism are modified, affecting the quality of hearing and of comprehension. The hearing loss that occurs as a consequence of this process reduces communicative function and also leads to withdrawal from social relationships. Objective: To compare the performance of temporal auditory processing between elderly individuals with and without hearing loss. Method: The present study is a prospective, cross-sectional, diagnostic field study. Twenty-one elders (16 women and 5 men), aged 60 to 81 years, were analyzed and divided into two groups: a group "without hearing loss" (n = 13), with normal auditory thresholds or hearing loss restricted to isolated frequencies, and a group "with hearing loss" (n = 8), with sensorineural hearing loss of degree varying from mild to moderately severe. Both groups performed the frequency (PPS) and duration (DPS) pattern tests, to evaluate the ability of temporal sequencing, and the Random Gap Detection Test (RGDT), to evaluate temporal resolution. Results: There was no statistically significant difference between the groups on the DPS and RGDT tests. The ability of temporal sequencing was significantly better in the group without hearing loss when evaluated by the PPS test in the "muttering" condition. This result increased significantly in parallel with the increase in age group. Conclusion: There was no difference in temporal auditory processing in the comparison between the groups.

  6. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  7. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Full Text Available Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  8. Auditory-motor interactions in pediatric motor speech disorders: neurocomputational modeling of disordered development.

    Science.gov (United States)

    Terband, H; Maassen, B; Guenther, F H; Brumberg, J

    2014-01-01

    Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. The reader will be able to: (1) identify the difficulties in studying disordered speech motor development; (2) describe the differences in speech motor characteristics between SSD and subtype CAS; (3) describe the different types of learning that occur in the sensory-motor system during babbling and early speech acquisition; (4) identify the neural control subsystems involved in speech production; (5) describe the potential role of auditory self-monitoring in developmental speech disorders. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis.

    Science.gov (United States)

    Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E

    2016-01-01

    Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distracters are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distracters. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN.
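
    For orientation, the sketch below shows a generic inverse-variance (fixed-effect) pooling of effect sizes of the kind used in meta-analyses; the numbers are invented placeholders, not the effect sizes combined in this study.

```python
# Fixed-effect meta-analytic pooling with inverse-variance weights on
# hypothetical study-level effect sizes.
import numpy as np

effect_sizes = np.array([0.45, 0.30, 0.60, 0.25, 0.50])   # hypothetical Cohen's d values
variances    = np.array([0.04, 0.06, 0.09, 0.05, 0.07])   # hypothetical sampling variances

weights = 1.0 / variances
pooled = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```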

  10. Temporal integration: intentional sound discrimination does not modulate stimulus-driven processes in auditory event synthesis.

    Science.gov (United States)

    Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter

    2002-12-01

    Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes in response to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are fully governed by the context within which the sounds occur. Perception of the deviants as two separate sound events (the top-down effects) did not change the initial neural representation of the same deviants as one event (indexed by the MMN), without a corresponding change in the stimulus-driven sound organization.

  11. The role of the auditory brainstem in processing musically-relevant pitch

    Directory of Open Access Journals (Sweden)

    Gavin M. Bidelman

    2013-05-01

    Full Text Available Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  12. Discrimination of timbre in early auditory responses of the human brain.

    Directory of Open Access Journals (Sweden)

    Jaeho Seol

    Full Text Available BACKGROUND: The issue of how differences in timbre are represented in the neural response still has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employ phasing and clipping of tones to produce auditory stimuli that differ in timbre, reflecting its multidimensional nature. We investigated the auditory response as well as sensory gating, using magnetoencephalography (MEG). METHODOLOGY/PRINCIPAL FINDINGS: Thirty-five healthy subjects without hearing deficit participated in the experiments. Two tones of the same or different timbre were presented as pairs in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result might support the view that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. An effect of S1 on the response to the second stimulus of a pair was observed in the M100 of the left hemisphere, whereas only in the right hemisphere did both the M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, they reveal that auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  14. Predicting the threshold of pulse-train electrical stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    Science.gov (United States)

    Xu, Yifang; Collins, Leslie M

    2004-04-01

    The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to bi-phasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an NofM model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, which is considered to be the brief look. A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce calculational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.
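
    In the spirit of the stochastic pulse-train models described above, the following much-simplified Monte Carlo sketch tracks per-pulse firing across an ensemble of fibers whose thresholds carry both relative spread (noise) and refractory elevation from each fiber's previous spike; all parameter values are illustrative assumptions rather than the paper's fitted values.

```python
# Ensemble simulation of per-pulse firing efficiency for a pulse train, with
# threshold noise (relative spread) and exponential refractory recovery.
import numpy as np

rng = np.random.default_rng(4)

N_FIBERS   = 1000
N_PULSES   = 20
PULSE_RATE = 1000.0                 # pulses per second
T_ABS      = 0.33e-3                # absolute refractory period (s), assumed
TAU        = 0.41e-3                # relative refractory time constant (s), assumed
THRESHOLD  = 1.0                    # nominal single-pulse threshold (arb. units)
REL_SPREAD = 0.06                   # relative spread of threshold noise, assumed
LEVEL      = 1.0                    # stimulus level (arb. units)

ipi = 1.0 / PULSE_RATE
last_spike = np.full(N_FIBERS, -np.inf)   # time of each fiber's last spike
firing_efficiency = []

for k in range(N_PULSES):
    t = k * ipi
    dt = t - last_spike
    # Threshold elevation due to refractoriness (infinite inside the absolute period).
    elev = np.where(dt > T_ABS, 1.0 / (1.0 - np.exp(-(dt - T_ABS) / TAU)), np.inf)
    noisy_threshold = THRESHOLD * elev * (1 + REL_SPREAD * rng.normal(size=N_FIBERS))
    fired = LEVEL > noisy_threshold
    last_spike[fired] = t
    firing_efficiency.append(fired.mean())

print("per-pulse firing efficiency:", np.round(firing_efficiency, 3))
```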

  15. Electrophysiological assessment of auditory processing disorder in children with non-syndromic cleft lip and/or palate.

    Science.gov (United States)

    Ma, Xiaoran; McPherson, Bradley; Ma, Lian

    2016-01-01

    Cleft lip and/or palate is a common congenital craniofacial malformation found worldwide. A frequently associated disorder is conductive hearing loss, and this disorder has been thoroughly investigated in children with non-syndromic cleft lip and/or palate (NSCL/P). However, analysis of auditory processing function is rarely reported for this population, although this issue should not be ignored since abnormal auditory cortical structures have been found in populations with cleft disorders. The present study utilized electrophysiological tests to assess the auditory status of a large group of children with NSCL/P, and investigated whether this group had less robust central auditory processing abilities compared to craniofacially normal children. A total of 146 children with NSCL/P who had normal peripheral hearing thresholds, and 60 craniofacially normal children aged from 6 to 15 years, were recruited. Electrophysiological tests, including auditory brainstem response (ABR), P1-N1-P2 complex, and P300 component recording, were conducted. ABR and N1 wave latencies were significantly prolonged in children with NSCL/P. An atypical developmental trend was found for long latency potentials in children with cleft compared to control group children. Children with unilateral cleft lip and palate showed a greater level of abnormal results compared with other cleft subgroups, whereas the cleft lip subgroup had the most robust responses for all tests. Children with NSCL/P may have slower than normal neural transmission times between the peripheral auditory nerve and brainstem. Possible delayed development of myelination and synaptogenesis may also influence auditory processing function in this population. Present research outcomes were consistent with previous, smaller-sample electrophysiological studies on infants and children with cleft lip/palate disorders. In view of these findings, and reports of educational disadvantage associated with cleft disorders, further research in this area is warranted.

  16. Artificial Neural Networks For Hadron Hadron Cross-sections

    International Nuclear Information System (INIS)

    ELMashad, M.; ELBakry, M.Y.; Tantawy, M.; Habashy, D.M.

    2011-01-01

    In recent years, artificial neural networks (ANN) have emerged as a mature and viable framework with many applications in various areas. Artificial neural network theory is sometimes used to refer to a branch of computational science that uses neural networks as models to simulate or analyze complex phenomena and/or study the principles of operation of neural networks analytically. In this work, a model of hadron-hadron collisions using the ANN technique is presented; the ANN-based model calculates the cross sections of hadron-hadron collisions. The results amply demonstrate the feasibility of this new technique in extracting collision features and prove its effectiveness.
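
    As a rough illustration of the approach, the sketch below fits a small feed-forward network to map a single collision feature (here, the logarithm of the squared centre-of-mass energy) to a total cross section. The training data are synthetic placeholders; the original work trained on experimental hadron-hadron data and may have used a different architecture and feature set.

```python
# A minimal sketch of fitting a small feed-forward ANN to map a collision
# feature to a cross section. The training data are synthetic stand-ins,
# not experimental hadron-hadron measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in data: log(s) -> total cross section (mb), loosely
# mimicking a slowly rising cross section with energy.
log_s = rng.uniform(1.0, 8.0, size=(200, 1))
sigma = 35.0 + 0.3 * log_s[:, 0] ** 2 + rng.normal(0.0, 0.5, 200)

model = MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(log_s, sigma)

print("predicted cross section at log(s) = 6:", model.predict([[6.0]])[0])
```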

  17. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  18. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS).

    Science.gov (United States)

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.
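
    As background on how fNIRS yields the oxy-hemoglobin concentration changes analyzed above, the sketch below applies the modified Beer-Lambert law to dual-wavelength optical-density changes. The extinction coefficients, source-detector separation, differential pathlength factors, and example optical-density values are illustrative assumptions, not parameters from the study's processing pipeline.

```python
# A minimal sketch of the modified Beer-Lambert law step that converts
# dual-wavelength fNIRS optical-density changes into oxy-/deoxy-hemoglobin
# concentration changes. All numbers are illustrative assumptions.
import numpy as np

# Rows: wavelengths (690 nm, 830 nm); columns: HbO, HbR. Illustrative values
# with the correct qualitative ordering (HbR absorbs more at 690 nm,
# HbO more at 830 nm); nominal units of cm^-1 per (mol/L).
E = np.array([[276.0, 2051.0],
              [974.0,  693.0]])
d_cm = 3.0                      # source-detector separation (cm)
dpf = np.array([6.5, 5.9])      # differential pathlength factor per wavelength

def mbll(delta_od):
    """Solve dOD(lambda) = d * DPF(lambda) * (eHbO*dHbO + eHbR*dHbR) for dHbO, dHbR."""
    A = E * (d_cm * dpf)[:, None]
    d_hbo, d_hbr = np.linalg.solve(A, np.asarray(delta_od, dtype=float))
    return d_hbo, d_hbr

d_hbo, d_hbr = mbll([0.012, 0.018])   # example OD changes during stimulation
print(f"dHbO = {d_hbo:.2e} mol/L, dHbR = {d_hbr:.2e} mol/L")
```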

  19. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS)

    Directory of Open Access Journals (Sweden)

    Mohamad Issa

    2016-01-01

    Full Text Available Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  20. Role of SDF1/CXCR4 Interaction in Experimental Hemiplegic Models with Neural Cell Transplantation

    Directory of Open Access Journals (Sweden)

    Noboru Suzuki

    2012-02-01

    Full Text Available Much attention has been focused on neural cell transplantation because of its promising clinical applications. We have reported that embryonic stem (ES) cell derived neural stem/progenitor cell transplantation significantly improved motor functions in a hemiplegic mouse model. It is important to understand the molecular mechanisms governing neural regeneration of the damaged motor cortex after the transplantation. Recent investigations disclosed that chemokines participated in the regulation of migration and maturation of neural cell grafts. In this review, we summarize the involvement of inflammatory chemokines including stromal cell derived factor 1 (SDF1) in neural regeneration after ES cell derived neural stem/progenitor cell transplantation in mouse stroke models.

  1. The effect of noise exposure during the developmental period on the function of the auditory system.

    Science.gov (United States)

    Bureš, Zbyněk; Popelář, Jiří; Syka, Josef

    2017-09-01

    Recently, there has been growing evidence that development and maturation of the auditory system depends substantially on the afferent activity supplying inputs to the developing centers. In cases when this activity is altered during early ontogeny as a consequence of, e.g., an unnatural acoustic environment or acoustic trauma, the structure and function of the auditory system may be severely affected. Pathological alterations may be found in populations of ribbon synapses of the inner hair cells, in the structure and function of neuronal circuits, or in auditory driven behavioral and psychophysical performance. Three characteristics of the developmental impairment are of key importance: first, they often persist to adulthood, permanently influencing the quality of life of the subject; second, their manifestations are different and sometimes even contradictory to the impairments induced by noise trauma in adulthood; third, they may be 'hidden' and difficult to diagnose by standard audiometric procedures used in clinical practice. This paper reviews the effects of early interventions to the auditory system, in particular, of sound exposure during ontogeny. We summarize the results of recent morphological, electrophysiological, and behavioral experiments, discuss the putative mechanisms and hypotheses, and draw possible consequences for human neonatal medicine and noise health. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical enhancement of midbrain response selectivity is mediated, at least in part, by the indirect cholinergic pathway.

  3. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  4. Slab replacement maturity guidelines : [summary].

    Science.gov (United States)

    2014-04-01

    Concrete sets in hours at moderate temperatures, but the bonds that make concrete strong continue to mature over days to years. However, for replacement concrete slabs on highways, it is crucial that concrete develop enough strength within ...
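
    As background on the maturity concept referred to above, the sketch below computes the Nurse-Saul time-temperature factor commonly used (e.g., in ASTM C1074) to track early-age strength development. The temperature history and datum temperature are placeholders; the guidelines summarized here may instead specify the equivalent-age (Arrhenius) method or different parameters.

```python
# A minimal sketch of the Nurse-Saul maturity (time-temperature factor)
# calculation used to estimate in-place concrete strength development.
# Temperature history and datum temperature are illustrative placeholders.
def nurse_saul_maturity(temps_c, dt_hours, datum_c=0.0):
    """Time-temperature factor M = sum((T - T0) * dt), in degC-hours."""
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

# Hourly concrete temperatures (degC) over the first day after placement.
temps = [22, 25, 30, 34, 36, 35, 33, 31, 29, 28, 27, 26,
         25, 24, 24, 23, 23, 22, 22, 21, 21, 21, 20, 20]
print("maturity after 24 h:", nurse_saul_maturity(temps, 1.0), "degC-hours")
```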

  5. SOUL System Maturation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Busek Co. Inc. proposes to advance the maturity of an innovative Spacecraft on Umbilical Line (SOUL) System suitable for a wide variety of applications of interest...

  6. SOUL System Maturation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Busek Co. Inc. proposes to advance the maturity of an innovative Spacecraft on Umbilical Line (SOUL) System suitable for a wide variety of applications of interest...

  7. Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise

    Science.gov (United States)

    Moore, R. Channing; Lee, Tyler; Theunissen, Frédéric E.

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex. PMID:23505354
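
    The sketch below illustrates one way the described tuning (a preference for slowly varying sounds with sharp spectral structure) could be turned into a modulation-domain filter: the log spectrogram is transformed into the modulation domain, a mask retains slow temporal modulations that carry fine spectral structure, and the filtered log spectrogram is returned. This is a loose illustration, not the authors' algorithm; the STFT settings, cutoff values, and hard binary mask are arbitrary choices, and reconstructing a waveform would additionally require phase estimation (e.g., Griffin-Lim).

```python
# An illustrative modulation-domain filter loosely inspired by the tuning
# described above (slow temporal modulations, sharp spectral structure).
# STFT settings, cutoffs, and the hard mask are arbitrary demonstration choices.
import numpy as np
from scipy.signal import stft

def modulation_filter_logspec(x, fs, t_mod_cutoff_hz=20.0,
                              f_mod_cutoff_cyc_per_khz=0.5):
    """Return a log spectrogram filtered in the modulation domain."""
    f, t, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
    logspec = np.log(np.abs(Z) + 1e-10)

    # 2D FFT of the log spectrogram gives the modulation spectrum:
    # axis 0 = spectral modulation (cycles/Hz), axis 1 = temporal modulation (Hz)
    M = np.fft.fft2(logspec)
    wf = np.fft.fftfreq(logspec.shape[0], d=f[1] - f[0])
    wt = np.fft.fftfreq(logspec.shape[1], d=t[1] - t[0])

    # Keep slow temporal modulations that carry sharp spectral structure
    keep = (np.abs(wt)[None, :] <= t_mod_cutoff_hz) & \
           (np.abs(wf)[:, None] * 1000.0 >= f_mod_cutoff_cyc_per_khz)
    keep[0, 0] = True                       # retain the overall (DC) level
    return f, t, np.real(np.fft.ifft2(M * keep))

# Example: a synthetic harmonic "song" embedded in white noise
fs = 16000
tt = np.arange(0.0, 1.0, 1.0 / fs)
song = sum(np.sin(2 * np.pi * f0 * tt) for f0 in (500.0, 1000.0, 1500.0))
noisy = song + 0.5 * np.random.default_rng(0).standard_normal(tt.size)
freqs, times, filtered_logspec = modulation_filter_logspec(noisy, fs)
print(filtered_logspec.shape)
```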

  8. Noise-invariant neurons in the avian auditory cortex: hearing the song in noise.

    Science.gov (United States)

    Moore, R Channing; Lee, Tyler; Theunissen, Frédéric E

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

  9. Noise-invariant neurons in the avian auditory cortex: hearing the song in noise.

    Directory of Open Access Journals (Sweden)

    R Channing Moore

    Full Text Available Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

  10. Focal Suppression of Distractor Sounds by Selective Attention in Auditory Cortex.

    Science.gov (United States)

    Schwartz, Zachary P; David, Stephen V

    2018-01-01

    Auditory selective attention is required for parsing crowded acoustic environments, but cortical systems mediating the influence of behavioral state on auditory perception are not well characterized. Previous neurophysiological studies suggest that attention produces a general enhancement of neural responses to important target sounds versus irrelevant distractors. However, behavioral studies suggest that in the presence of masking noise, attention provides a focal suppression of distractors that compete with targets. Here, we compared effects of attention on cortical responses to masking versus non-masking distractors, controlling for effects of listening effort and general task engagement. We recorded single-unit activity from primary auditory cortex (A1) of ferrets during behavior and found that selective attention decreased responses to distractors masking targets in the same spectral band, compared with spectrally distinct distractors. This suppression enhanced neural target detection thresholds, suggesting that limited attention resources serve to focally suppress responses to distractors that interfere with target detection. Changing effort by manipulating target salience consistently modulated spontaneous but not evoked activity.
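
    As an illustration of how a "neural target detection threshold" of the kind mentioned above can be quantified, the sketch below computes a spike-count d' between target-present and reference trials; sweeping target level and finding where d' crosses a criterion (e.g., 1) would then yield a threshold. The spike counts are simulated placeholders, not data from the study.

```python
# A minimal sketch of a spike-count d' computation of the sort used to
# quantify neural target detection. Spike counts are simulated placeholders.
import numpy as np

def neural_dprime(target_counts, reference_counts):
    """d' between spike-count distributions on target vs. reference trials."""
    t = np.asarray(target_counts, dtype=float)
    r = np.asarray(reference_counts, dtype=float)
    pooled_sd = np.sqrt(0.5 * (t.var(ddof=1) + r.var(ddof=1)))
    return (t.mean() - r.mean()) / pooled_sd

rng = np.random.default_rng(2)
target = rng.poisson(12, 60)       # simulated counts on target trials
reference = rng.poisson(8, 60)     # simulated counts on reference (distractor-only) trials
print(f"neural d' = {neural_dprime(target, reference):.2f}")
```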