WorldWideScience

Sample records for campbelli auditory laterality

  1. Social and emotional values of sounds influence human (Homo sapiens) and non-human primate (Cercopithecus campbelli) auditory laterality.

    Directory of Open Access Journals (Sweden)

    Muriel Basile

    Full Text Available The last decades evidenced auditory laterality in vertebrates, offering important new insights into the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, to 8-9-year-old schoolgirls and to adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls, humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We evidenced a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same-class/group members elicited significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species, as a left and a right hemisphere preference was found in humans and monkeys, respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes just moved their eyes. This study supports theories defending differential roles played by the two hemispheres in primates' auditory laterality and shows that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.

  2. Social and emotional values of sounds influence human (Homo sapiens) and non-human primate (Cercopithecus campbelli) auditory laterality.

    Science.gov (United States)

    Basile, Muriel; Lemasson, Alban; Blois-Heulin, Catherine

    2009-07-17

    The last decades evidenced auditory laterality in vertebrates, offering important new insights into the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, to 8-9-year-old schoolgirls and to adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls, humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We evidenced a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same-class/group members elicited significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species, as a left and a right hemisphere preference was found in humans and monkeys, respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes just moved their eyes. This study supports theories defending differential roles played by the two hemispheres in primates' auditory laterality and shows that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.
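
    The Wilcoxon signed-rank statistics quoted in this abstract (e.g. T = 0, p = 0.03) can be computed from paired per-subject orientation counts. The sketch below is a minimal illustration of that test; the counts and group size are invented placeholders, not data from Basile et al.

```python
# Hypothetical sketch: testing auditory laterality with a Wilcoxon signed-rank test,
# as quoted in the abstract (T = 0, p = 0.03 for the monkeys). Counts are invented
# placeholders, not data from the study.
import numpy as np
from scipy.stats import wilcoxon

# Per-subject counts of right- vs left-oriented head turns to "negative" voices
right_turns = np.array([6, 5, 7, 6, 8, 5])   # hypothetical
left_turns  = np.array([2, 1, 3, 2, 1, 2])   # hypothetical

# Paired test on the per-subject left/right difference; a significant result
# indicates a consistent orienting bias (auditory laterality).
stat, p = wilcoxon(right_turns, left_turns)
print(f"Wilcoxon T = {stat:.1f}, p = {p:.3f}")
```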

  3. Auditory lateralization of conspecific and heterospecific vocalizations in cats.

    Science.gov (United States)

    Siniscalchi, Marcello; Laddago, Serena; Quaranta, Angelo

    2016-01-01

    Auditory lateralization in response to both conspecific and heterospecific vocalizations (dog vocalizations) was observed in 16 tabby cats (Felis catus). Six different vocalizations were used: cat "purring," "meowing" and "growling," and typical dog vocalizations of "disturbance," "isolation" and "play." The head-orienting paradigm showed that cats turned their head with the right ear leading (left hemisphere activation) in response to their species-typical vocalizations ("meow" and "purring"); on the other hand, a clear bias in the use of the left ear (right hemisphere activation) was observed in response to vocalizations eliciting intense emotion (dog vocalizations of "disturbance" and "isolation"). Overall, these findings suggest that the auditory sensory domain is also lateralized in the cat, stressing the role of the left hemisphere in intraspecific communication and of the right hemisphere in processing threatening and alarming stimuli.

  4. Conductive Hearing Loss during Infancy: Effects on Later Auditory Brain Stem Electrophysiology.

    Science.gov (United States)

    Gunnarson, Adele D.; Finitzo, Terese

    1991-01-01

    Long-term effects on auditory electrophysiology from early fluctuating hearing loss were studied in 27 children, aged 5 to 7 years, who had been evaluated originally in infancy. Findings suggested that early fluctuating hearing loss disrupts later auditory brain stem electrophysiology. (Author/DB)

  5. Lateralization of functional magnetic resonance imaging (fMRI) activation in the auditory pathway of patients with lateralized tinnitus

    Energy Technology Data Exchange (ETDEWEB)

    Smits, Marion [Erasmus MC - University Medical Center Rotterdam, Department of Radiology, Hs 224, Rotterdam (Netherlands); Kovacs, Silvia; Peeters, Ronald R.; Hecke, Paul van; Sunaert, Stefan [University Hospitals of the Catholic University Leuven, Department of Radiology, Leuven (Belgium); Ridder, Dirk de [University of Antwerp, Department of Neurosurgery, Edegem (Belgium)

    2007-08-15

    Tinnitus is hypothesized to be an auditory phantom phenomenon resulting from spontaneous neuronal activity somewhere along the auditory pathway. We performed fMRI of the entire auditory pathway, including the inferior colliculus (IC), the medial geniculate body (MGB) and the auditory cortex (AC), in 42 patients with tinnitus and 10 healthy volunteers to assess lateralization of fMRI activation. Subjects were scanned on a 3T MRI scanner. A T2*-weighted EPI silent gap sequence was used during the stimulation paradigm, which consisted of a blocked design of 12 epochs in which music, presented binaurally through headphones, was switched on and off for periods of 50 s. Using SPM2 software, single-subject and group statistical parametric maps were calculated. Lateralization of activation was assessed qualitatively and quantitatively. Tinnitus was lateralized in 35 patients (83%, 13 right-sided and 22 left-sided). Significant signal change (P_corrected < 0.05) was found bilaterally in the primary and secondary AC, the IC and the MGB. Signal change was symmetrical in patients with bilateral tinnitus. In patients with lateralized tinnitus, fMRI activation was lateralized towards the side of perceived tinnitus in the primary AC and IC in patients with right-sided tinnitus, and in the MGB in patients with left-sided tinnitus. In healthy volunteers, activation in the primary AC was left-lateralized. Our paradigm adequately visualized the auditory pathways in tinnitus patients. In lateralized tinnitus fMRI activation was also lateralized, supporting the hypothesis that tinnitus is an auditory phantom phenomenon. (orig.)
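
    The left/right asymmetry of activation reported in records like this one is commonly summarized with a lateralization index LI = (L - R) / (L + R). The sketch below is a generic illustration using suprathreshold voxel counts; the counts are hypothetical and the original study's SPM2 pipeline is not reproduced.

```python
# Minimal sketch of a lateralization index LI = (L - R) / (L + R), a standard
# way to quantify hemispheric asymmetry of fMRI activation. Voxel counts are
# hypothetical; the original study used SPM2 statistical maps.
def lateralization_index(left_voxels: int, right_voxels: int) -> float:
    """Return LI in [-1, 1]; positive = left-lateralized, negative = right."""
    total = left_voxels + right_voxels
    if total == 0:
        raise ValueError("no suprathreshold voxels in either hemisphere")
    return (left_voxels - right_voxels) / total

# Example: primary auditory cortex activation in a hypothetical subject
li_ac = lateralization_index(left_voxels=420, right_voxels=310)
print(f"AC lateralization index: {li_ac:+.2f}")  # > 0 means left-lateralized
```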

  6. Cortical connections of auditory cortex in marmoset monkeys: lateral belt and parabelt regions

    OpenAIRE

    de la Mothe, Lisa A.; Blumell, Suzanne; Kajikawa, Yoshinao; Hackett, Troy A.

    2012-01-01

    The current working model of primate auditory cortex is constructed from a number of studies of both New and Old World monkeys. It includes three levels of processing. A primary level, the core region, is surrounded both medially and laterally by a secondary belt region. A third level of processing, the parabelt region, is located lateral to the belt. The marmoset monkey (Callithrix jacchus jacchus) has become an important model system to study auditory processing, but its anatomical organiza...

  7. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates that auditory cortex in humans, as in monkeys, is located on the superior temporal gyrus (STG), and is functionally and structurally altered in illnesses such as schizophrenia and Alzheimer's disease. In this study, we used serial sets of adjacent sections processed for Nissl substance, acetylcholinesterase... the location of the lateral belt and parabelt with respect to gross anatomical landmarks. Architectonic criteria for the core, lateral belt, and parabelt were readily adapted from monkey to human. Additionally, we found evidence for an architectonic subdivision within the parabelt, present in both species...

  8. Lateralization of Music Processing with Noises in the Auditory Cortex: An fNIRS Study

    OpenAIRE

    Hendrik eSantosa; Melissa Jiyoun Hong; Keum-Shik eHong

    2014-01-01

    The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music+noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distingui...

  9. Lateralization of music processing with noises in the auditory cortex: an fNIRS study

    OpenAIRE

    Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik

    2014-01-01

    The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish s...

  10. Asymmetric lateral inhibitory neural activity in the auditory system: a magnetoencephalographic study

    Directory of Open Access Journals (Sweden)

    Gunji Atsuko

    2007-05-01

    Full Text Available Background: Decrements of auditory evoked responses elicited by repeatedly presented sounds with similar frequencies have been well investigated by means of electroencephalography and magnetoencephalography (MEG). However, the possible inhibitory interactions between different neuronal populations remain poorly understood. In the present study, we investigated the effect of preceding notch-filtered noises (NFNs) with different frequency spectra on a following test tone using MEG. Results: Three-second exposure to the NFNs resulted in significantly different N1m responses to a 1000 Hz test tone presented 500 ms after the offset of the NFNs. The NFN with a lower spectral edge closest to the test tone decreased the N1m amplitude the most. Conclusion: The decrement of the N1m component after exposure to the NFNs could be explained partly in terms of lateral inhibition. The results demonstrated that the amplitude of the N1m was more effectively influenced by inhibitory lateral connections originating from neurons corresponding to lower rather than higher frequencies. We interpret this effect of asymmetric lateral inhibition in the auditory system as an important contribution to reducing the asymmetric neural activity profiles originating from the cochlea.

  11. Impaired Facilitatory Mechanisms of Auditory Attention After Damage of the Lateral Prefrontal Cortex

    Science.gov (United States)

    Bidet-Caulet, Aurélie; Buchanan, Kelly G.; Viswanath, Humsini; Black, Jessica; Scabini, Donatella; Bonnet-Brilhault, Frédérique; Knight, Robert T.

    2015-01-01

    There is growing evidence that auditory selective attention operates via distinct facilitatory and inhibitory mechanisms enabling selective enhancement and suppression of sound processing, respectively. The lateral prefrontal cortex (LPFC) plays a crucial role in the top-down control of selective attention. However, whether the LPFC controls facilitatory, inhibitory, or both attentional mechanisms is unclear. Facilitatory and inhibitory mechanisms were assessed, in patients with LPFC damage, by comparing event-related potentials (ERPs) to attended and ignored sounds with ERPs to these same sounds when attention was equally distributed to all sounds. In control subjects, we observed 2 late frontally distributed ERP components: a transient facilitatory component occurring from 150 to 250 ms after sound onset; and an inhibitory component onsetting at 250 ms. Only the facilitatory component was affected in patients with LPFC damage: this component was absent when attending to sounds delivered in the ear contralateral to the lesion, with the most prominent decreases observed over the damaged brain regions. These findings have 2 important implications: (i) they provide evidence for functionally distinct facilitatory and inhibitory mechanisms supporting late auditory selective attention; (ii) they show that the LPFC is involved in the control of the facilitatory mechanisms of auditory attention. PMID:24925773

  12. Cortical connections of auditory cortex in marmoset monkeys: lateral belt and parabelt regions.

    Science.gov (United States)

    de la Mothe, Lisa A; Blumell, Suzanne; Kajikawa, Yoshinao; Hackett, Troy A

    2012-05-01

    The current working model of primate auditory cortex is constructed from a number of studies of both New and Old World monkeys. It includes three levels of processing. A primary level, the core region, is surrounded both medially and laterally by a secondary belt region. A third level of processing, the parabelt region, is located lateral to the belt. The marmoset monkey (Callithrix jacchus jacchus) has become an important model system to study auditory processing, but its anatomical organization has not been fully established. In previous studies, we focused on the architecture and connections of the core and medial belt areas (de la Mothe et al., 2006a, J Comp Neurol 496:27-71; de la Mothe et al., 2006b, J Comp Neurol 496:72-96). In this study, the corticocortical connections of the lateral belt and parabelt were examined in the marmoset. Tracers were injected into both rostral and caudal portions of the lateral belt and parabelt. Both regions revealed topographic connections along the rostrocaudal axis, where caudal areas of injection had stronger connections with caudal areas, and rostral areas of injection with rostral areas. The lateral belt had strong connections with the core, belt, and parabelt, whereas the parabelt had strong connections with the belt but not the core. Label in the core from injections in the parabelt was significantly reduced or absent, consistent with the idea that the parabelt relies mainly on the belt for its cortical input. In addition, the present and previous studies indicate hierarchical principles of anatomical organization in the marmoset that are consistent with those observed in other primates.

  13. Aberrant lateralization of brainstem auditory evoked responses by individuals with Down syndrome.

    Science.gov (United States)

    Miezejeski, C M; Heaney, G; Belser, R; Sersen, E A

    1994-01-01

    Brainstem auditory evoked response latencies were studied in 80 males (13 with Down syndrome, 23 with developmental disability due to other causes, and 44 with no disability). Latencies for waves P3 and P5 were shorter for the Down syndrome group than for the other groups, although at P5 the difference from the nondisabled group was not significant. The pattern of left versus right ear responses in the Down syndrome group differed from those of the other groups. This finding was related to research noting decreased lateralization of, and decreased ability in, receptive and expressive language among people with Down syndrome. Some individuals required sedation, and a lateralized effect of sedation was noted.

  14. Lateralization of Music Processing with Noises in the Auditory Cortex: An fNIRS Study

    Directory of Open Access Journals (Sweden)

    Hendrik eSantosa

    2014-12-01

    Full Text Available The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music+noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. The right-hemispheric lateralization in music processing was about 75% (instead of continuous music, only music segments were heard). If the stimuli were only noises, the lateralization was about 65%. But if the music was mixed with noises, the right-hemispheric lateralization increased. Particularly, if the noise was slightly lower than the music (i.e., music level 10~15%, noise level 10%), all subjects showed right-hemispheric lateralization: this is attributed to the subjects' effort to hear the music in the presence of noise. However, too much noise reduced the subjects' discerning efforts.
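
    The feature described in this abstract (the difference between the mean and the minimum of the hemodynamic response for a stimulus) is straightforward to compute per hemisphere. The sketch below illustrates it on simulated signals; the epoch length, sampling rate, and amplitudes are assumptions, not the study's fNIRS data.

```python
# Hedged sketch of the feature described above: for each stimulus epoch, the
# difference between the mean and the minimum of the hemodynamic response is
# used to compare left- vs right-hemisphere involvement. Signals are simulated
# placeholders, not fNIRS data from the study.
import numpy as np

def mean_minus_min(hbo_epoch: np.ndarray) -> float:
    """Feature used to characterize a stimulus-evoked hemodynamic response."""
    return float(np.mean(hbo_epoch) - np.min(hbo_epoch))

rng = np.random.default_rng(0)
t = np.linspace(0, 15, 150)                       # one 15 s epoch, 10 Hz sampling
left_hbo  = 0.5 * np.sin(t / 5) + 0.05 * rng.standard_normal(t.size)
right_hbo = 0.9 * np.sin(t / 5) + 0.05 * rng.standard_normal(t.size)

f_left, f_right = mean_minus_min(left_hbo), mean_minus_min(right_hbo)
side = "right" if f_right > f_left else "left"
print(f"left = {f_left:.3f}, right = {f_right:.3f} -> {side}-hemisphere dominant")
```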

  15. An organization of visual and auditory fear conditioning in the lateral amygdala.

    Science.gov (United States)

    Bergstrom, Hadley C; Johnson, Luke R

    2014-12-01

    Pavlovian fear conditioning is an evolutionary conserved and extensively studied form of associative learning and memory. In mammals, the lateral amygdala (LA) is an essential locus for Pavlovian fear learning and memory. Despite significant progress unraveling the cellular mechanisms responsible for fear conditioning, very little is known about the anatomical organization of neurons encoding fear conditioning in the LA. One key question is how fear conditioning to different sensory stimuli is organized in LA neuronal ensembles. Here we show that Pavlovian fear conditioning, formed through either the auditory or visual sensory modality, activates a similar density of LA neurons expressing a learning-induced phosphorylated extracellular signal-regulated kinase (p-ERK1/2). While the size of the neuron population specific to either memory was similar, the anatomical distribution differed. Several discrete sites in the LA contained a small but significant number of p-ERK1/2-expressing neurons specific to either sensory modality. The sites were anatomically localized to different levels of the longitudinal plane and were independent of both memory strength and the relative size of the activated neuronal population, suggesting some portion of the memory trace for auditory and visually cued fear conditioning is allocated differently in the LA. Presenting the visual stimulus by itself did not activate the same p-ERK1/2 neuron density or pattern, confirming the novelty of light alone cannot account for the specific pattern of activated neurons after visual fear conditioning. Together, these findings reveal an anatomical distribution of visual and auditory fear conditioning at the level of neuronal ensembles in the LA.

  16. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    Directory of Open Access Journals (Sweden)

    Christo ePantev

    2012-06-01

    Full Text Available Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for three hours inhibited the neuronal activity in the auditory cortex corresponding to the center frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus - tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective for tinnitus frequencies ≤ 8 kHz than for tinnitus frequencies > 8 kHz, and that training should be employed over the long term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive, and low-cost treatment approach for tonal tinnitus into routine clinical practice.
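
    The core signal-processing step behind TMNMT is removing a band of energy around the individual tinnitus frequency from the music. The sketch below shows one plausible way to do this; the one-octave notch width and Butterworth band-stop design are illustrative assumptions, not the authors' exact processing chain.

```python
# Minimal sketch of "notching" music around an individual tinnitus frequency,
# the core idea behind tailor-made notched music training (TMNMT). Filter design
# details (one-octave notch width, Butterworth band-stop) are illustrative
# assumptions, not the authors' exact processing chain.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def notch_music(audio: np.ndarray, fs: float, tinnitus_hz: float,
                octave_width: float = 1.0, order: int = 6) -> np.ndarray:
    """Remove a band of width `octave_width` octaves centred on `tinnitus_hz`."""
    low = tinnitus_hz * 2 ** (-octave_width / 2)
    high = tinnitus_hz * 2 ** (octave_width / 2)
    sos = butter(order, [low, high], btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, audio)

# Example on a synthetic signal (white noise standing in for music)
fs = 44100.0
music = np.random.default_rng(1).standard_normal(int(fs) * 2)
notched = notch_music(music, fs, tinnitus_hz=6000.0)
```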

  17. Between- and within-Ear Congruency and Laterality Effects in an Auditory Semantic/Emotional Prosody Conflict Task

    Science.gov (United States)

    Techentin, Cheryl; Voyer, Daniel; Klein, Raymond M.

    2009-01-01

    The present study investigated the influence of within- and between-ear congruency on interference and laterality effects in an auditory semantic/prosodic conflict task. Participants were presented dichotically with words (e.g., mad, sad, glad) pronounced in either congruent or incongruent emotional tones (e.g., angry, happy, or sad) and…

  18. Albino and pink-eyed dilution mutants in the Russian dwarf hamster Phodopus campbelli.

    Science.gov (United States)

    Robinson, R

    1996-01-01

    The coat color mutant genes albino (c) and pink-eyed dilution (p) are described in the dwarf hamster species Phodopus campbelli. Both genes are inherited as recessive to normal. Tests for linkage between the two genes gave negative results. The apparent absence of linkage is contrasted with the linkage between the homologous alleles c and p in other species of rodents.

  19. Modulatory effects of spectral energy contrasts on lateral inhibition in the human auditory cortex: an MEG study.

    Directory of Open Access Journals (Sweden)

    Alwina Stein

    Full Text Available We investigated the modulation of lateral inhibition in the human auditory cortex by means of magnetoencephalography (MEG). In the first experiment, five acoustic masking stimuli (MS), consisting of noise passed through a digital notch filter centered at 1 kHz, were presented. The spectral energy contrasts of four MS were modified systematically by either amplifying or attenuating the edge-frequency bands around the notch (EFB) by 30 dB. Additionally, the width of EFB amplification/attenuation was varied (3/8 or 7/8 octave on each side of the notch). N1m and auditory steady-state responses (ASSR), evoked by a test stimulus with a carrier frequency of 1 kHz, were evaluated. A consistent dependence of N1m responses upon the preceding MS was observed. The minimal N1m source strength was found in the narrowest amplified EFB condition, representing pronounced lateral inhibition of neurons with characteristic frequencies corresponding to the center frequency of the notch (NOTCH CF) in secondary auditory cortical areas. We tested in a second experiment whether an even narrower bandwidth of EFB amplification would result in further enhanced lateral inhibition of the NOTCH CF. Here three MS were presented, two of which were modified by amplifying EFB widths of 1/8 or 1/24 octave around the notch. We found that N1m responses were again significantly smaller in both amplified EFB conditions as compared to the NFN condition. To our knowledge, this is the first study demonstrating that the energy and width of the EFB around the notch modulate lateral inhibition in human secondary auditory cortical areas. Because it is assumed that chronic tinnitus is caused by a lack of lateral inhibition, these new insights could be used as a tool for further improvement of tinnitus treatments focusing on the lateral inhibition of neurons corresponding to the tinnitus frequency, such as the tailor-made notched music training.

  20. A lateralized functional auditory network is involved in anuran sexual selection

    Indian Academy of Sciences (India)

    FEI XUE; GUANGZHAN FANG; XIZI YUE; ERMI ZHAO; STEVEN E BRAUTH; YEZHONG TANG

    2016-12-01

    Right ear advantage (REA) exists in many land vertebrates in which the right ear and left hemisphere preferentially process conspecific acoustic stimuli such as those related to sexual selection. Although ecological and neural mechanisms for sexual selection have been widely studied, the brain networks involved are still poorly understood. In this study we used multi-channel electroencephalographic data in combination with Granger causal connectivity analysis to demonstrate, for the first time, that the auditory neural network interconnecting the left and right midbrain and forebrain functions asymmetrically in the Emei music frog (Babina daunchina), an anuran species which exhibits REA. The results showed the network was lateralized. Ascending connections between the mesencephalon and telencephalon were stronger on the left side while descending ones were stronger on the right, which matched the REA in this species and implied that inhibition from the forebrain may partly induce the REA. Connections from the telencephalon to the ipsilateral mesencephalon in response to white noise were highest in the non-reproductive stage while those in response to advertisement calls were highest in the reproductive stage, implying shifts in attention resources and living strategy when entering the reproductive season. Finally, these connection changes were sexually dimorphic, revealing sex differences in reproductive roles.
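
    Granger causal connectivity, as used in this record, asks whether the past of one channel improves prediction of another. The sketch below runs a pairwise test on simulated signals; the channel names, lag order, and simulated coupling are assumptions, not the study's EEG data or parameters.

```python
# Hedged sketch of pairwise Granger causal connectivity between two EEG channels
# (e.g., left midbrain vs right forebrain), in the spirit of the analysis
# described above. Channel names, lag order, and the simulated data are
# illustrative assumptions, not the study's recordings or parameters.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 1000
left_mid = rng.standard_normal(n)
# Simulate a directed influence: the right forebrain lags the left midbrain by one sample
right_fb = 0.6 * np.roll(left_mid, 1) + 0.4 * rng.standard_normal(n)

# The test asks whether the SECOND column Granger-causes the FIRST column
data = np.column_stack([right_fb, left_mid])   # does left_mid -> right_fb ?
res = grangercausalitytests(data, maxlag=5, verbose=False)
p_lag1 = res[1][0]["ssr_ftest"][1]
print(f"p-value (lag 1, ssr F-test): {p_lag1:.4g}")
```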

  1. ERP Indications for Sustained and Transient Auditory Spatial Attention with Different Lateralization Cues

    Science.gov (United States)

    Widmann, Andreas; Schröger, Erich

    The present study was designed to investigate ERP effects of auditory spatial attention in a sustained attention condition (where the to-be-attended location is defined in a blockwise manner) and in a transient attention condition (where the to-be-attended location is defined in a trial-by-trial manner). Lateralization in the azimuth plane was manipulated (a) via monaural presentation of left- and right-ear sounds, (b) via interaural intensity differences, (c) via interaural time differences, (d) via an artificial-head recording, and (e) via free-field stimulation. Ten participants were presented with frequent Nogo- and infrequent Go-stimuli. In one half of the experiment participants were instructed to press a button if they detected a Go-stimulus at a predefined side (sustained attention); in the other half they were required to detect Go-stimuli following an arrow-cue at the cued side (transient attention). Results revealed negative differences (Nd) between ERPs elicited by to-be-attended and to-be-ignored sounds in all conditions. These Nd effects were larger for the sustained than for the transient attention condition, indicating that attentional selection according to spatial criteria is improved when subjects can focus on one and the same location for a series of stimuli.
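
    The negative difference (Nd) described here is the average ERP to to-be-attended sounds minus the average ERP to to-be-ignored sounds. The sketch below computes such a difference wave on simulated single-trial epochs; the sampling rate, trial counts, and waveforms are hypothetical.

```python
# Minimal sketch of computing a negative difference (Nd) wave: the average ERP
# to to-be-attended sounds minus the average ERP to to-be-ignored sounds.
# Epoch counts, sampling rate, and the simulated data are hypothetical.
import numpy as np

fs = 500                                     # Hz (assumption)
t = np.arange(-0.1, 0.6, 1 / fs)             # epoch from -100 to 600 ms
rng = np.random.default_rng(5)

# Single-trial epochs (trials x samples) at one fronto-central channel;
# attended sounds get a larger negative deflection around 200 ms
attended = -2e-6 * np.exp(-((t - 0.2) ** 2) / 0.005) + 1e-6 * rng.standard_normal((80, t.size))
ignored  = -1e-6 * np.exp(-((t - 0.2) ** 2) / 0.005) + 1e-6 * rng.standard_normal((80, t.size))

nd = attended.mean(axis=0) - ignored.mean(axis=0)      # negative difference wave
print(f"Nd peak: {1e6 * nd.min():.2f} uV at {1000 * t[np.argmin(nd)]:.0f} ms")
```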

  2. Representation of lateralization and tonotopy in primary versus secondary human auditory cortex

    NARCIS (Netherlands)

    Langers, Dave R. M.; Backes, Walter H.; van Dijk, Pim

    2007-01-01

    Functional MRI was performed to investigate differences in the basic functional organization of the primary and secondary auditory cortex regarding preferred stimulus lateralization and frequency. A modified sparse acquisition scheme was used to spatially map the characteristics of the auditory cortex...

  3. Adapting to alcohol: Dwarf hamster (Phodopus campbelli) ethanol consumption, sensitivity, and hoard fermentation.

    Science.gov (United States)

    Lupfer, Gwen; Murphy, Eric S; Merculieff, Zoe; Radcliffe, Kori; Duddleston, Khrystyne N

    2015-06-01

    Ethanol consumption and sensitivity in many species are influenced by the frequency with which ethanol is encountered in their niches. In Experiment 1, dwarf hamsters (Phodopus campbelli) with ad libitum access to food and water consumed high amounts of unsweetened alcohol solutions. Their consumption of 15%, but not 30%, ethanol was reduced when they were fed a high-fat diet; a high carbohydrate diet did not affect ethanol consumption. In Experiment 2, intraperitoneal injections of ethanol caused significant dose-related motor impairment. Much larger doses administered orally, however, had no effect. In Experiment 3, ryegrass seeds, a common food source for wild dwarf hamsters, supported ethanol fermentation. Results of these experiments suggest that dwarf hamsters may have adapted to consume foods in which ethanol production naturally occurs.

  4. The EPIC model of functional asymmetries: implications for research on laterality in the auditory and other systems.

    Science.gov (United States)

    Lauter, Judith L

    2007-05-01

    More than a century after it was first suggested that behaviors such as speech production and perception might be lateralized in the human brain, many basic questions still remain regarding the nature and basis of right-left functional asymmetries (FAs). The lack of answers to what seems to be a straightforward set of questions may be due to two methodological aspects of laterality research which have hampered work in brain and behavior in general and lateralities in particular. The first is the absence of a biologically based, psychophysically defined taxonomy of stimulus/gesture features for use in tests of laterality. As a result, many researchers resort to cognitive constructs for describing the bases of asymmetries, a decision which has created a gulf separating experimental as well as theoretical work on asymmetries from the biological realities of sensory and motor processing, within the brain as well as at the body periphery. The second obstacle is the lack of a valid taxonomy for individuals. Individual differences are ubiquitous in human subjects as well as non-human animals, yet are typically averaged away as noise rather than respected as possible sources of information. Studies of asymmetry often reveal dramatic individual differences in both the direction and magnitude of asymmetries, yet, when subjected to averaging and group statistics, these often yield insignificant results. This paper reviews two taxonomies which may address some of these problems: the EPIC Model of Functional Asymmetries, and the related Trimodal Model of Brain Organization. The EPIC model classifies functional asymmetries according to four domains (Extrapersonal space, Peripersonal space, Intrapersonal space, and Coordination) and assigns responsibility for these four domains differentially to the two cerebral hemispheres--the right side is seen as "polypotent," responsible for processing in three of the domains, while the left side focuses on processing in peripersonal...

  5. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency: an ERP study.

    Science.gov (United States)

    van Zuijen, Titia L; Plakas, Anna; Maassen, Ben A M; Been, Pieter; Maurits, Natasha M; Krikhaar, Evelien; van Driel, Joram; van der Leij, Aryan

    2012-10-18

    Dyslexia is heritable and associated with auditory processing deficits. We investigated whether temporal auditory processing is compromised in young children at risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or without a familial risk for dyslexia to investigate whether their auditory system was able to detect a temporal change in a tone pattern. The children were followed longitudinally and performed an intelligence and a language development test at ages 4 and 4.5 years. Literacy-related skills were measured at the beginning of second grade, and word- and pseudoword-reading fluency were measured at the end of second grade. The EEG responses showed that control children could detect the temporal change, as indicated by a mismatch response (MMR). The MMR was not observed in at-risk children. Furthermore, the fronto-central MMR amplitude correlated with preliterate language comprehension and with later word reading fluency, but not with phonological awareness. We conclude that temporal auditory processing differentiates young children at risk for dyslexia from controls and is a precursor of preliterate language comprehension and reading fluency.

  6. Synaptic Plasticity and NO-cGMP-PKG Signaling Coordinately Regulate ERK-Driven Gene Expression in the Lateral Amygdala and in the Auditory Thalamus Following Pavlovian Fear Conditioning

    Science.gov (United States)

    Ota, Kristie T.; Monsey, Melissa S.; Wu, Melissa S.; Young, Grace J.; Schafe, Glenn E.

    2010-01-01

    We have recently hypothesized that NO-cGMP-PKG signaling in the lateral nucleus of the amygdala (LA) during auditory fear conditioning coordinately regulates ERK-driven transcriptional changes in both auditory thalamic (MGm/PIN) and LA neurons that serve to promote pre- and postsynaptic alterations at thalamo-LA synapses, respectively. In the…

  7. Screening LGI1 in a cohort of 26 lateral temporal lobe epilepsy patients with auditory aura from Turkey detects a novel de novo mutation.

    Science.gov (United States)

    Kesim, Yesim F; Uzun, Gunes Altiokka; Yucesan, Emrah; Tuncer, Feyza N; Ozdemir, Ozkan; Bebek, Nerses; Ozbek, Ugur; Iseri, Sibel A Ugur; Baykan, Betul

    2016-02-01

    Autosomal dominant lateral temporal lobe epilepsy (ADLTE) is an autosomal dominant epileptic syndrome characterized by focal seizures with auditory or aphasic symptoms. The same phenotype is also observed in a sporadic form of lateral temporal lobe epilepsy (LTLE), namely idiopathic partial epilepsy with auditory features (IPEAF). Heterozygous mutations in LGI1 account for up to 50% of ADLTE families and are only rarely observed in IPEAF cases. In this study, we analysed a cohort of 26 individuals with LTLE diagnosed according to the following criteria: focal epilepsy with auditory aura and absence of cerebral lesions on brain MRI. All patients underwent clinical, neuroradiological and electroencephalography examinations and were afterwards screened for mutations in the LGI1 gene. The single LGI1 mutation identified in this study is a novel missense variant (NM_005097.2: c.1013T>C; p.Phe338Ser) observed de novo in a sporadic patient. This is the first study involving clinical analysis of an LTLE cohort from Turkey and the genetic contribution of LGI1 to the ADLTE phenotype. Identification of rare LGI1 gene mutations in sporadic cases supports a diagnosis of ADLTE and draws attention to potential familial clustering of ADLTE in suggestive generations, which is especially important for genetic counselling.

  8. Abnormal pairing of X and Y sex chromosomes during meiosis I in interspecific hybrids of Phodopus campbelli and P. sungorus.

    Science.gov (United States)

    Ishishita, Satoshi; Tsuboi, Kazuma; Ohishi, Namiko; Tsuchiya, Kimiyuki; Matsuda, Yoichi

    2015-03-24

    Hybrid sterility plays an important role in the maintenance of species identity and promotion of speciation. Male interspecific hybrids from crosses between Campbell's dwarf hamster (Phodopus campbelli) and the Djungarian hamster (P. sungorus) exhibit sterility with abnormal spermatogenesis. However, the meiotic phenotype of these hybrids has not been well described. In the present work, we observed the accumulation of spermatocytes and apoptosis of spermatocyte-like cells in the testes of hybrids between P. campbelli females and P. sungorus males. In hybrid spermatocytes, a high frequency of asynapsis of X and Y chromosomes during the pachytene-like stage and dissociation of these chromosomes during metaphase I (MI) was observed. No autosomal univalency was observed during pachytene-like and MI stages in the hybrids; however, a low frequency of synapsis between autosomes and X or Y chromosomes, interlocking and partial synapsis between autosomal pairs, and γ-H2AFX staining in autosomal chromatin was observed during the pachytene-like stage. Degenerated MI-like nuclei were frequently observed in the hybrids. Most of the spermatozoa in hybrid epididymides exhibited head malformation. These results indicate that the pairing of X and Y chromosomes is more adversely affected than that of autosomes in Phodopus hybrids.

  9. Embryo cryopreservation and in vitro culture of preimplantation embryos in Campbell's hamster (Phodopus campbelli).

    Science.gov (United States)

    Amstislavsky, Sergei; Brusentsev, Eugeny; Kizilova, Elena; Igonina, Tatyana; Abramova, Tatyana; Rozhkova, Irina

    2015-04-01

    The aims of this study were to compare different protocols for freezing and thawing Campbell's hamster (Phodopus campbelli) embryos and to explore the possibilities for their in vitro culture. First, embryos were flushed from the reproductive ducts 2 days post coitum at the two-cell stage and cultured in rat one-cell embryo culture medium (R1ECM) for 48 hours. Most (86.7%) of the two-cell embryos developed to blastocysts in R1ECM. Second, embryos at the two- to eight-cell stages were flushed on the third day post coitum. The eight-cell embryos were frozen in 0.25-mL straws according to standard slow-cooling procedures. Ethylene glycol (EG) was used either as a single cryoprotectant or in a mixture with sucrose. The survival of frozen-thawed embryos was assessed by double staining with fluorescein diacetate and propidium iodide. The use of EG as a single cryoprotectant resulted in fewer live embryos compared with the control (fresh embryos), but the combined use of EG and sucrose improved the survival rate after thawing. Furthermore, rat granulocyte-macrophage colony-stimulating factor (2 ng/mL) improved the rate of in vitro development of frozen-thawed hamster embryos by increasing the final cell number and alleviating nuclear fragmentation. Our data represent the first attempt at freezing and thawing Campbell's hamster embryos and report the possibility of successful in vitro culture for this species in R1ECM supplemented with granulocyte-macrophage colony-stimulating factor.

  10. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  11. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  12. The hemispheric lateralization of the auditory cortex after being stimulated by pure tone: a 1H-MRS study

    Institute of Scientific and Technical Information of China (English)

    梁永辉; 陈贤明; 陈自谦; 倪萍

    2011-01-01

    Objective: To use proton magnetic resonance spectroscopy (1H-MRS) to observe lateralized changes in metabolites of the left and right auditory cortices in normal subjects after pure-tone stimulation. Methods: The auditory cortices of 12 healthy subjects were examined with multi-voxel MR spectroscopy once before and once after pure-tone stimulation. The stimulus was a sinusoidal pure-tone pulse at 90 dB and 1000 Hz. Peak changes of N-acetylaspartate (NAA), creatine (Cr), choline (Cho), glutamine plus glutamate (Glx), and GABA in the bilateral auditory cortices were observed and analysed semi-quantitatively, and left-right lateralization of auditory cortex metabolites was compared before and after stimulation. Results: After pure-tone stimulation, the NAA/(Cho+Cr) and GABA/Cr ratios in the left auditory cortex [(1.28±0.14) and (0.21±0.08), respectively] were higher than before stimulation [(1.02±0.18) and (0.10±0.05), respectively], and the Glx/Cr ratio [(0.03±0.02)] was markedly lower than before stimulation [(0.10±0.04)]; these differences were statistically significant (P<0.05). The GABA/Cr ratio [(0.01±0.11)] was markedly lower than before stimulation [(0.11±0.07)], a significant difference (P<0.05). There were statistically significant differences in the Glx/Cr ratio of the auditory cortex between the two sides after being stimulated by the pure tone. Conclusion: Metabolic lateralization exists in the auditory cortex of the normal human brain after pure-tone stimulation, which may be a basis of its functional asymmetry.

  13. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, the perception of a wrong stimulus or, more precisely, perception in the absence of a stimulus. Here we will discuss four definitions of hallucinations: 1. perceiving a stimulus without the presence of any subject; 2. hallucinations proper, which are wrong perceptions that are not falsifications of a real perception, although they manifest as a new subject and occur along with, and synchronously with, a real perception; 3. hallucination as an out-of-body perception which has no accordance with a real subject; and 4. in a stricter sense, hallucinations defined as perceptions in a conscious and awake state, in the absence of external stimuli, which have the qualities of real perception in that they are vivid, substantial, and located in external objective space. We are going to discuss this in detail here.

  14. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  15. Lateral Asymmetries in Human Evolution

    OpenAIRE

    John L. Bradshaw; Nettleton, Norman C.

    1989-01-01

    Lateral asymmetries are not confined to humans. Palaeozoic trilobites and calcichordates are now known to have been asymmetrical; song control in passerines is vested in the left cerebral hemisphere; learning which is lateralized to the left forebrain of chicks includes imprinting, visual discrimination learning and auditory habituation, while responses to novelty, attack and copulation are activated by the right; in rats the right hemisphere is involved in emotional behavior and spatial disc...

  16. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the fact that there were more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A (p < 0.05). The visual cortices of binocularly blind macaques can thus be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices.

  17. Cooperative dynamics in auditory brain response

    CERN Document Server

    Kwapien, J; Liu, L C; Ioannides, A A

    1998-01-01

    Simultaneous estimates of the activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right and binaural stimulation were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analysed using the concept of mutual information. The analysis constitutes an objective method to address the nature of inter-hemispheric correlations in response to auditory stimulation. The results provide clear evidence for the occurrence of such correlations mediated by a direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the inter-hemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
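
    The mutual-information analysis sketched in this record can be illustrated with a simple plug-in (histogram) estimator evaluated at several lags to see which hemisphere leads. Everything below is simulated and assumes a 1 kHz sampling rate; it is not the authors' MEG pipeline.

```python
# Hedged sketch of the mutual-information approach described above: estimate
# MI between left and right auditory-cortex activity as a function of lag to
# see which hemisphere leads. The histogram estimator and simulated signals
# are illustrative assumptions, not the authors' analysis.
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 16) -> float:
    """Plug-in MI estimate (in bits) from a joint histogram of x and y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
fs = 1000                                               # 1 kHz sampling (assumption)
contra = rng.standard_normal(5000)
ipsi = 0.7 * np.roll(contra, 15) + 0.7 * rng.standard_normal(5000)  # ~15 ms lag

mi = {}
for lag_ms in range(0, 31, 5):
    lag = int(lag_ms * fs / 1000)                       # convert lag from ms to samples
    mi[lag_ms] = mutual_information(contra[: -lag or None], ipsi[lag:])
best = max(mi, key=mi.get)
print(f"MI peaks at ~{best} ms lag -> the contralateral side leads")
```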

  18. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria eMedalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  19. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Background: In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results: We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion: The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  20. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina

    2014-02-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians' subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model in which to study mechanisms of experience-dependent changes in human auditory function.

  1. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  2. Auditory intensity processing: Effect of MRI background noise.

    Science.gov (United States)

    Angenstein, Nicole; Stadler, Jörg; Brechmann, André

    2016-03-01

    Studies on active auditory intensity discrimination in humans showed equivocal results regarding the lateralization of processing. Whereas experiments with a moderate background found evidence for right lateralized processing of intensity, functional magnetic resonance imaging (fMRI) studies with background scanner noise suggest more left lateralized processing. With the present fMRI study, we compared the task dependent lateralization of intensity processing between a conventional continuous echo planar imaging (EPI) sequence with a loud background scanner noise and a fast low-angle shot (FLASH) sequence with a soft background scanner noise. To determine the lateralization of the processing, we employed the contralateral noise procedure. Linearly frequency modulated (FM) tones were presented monaurally with and without contralateral noise. During both the EPI and the FLASH measurement, the left auditory cortex was more strongly involved than the right auditory cortex while participants categorized the intensity of FM tones. This was shown by a strong effect of the additional contralateral noise on the activity in the left auditory cortex. This means a massive reduction in background scanner noise still leads to a significant left lateralized effect. This suggests that the reversed lateralization in fMRI studies with loud background noise in contrast to studies with softer background cannot be fully explained by the MRI background noise.
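
    The stimuli described here are monaural linearly frequency-modulated (FM) tones, optionally paired with noise in the contralateral ear. The sketch below generates such a stereo stimulus; the duration, frequency sweep, and noise level are illustrative assumptions, not the study's exact parameters.

```python
# Hedged sketch of the stimulus logic described above: a linearly frequency-
# modulated (FM) tone delivered to one ear, with optional broadband noise in
# the contralateral ear. Durations, frequencies, and levels are illustrative
# assumptions, not the study's exact parameters.
import numpy as np
from scipy.signal import chirp

def monaural_fm_with_contra_noise(fs: float = 44100.0, dur: float = 0.6,
                                  f0: float = 500.0, f1: float = 1500.0,
                                  ear: str = "left", contra_noise: bool = True):
    """Return a (n_samples, 2) stereo array: FM tone in one ear, noise in the other."""
    t = np.arange(int(fs * dur)) / fs
    tone = chirp(t, f0=f0, t1=dur, f1=f1, method="linear")
    noise = 0.3 * np.random.default_rng(4).standard_normal(t.size) if contra_noise else np.zeros(t.size)
    left, right = (tone, noise) if ear == "left" else (noise, tone)
    return np.column_stack([left, right])

stimulus = monaural_fm_with_contra_noise(ear="right")  # FM tone right, noise left
print(stimulus.shape)
```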

  3. Auditory Responses of Infants

    Science.gov (United States)

    Watrous, Betty Springer; And Others

    1975-01-01

    Forty infants, 3- to 12-months-old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3 - 5 months, 6 - 8 months, and 9 - 12 months. (Author)

  4. Onset dominance in lateralization.

    Science.gov (United States)

    Freyman, R L; Zurek, P M; Balakrishnan, U; Chiang, Y C

    1997-03-01

    Saberi and Perrott [Acustica 81, 272-275 (1995)] found that the in-head lateralization of a relatively long-duration pulse train could be controlled by the interaural delay of the single pulse pair that occurs at onset. The present study examined this further, using an acoustic pointer measure of lateralization, with stimulus manipulations designed to determine conditions under which lateralization was consistent with the interaural onset delay. The present stimuli were wideband pulse trains, noise-burst trains, and inharmonic complexes, 250 ms in duration, chosen for the ease with which interaural delays and correlations of select temporal segments of the stimulus could be manipulated. The stimulus factors studied were the periodicity of the ongoing part of the signal as well as the multiplicity and ambiguity of interaural delays. The results, in general, showed that the interaural onset delay controlled lateralization when the steady state binaural cues were relatively weak, either because the spectral components were only sparsely distributed across frequency or because the interaural time delays were ambiguous. Onset dominance can be disrupted by sudden stimulus changes within the train, and several examples of such changes are described. Individual subjects showed strong left-right asymmetries in onset effectiveness. The results have implications for understanding how onset and ongoing interaural delay cues contribute to the location estimates formed by the binaural auditory system.
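    To make the onset-delay manipulation concrete, the following sketch (assuming NumPy; not the authors' stimulus code, and all parameter values are illustrative) builds a 250-ms binaural click train in which only the first click pair carries an interaural time delay, the cue reported to dominate lateralization.

        import numpy as np

        # Minimal sketch (not the authors' stimulus code): a 250-ms binaural click
        # train in which only the first click pair carries an interaural delay.
        fs = 44100                      # sample rate (Hz), assumed
        train_dur = 0.250               # total duration (s)
        click_period = 0.010            # 100 clicks/s ongoing rate, assumed
        onset_itd = 500e-6              # 500-microsecond interaural onset delay, assumed

        n = int(train_dur * fs)
        left = np.zeros(n)
        right = np.zeros(n)
        for i, t in enumerate(np.arange(0.0, train_dur, click_period)):
            left[int(t * fs)] = 1.0
            # delay only the very first right-ear click to create an onset ITD
            delay = onset_itd if i == 0 else 0.0
            right[int((t + delay) * fs)] = 1.0

        stimulus = np.stack([left, right], axis=1)  # 2-channel click train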

  5. A Stem Cell-Seeded Nanofibrous Scaffold for Auditory Nerve Replacement

    Science.gov (United States)

    2013-10-01

    [Report form and figure residue; only recoverable information retained.] Figure caption fragment: biopolymer scaffold within the internal auditory meatus (IAM) of the guinea pig; (A) the lateral wall of an intact guinea pig temporal bone is shown. Report details: grant number W81XWH-12-1-0492; author: Betty Diamond.

  6. How Do Batters Use Visual, Auditory, and Tactile Information about the Success of a Baseball Swing?

    Science.gov (United States)

    Gray, Rob

    2009-01-01

    Bat/ball contact produces visual (the ball leaving the bat), auditory (the "crack" of the bat), and tactile (bat vibration) feedback about the success of the swing. We used a batting simulation to investigate how college baseball players use visual, tactile, and auditory feedback. In Experiment 1, swing accuracy (i.e., the lateral separation…

  7. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  8. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new si

  9. Virtual Auditory Displays

    Science.gov (United States)

    2000-01-01

    [Report index and excerpt residue; only recoverable information retained.] Keywords: timbre, intensity, distance, room modeling, radio communication. Virtual Environments Handbook, Chapter 4: Virtual Auditory Displays, Russell D... Excerpt fragments: for the musical note "A" as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or... and complexity are pitch, loudness, and timbre respectively. This distinction between physical and perceptual measures of sound properties is an

  10. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  11. Auditory-model-based Feature Extraction Method for Mechanical Faults Diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Yungong; ZHANG Jinping; DAI Li; ZHANG Zhanyi; LIU Jie

    2010-01-01

    It is well known that the human auditory system possesses remarkable capabilities to analyze and identify signals. Therefore, it would be significant to build an auditory model based on the mechanisms of the human auditory system, which may improve mechanical signal analysis and enrich the methods of mechanical fault feature extraction. However, the existing methods are all based on explicit senses of mathematics or physics, and have shortcomings in distinguishing different faults, in stability, and in suppressing disturbance noise. To improve the performance of feature extraction, an auditory model, the early auditory (EA) model, is introduced for the first time. This auditory model transforms a time-domain signal into an auditory spectrum via band-pass filtering, nonlinear compression, and lateral inhibition, simulating the principles of the human auditory system. The EA model is developed with the Gammatone filterbank as the basilar membrane. According to the characteristics of vibration signals, a method is proposed for determining the parameters of the inner hair cell model of the EA model. The performance of the EA model is evaluated through experiments on four rotor faults, including misalignment, rotor-to-stator rubbing, oil film whirl, and pedestal looseness. The results show that the auditory spectrum, the output of the EA model, can effectively distinguish different faults with satisfactory stability and has the ability to suppress disturbance noise. It is therefore feasible to apply the auditory model, as a new method, to feature extraction for mechanical fault diagnosis.
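    As a rough sketch of the processing chain described above (a reconstruction under stated assumptions, not the authors' implementation), the following NumPy code band-pass filters a signal with gammatone-like filters, applies a compressive nonlinearity, and sharpens the resulting spectrum by lateral inhibition.

        import numpy as np

        # Rough sketch of an EA-style front end (a reconstruction, not the authors' code):
        # gammatone-like band-pass filtering, nonlinear compression, and lateral
        # inhibition across neighbouring frequency channels.
        def gammatone_ir(fc, fs, dur=0.05, order=4):
            """Impulse response of a simple gammatone filter centred at fc (Hz)."""
            t = np.arange(0, dur, 1.0 / fs)
            erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)      # equivalent rectangular bandwidth
            b = 1.019 * erb
            return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

        def auditory_spectrum(x, fs, centre_freqs):
            channels = []
            for fc in centre_freqs:
                y = np.convolve(x, gammatone_ir(fc, fs), mode="same")
                env = np.abs(y)                           # crude envelope
                channels.append(np.cbrt(np.mean(env)))    # cube-root compression per channel
            spec = np.array(channels)
            # lateral inhibition: each channel is suppressed by its neighbours
            inhibited = spec - 0.5 * (np.roll(spec, 1) + np.roll(spec, -1))
            return np.clip(inhibited, 0.0, None)

        fs = 8000
        t = np.arange(0, 0.2, 1.0 / fs)
        signal = np.sin(2 * np.pi * 500 * t)              # toy "vibration" signal
        print(auditory_spectrum(signal, fs, centre_freqs=[250, 500, 1000, 2000]))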

  12. Lateral Concepts

    Directory of Open Access Journals (Sweden)

    Christopher Gad

    2016-06-01

    Full Text Available This essay discusses the complex relation between the knowledges and practices of the researcher and his/her informants in terms of lateral concepts. The starting point is that it is not the prerogative of the (STS) scholar to conceptualize the world; all our "informants" do it too. This creates the possibility of enriching our own conceptual repertoires by letting them be inflected by the concepts of those we study. In a broad sense, the lateral means that there is a many-to-many relation between domains of knowledge and practice. However, each specific case of the lateral is necessarily immanent to a particular empirical setting and form of inquiry. In this sense lateral concepts are radically empirical, since they locate concepts within the field. To clarify the meaning and stakes of lateral concepts, we first make a contrast between lateral anthropology and Latour's notion of infra-reflexivity. We end with a brief illustration and discussion of how lateral conceptualization can re-orient STS modes of inquiry, and why this matters.

  13. Auditory Neuropathy: Findings of Behavioral, Physiological and Neurophysiological Tests

    Directory of Open Access Journals (Sweden)

    Mohammad Farhadi

    2006-12-01

    Full Text Available Background and Aim: Auditory neuropathy (AN) can be diagnosed by an abnormal auditory brainstem response (ABR) in the presence of normal cochlear microphonics (CM) and otoacoustic emissions (OAEs). The aim of this study was to investigate the ABR and other electrodiagnostic test results of 6 patients with suspected AN and problems in speech recognition. Materials and Methods: This cross-sectional study was conducted on 6 AN patients of different ages, evaluated by pure tone audiometry, speech discrimination score (SDS), immittance audiometry, electrocochleography, ABR, middle latency response (MLR), late latency response (LLR), and OAEs. Results: Behavioral pure tone audiometric tests showed moderate to profound hearing loss. SDS was disproportionately poor relative to pure tone thresholds. All patients had normal tympanograms but absent acoustic reflexes. CMs and OAEs were within normal limits. There was no contralateral suppression of OAEs. None of the cases had a normal ABR or MLR, although LLR was recorded in 4. Conclusion: All patients in this study are typical cases of auditory neuropathy. Despite abnormal input, the LLR remains normal, which indicates differences among auditory evoked potentials in the neural synchrony they require. These findings suggest that the auditory cortex may play a role in regulating the presentation of deficient signals along the auditory pathways in primary steps.

  14. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  15. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  16. Development of auditory localization accuracy and auditory spatial discrimination in children and adolescents.

    Science.gov (United States)

    Kühnle, S; Ludwig, A A; Meuret, S; Küttner, C; Witte, C; Scholbach, J; Fuchs, M; Rübsamen, R

    2013-01-01

    The present study investigated the development of two parameters of spatial acoustic perception in children and adolescents with normal hearing, aged 6-18 years. Auditory localization accuracy was quantified by means of a sound source identification task and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests, thereby separately addressing auditory processing based on interaural time and intensity differences. Setup consisted of 47 loudspeakers mounted in the frontal azimuthal hemifield, ranging from 90° left to 90° right (-90°, +90°). Target signals were presented from 8 loudspeaker positions in the left and right hemifields (±4°, ±30°, ±60° and ±90°). Localization accuracy and spatial discrimination acuity showed different developmental courses. Localization accuracy remained stable from the age of 6 onwards. In contrast, MAA thresholds and interindividual variability of spatial discrimination decreased significantly with increasing age. Across all age groups, localization was most accurate and MAA thresholds were lower for frontal than for lateral sound sources, and for low-frequency compared to high-frequency noise bursts. The study also shows better performance in spatial hearing based on interaural time differences rather than on intensity differences throughout development. These findings confirm that specific aspects of central auditory processing show continuous development during childhood up to adolescence.

  17. Contextual modulation of primary visual cortex by auditory signals

    Science.gov (United States)

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  18. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'.

  19. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory...... parameters, highlighting harmonious and balanced qualities while criticizing the noisy and cacophonous qualities of modern urban settings. This paper presents a reaffirmation of Schafer's central methodological claim: that environments can be analyzed through their sound, but offers considerations on the role...... musicalized through electro-acoustic equipment installed in shops, shopping streets, transit areas etc. Urban noise no longer acts only as disturbance, but also structures and shapes the places and spaces in which urban life unfolds. Based on research done in Japanese shopping streets and in Copenhagen the paper...

  20. Neural dynamics of phonological processing in the dorsal auditory stream.

    Science.gov (United States)

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.

  1. Tactile stimulation and hemispheric asymmetries modulate auditory perception and neural responses in primary auditory cortex.

    Science.gov (United States)

    Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T

    2013-10-01

    Although multisensory integration has been an important area of recent research, most studies focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior as effectively which we studied here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI-signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD-responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity.

  2. Effect of stimulus hemifield on free-field auditory saltation.

    Science.gov (United States)

    Ishigami, Yoko; Phillips, Dennis P

    2008-07-01

    Auditory saltation is the orderly misperception of the spatial location of repetitive click stimuli emitted from two successive locations when the inter-click intervals (ICIs) are sufficiently short. The clicks are perceived as originating not only from the actual source locations, but also from locations between them. In two tasks, the present experiment compared free-field auditory saltation for 90 degrees excursions centered in the frontal, rear, left and right acoustic hemifields, by measuring the ICI at which subjects report 50% illusion strength (subjective task) and the ICI at which subjects could not distinguish real motion from saltation (objective task). A comparison of the saltation illusion for excursions spanning the midline (i.e. for frontal or rear hemifields) with that for stimuli in the lateral hemifields (left or right) revealed that the illusion was weaker for the midline-straddling conditions (i.e. the illusion was restricted to shorter ICIs). This may reflect the contribution of two perceptual channels to the task in the midline conditions (as opposed to one in the lateral hemifield conditions), or the fact that the temporal dynamics of localization differ between the midline and lateral hemifield conditions. A subsidiary comparison of saltation supported in the left and right auditory hemifields, and therefore by the right and left auditory forebrains, revealed no difference.

  3. Behind the Scenes of Auditory Perception

    OpenAIRE

    Shamma, Shihab A.; Micheyl, Christophe

    2010-01-01

    “Auditory scenes” often contain contributions from multiple acoustic sources. These are usually heard as separate auditory “streams”, which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the last two years indicate that both cortical and sub-cortical processes contribute to the formation of auditory streams, and they raise importan...

  4. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  5. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The computational model is based on psychological and physiological knowledge and digital signal processing methods. For each stage of the hearing perception system, there is a corresponding computational model to simulate its function. Based on this model, speech features are extracted. In each stage, features at different levels are extracted. Further processing of the primary auditory spectrum, based on lateral inhibition, is proposed to extract more robust speech features. All these features can be regarded as internal representations of the speech stimulus in the hearing system. Robust speech recognition experiments were conducted to test the robustness of the features. Results show that the representations based on the proposed computational auditory model are robust representations for speech signals.
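    The lateral-inhibition step mentioned above can be illustrated with a minimal example (an assumption about the general idea, not the authors' code): each frequency channel is suppressed by the average of its neighbours, which sharpens spectral peaks and attenuates smoothly varying background energy.

        import numpy as np

        # Minimal illustration (not the authors' code): lateral inhibition sharpens
        # spectral peaks by subtracting a local average of neighbouring frequency
        # channels from each channel, frame by frame.
        def lateral_inhibition(spectrogram, strength=0.5, radius=1):
            """spectrogram: array of shape (n_channels, n_frames)."""
            kernel = np.ones(2 * radius + 1)
            kernel[radius] = 0.0                      # exclude the channel itself
            kernel /= kernel.sum()
            out = np.empty_like(spectrogram)
            for f in range(spectrogram.shape[1]):
                neighbours = np.convolve(spectrogram[:, f], kernel, mode="same")
                out[:, f] = np.clip(spectrogram[:, f] - strength * neighbours, 0.0, None)
            return out

        toy = np.array([[1.0, 1.0], [3.0, 2.0], [1.0, 1.0]])   # one spectral peak
        print(lateral_inhibition(toy))                          # peak preserved, flanks suppressed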

  6. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  7. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  8. Lateral Mixing

    Science.gov (United States)

    2012-11-08

    being made on their analysis. A process we became very curious about was the separation of tendrils of warm salty water from the north wall (figure 7)... structure, and to remove the effect of internal waves by mapping this structure onto isopycnals. This has been very successful in elucidating lateral... we passed through the same water on multiple passes, and that changes in the horizontal structure of the water mass should be readily apparent from

  9. CT findings of the osteoma of the external auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ha Young; Song, Chang Joon; Yoon, Chung Dae; Park, Mi Hyun; Shin, Byung Seok [Chungnam National University, School of Medicine, Daejeon (Korea, Republic of)

    2006-07-15

    We wanted to report the CT imaging findings of osteoma of the external auditory canal. Temporal bone CT scanning was performed on eight patients (4 males and 4 females, aged between 8 and 41 years) with pathologically proven osteoma of the external auditory canal after operation, and the CT findings were retrospectively reviewed. We analyzed not only the size, shape, distribution and location of the osteomas, but also the relationship between the lesion and the tympanosquamous or tympanomastoid suture line, as well as the changes seen on the CT images of the patients who were able to undergo follow-up. All the osteomas of the external auditory canal were unilateral, solitary, pedunculated bony masses. In five patients the osteomas occurred on the left side, and in the other three patients they occurred on the right side. The average size of the osteomas was 0.6 cm, with the smallest being 0.5 cm and the largest 1.2 cm. Each lesion was located at the osteochondral junction in the terminal part of the osseous external ear canal. The stalk of the osteoma of the external auditory canal arose from the anteroinferior wall in five cases (63%), the anterosuperior wall (the tympanosquamous suture line) in two cases (25%), and the anterior wall in one case. The osteoma of the external auditory canal was of the compact form in five cases and the cancellous form in three cases. One case of the cancellous form had changed into the compact form 35 months later due to advanced ossification. Osteoma of the external auditory canal developed in a unilateral and solitary fashion. The characteristic imaging findings show that it is attached to the external auditory canal by its stalk. Contrary to common knowledge about its site of occurrence, the osteomas mostly arose from the tympanic wall, regardless of the tympanosquamous or tympanomastoid suture lines.

  10. Training-induced plasticity of auditory localization in adult mammals.

    Directory of Open Access Journals (Sweden)

    Oliver Kacelnik

    2006-04-01

    Full Text Available Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired.

  11. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  12. Exploration of auditory P50 gating in schizophrenia by way of difference waves

    DEFF Research Database (Denmark)

    Arnfred, Sidse M

    2006-01-01

    ABSTRACT: Electroencephalographic measures of information processing encompass both mid-latency evoked potentials like the pre-attentive auditory P50 potential and a host of later more cognitive components like P300 and N400. Difference waves have mostly been employed in studies of later event...

  13. Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.

    Directory of Open Access Journals (Sweden)

    Francisco Jose Alvarez

    Full Text Available Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study was to study the effect of perinatal asphyxia on the auditory pathway by recording auditory brainstem responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1.3-day-old piglets by clamping both carotid arteries for 30 minutes with vascular occluders and lowering the fraction of inspired oxygen. We compared the auditory brainstem responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during the 6 h after the HI injury. Auditory brainstem responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.

  14. Central projections of auditory receptor neurons of crickets.

    Science.gov (United States)

    Imaizumi, Kazuo; Pollack, Gerald S

    2005-12-19

    We describe the central projections of physiologically characterized auditory receptor neurons of crickets as revealed by confocal microscopy. Receptors tuned to ultrasonic frequencies (similar to those produced by echolocating, insectivorous bats), to a mid-range of frequencies, and a subset of those tuned to low, cricket-like frequencies have similar projections, terminating medially within the auditory neuropile. Quantitative analysis shows that despite the general similarity of these projections they are tonotopic, with receptors tuned to lower frequencies terminating more medially. Another subset of cricket-song-tuned receptors projects more laterally and posteriorly than the other types. Double-fills of receptors and identified interneurons show that the three medially projecting receptor types are anatomically well positioned to provide monosynaptic input to interneurons that relay auditory information to the brain and to interneurons that modify this ascending information. The more laterally and posteriorly branching receptor type may not interact directly with this ascending pathway, but is well positioned to provide direct input to an interneuron that carries auditory information to more posterior ganglia. These results suggest that information about cricket song is segregated into functionally different pathways as early as the level of receptor neurons. Ultrasound-tuned and mid-frequency tuned receptors have approximately twice as many varicosities, which are sites of transmitter release, per receptor as either anatomical type of cricket-song-tuned receptor. This may compensate in part for the numerical under-representation of these receptor types.

  15. Present and past: Can writing abilities in school children be associated with their auditory discrimination capacities in infancy?

    Science.gov (United States)

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Friederici, Angela D

    2015-12-01

    Literacy acquisition is highly associated with auditory processing abilities, such as auditory discrimination. The event-related potential Mismatch Response (MMR) is an indicator of cortical auditory discrimination abilities, and it has been found to be reduced in individuals with reading and writing impairments and also in infants at risk for these impairments. The goal of the present study was to analyze the relationship between auditory speech discrimination in infancy and writing abilities at school age within subjects, and to determine when auditory speech discrimination differences, relevant for later writing abilities, start to develop. We analyzed the MMR registered in response to natural syllables in German children with and without writing problems at two points during development, that is, at school age and in infancy, namely at ages 1 month and 5 months. We observed MMR-related auditory discrimination differences between infants with and without later writing problems, starting to develop at age 5 months, an age when infants begin to establish language-specific phoneme representations. At school age, these children with and without writing problems also showed auditory discrimination differences, reflected in the MMR, confirming a relationship between writing and auditory speech processing skills. Thus, writing problems at school age are, at least partly, grounded in auditory discrimination problems developing already during the first months of life.

  16. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
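    One standard way to quantify such phase tracking is inter-trial phase coherence in a chosen frequency band; the sketch below (assuming NumPy and SciPy, with synthetic data) illustrates that generic computation and is not the study's MEG analysis code.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        # Hedged sketch of inter-trial phase coherence in a chosen band;
        # the study's actual MEG analysis may differ.
        def phase_coherence(trials, fs, band):
            """trials: array (n_trials, n_samples); band: (low_hz, high_hz)."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, trials, axis=1)
            phases = np.angle(hilbert(filtered, axis=1))
            # length of the mean unit phase vector across trials, per time point
            return np.abs(np.mean(np.exp(1j * phases), axis=0))

        fs = 250
        t = np.arange(0, 2.0, 1.0 / fs)
        # toy data: 20 trials with a shared 5-Hz (theta) component plus noise
        trials = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(20, t.size)
        itpc_theta = phase_coherence(trials, fs, band=(4, 8))
        print(itpc_theta.mean())   # high values indicate consistent theta phase tracking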

  17. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    Full Text Available We have recently demonstrated that alternating left-right sound sources induce motion perception in static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception, Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in the situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  18. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Tae, Woo Suk [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Yakunina, Natalia; Nam, Eui-Cheol [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Otolaryngology, School of Medicine, Chuncheon, Kangwon-do (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kim, Sam Soo [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Radiology, School of Medicine, Chuncheon (Korea, Republic of)

    2014-07-15

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)

  19. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  20. Auditory Neuropathy - A Case of Auditory Neuropathy after Hyperbilirubinemia

    Directory of Open Access Journals (Sweden)

    Maliheh Mazaher Yazdi

    2007-12-01

    Full Text Available Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal, but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAEs but exhibit an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, when different methods could identify the discrepancy between an absent ABR and measurable hearing thresholds. Speech understanding difficulties are worse than can be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy in a child with normal hearing after hyperbilirubinemia, over a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, the specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to the nerve itself. People with auditory neuropathy have OAE responses but an absent ABR, and a hearing loss threshold that can be permanent, get worse, or get better.

  1. Diffusion tensor imaging and MR morphometry of the central auditory pathway and auditory cortex in aging.

    Science.gov (United States)

    Profant, O; Škoch, A; Balogová, Z; Tintěra, J; Hlinka, J; Syka, J

    2014-02-28

    Age-related hearing loss (presbycusis) is caused mainly by the hypofunction of the inner ear, but recent findings point also toward a central component of presbycusis. We used MR morphometry and diffusion tensor imaging (DTI) with a 3T MR system with the aim of studying the state of the central auditory system in a group of elderly subjects (>65 years) with mild presbycusis, in a group of elderly subjects with expressed presbycusis, and in young controls. Cortical reconstruction, volumetric segmentation and auditory pathway tractography were performed. Three parameters were evaluated by morphometry: the volume of the gray matter, the surface area of the gyrus and the thickness of the cortex. In all experimental groups the surface area and gray matter volume were larger on the left side in Heschl's gyrus and the planum temporale and slightly larger in the gyrus frontalis superior, whereas they were larger on the right side in the primary visual cortex. Almost all of the measured parameters were significantly smaller in the elderly subjects in Heschl's gyrus, the planum temporale and the gyrus frontalis superior. Aging did not change the side asymmetry (laterality) of the gyri. In the central part of the auditory pathway above the inferior colliculus, a trend toward an effect of aging was present in the axial diffusion (L1) variable of DTI, with increased values observed in elderly subjects. A trend toward a decrease of L1 on the left side, more pronounced in the elderly groups, was observed. The effect of hearing loss was present in subjects with expressed presbycusis as a trend toward an increase of the radial diffusion components (L2L3) in the white matter under Heschl's gyrus. These results suggest that, in addition to peripheral changes, changes in the central part of the auditory system are also present in elderly subjects; however, the extent of hearing loss does not play a significant role in the central changes.
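    For readers unfamiliar with the DTI variables mentioned above, axial diffusivity (L1) and radial diffusivity (from L2 and L3) are conventionally derived from the eigenvalues of the diffusion tensor as sketched below; this is a generic illustration, not the study's processing pipeline.

        import numpy as np

        # Illustrative only: axial diffusivity, radial diffusivity, mean diffusivity,
        # and fractional anisotropy from the three diffusion-tensor eigenvalues.
        def axial_radial_fa(l1, l2, l3):
            ad = l1                               # axial diffusivity (largest eigenvalue)
            rd = (l2 + l3) / 2.0                  # radial diffusivity
            md = (l1 + l2 + l3) / 3.0             # mean diffusivity
            fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                         / (l1 ** 2 + l2 ** 2 + l3 ** 2))
            return ad, rd, fa

        # toy eigenvalues in mm^2/s, typical order of magnitude for white matter
        print(axial_radial_fa(1.5e-3, 0.4e-3, 0.3e-3))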

  2. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... CAPD often have trouble maintaining attention, although health, motivation, and attitude also can play a role. Auditory ... programs. Several computer-assisted programs are geared toward children with APD. They mainly help the brain do ...

  3. Differential Expression of Phosphorylated Mitogen-Activated Protein Kinase (pMAPK) in the Lateral Amygdala of Mice Selectively Bred for High and Low Fear

    Science.gov (United States)

    2013-07-02

    stimulus and a nociceptive unconditioned foot shock stimulus converge in the lateral amygdala (LA) via auditory thalamus and cortex and somatosensory...shows how an auditory conditioned stimulus and a nociceptive unconditioned foot shock stimulus converge in the lateral amygdala (LA) via auditory...the US is noxious or mildly painful . Generally, in vertebrates, the US can be as simple as a puff of air into the face or a brief electric shock

  4. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.

  5. Thalamic and parietal brain morphology predicts auditory category learning.

    Science.gov (United States)

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties.
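    A minimal sketch of how listening strategies might be assessed with logistic regression, as mentioned above (assuming scikit-learn; data and variable names are illustrative, not the study's): regressing each listener's categorization responses on the spectral and duration cue values of the stimuli reveals which cue dominates that listener's decisions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hedged sketch, not the study's analysis code.
        rng = np.random.default_rng(0)
        n_trials = 200
        spectral_cue = rng.uniform(-1, 1, n_trials)
        duration_cue = rng.uniform(-1, 1, n_trials)
        # a listener who has switched to the duration cue after spectral degradation
        responses = (duration_cue + 0.1 * rng.standard_normal(n_trials) > 0).astype(int)

        X = np.column_stack([spectral_cue, duration_cue])
        model = LogisticRegression().fit(X, responses)
        print(model.coef_)   # the duration-cue weight should dominate for this listener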

  6. Prediction of auditory and visual p300 brain-computer interface aptitude.

    Directory of Open Access Journals (Sweden)

    Sebastian Halder

    Full Text Available OBJECTIVE: Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor-impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball. METHODS: Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude. RESULTS: Correlation between auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy. CONCLUSIONS: Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and highly in a visual P300 BCI. The predictor will allow for faster paradigm selection. SIGNIFICANCE: Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population.
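
    A minimal sketch of the kind of predictor analysis reported here, correlating an oddball ERP feature (e.g., N2 amplitude) with later P300-BCI spelling accuracy; the arrays are placeholder values, one entry per participant, not data from the study.

```python
# Illustrative correlation between an auditory-oddball ERP feature and BCI accuracy.
import numpy as np
from scipy.stats import pearsonr

n2_amplitude = np.array([-4.1, -6.3, -2.8, -7.0, -5.5])   # microvolts (made-up)
bci_accuracy = np.array([0.62, 0.85, 0.55, 0.93, 0.78])   # proportion of correct selections

r, p = pearsonr(n2_amplitude, bci_accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")
```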

  7. Developmental evaluation of atypical auditory sampling in dyslexia: Functional and structural evidence.

    Science.gov (United States)

    Lizarazu, Mikel; Lallier, Marie; Molinaro, Nicola; Bourguignon, Mathieu; Paz-Alonso, Pedro M; Lerma-Usabiaga, Garikoitz; Carreiras, Manuel

    2015-12-01

    Whether phonological deficits in developmental dyslexia are associated with impaired neural sampling of auditory information at either syllabic or phonemic rates is still under debate. In addition, whereas neuroanatomical alterations in auditory regions have been documented in dyslexic readers, whether and how these structural anomalies are linked to auditory sampling and reading deficits remains poorly understood. In this study, we measured auditory neural synchronization at different frequencies corresponding to relevant phonological spectral components of speech in children and adults with and without dyslexia, using magnetoencephalography. Furthermore, structural MRI was used to estimate cortical thickness of the auditory cortex of participants. Dyslexics showed atypical brain synchronization at both syllabic (slow) and phonemic (fast) rates. Interestingly, while a left hemispheric asymmetry in cortical thickness was functionally related to a stronger left hemispheric lateralization of neural synchronization to stimuli presented at the phonemic rate in skilled readers, the same anatomical index in dyslexics was related to a stronger right hemispheric dominance for neural synchronization to syllabic-rate auditory stimuli. These data suggest that the acoustic sampling deficit in developmental dyslexia might be linked to an atypical specialization of the auditory cortex to both low and high frequency amplitude modulations.
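
    Hemispheric asymmetries of the kind reported here are commonly summarized with a normalized lateralization index; the sketch below shows one standard form, (L - R) / (L + R), with made-up numbers, and may differ from the exact index used in the paper.

```python
# Generic lateralization index: positive values indicate leftward dominance,
# negative values rightward dominance. Inputs could be synchronization strength
# (e.g., coherence at the phonemic rate) or cortical thickness per hemisphere.
def lateralization_index(left, right):
    return (left - right) / (left + right)

print(lateralization_index(left=0.32, right=0.21))   # leftward (e.g., skilled readers)
print(lateralization_index(left=0.18, right=0.27))   # rightward (e.g., syllabic-rate dominance)
```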

  8. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input

    Science.gov (United States)

    Happel, Max F. K.; Ohl, Frank W.

    2017-01-01

    Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062

  9. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  10. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a…

  11. Probability and Surprisal in Auditory Comprehension of Morphologically Complex Words

    DEFF Research Database (Denmark)

    Balling, Laura Winther; Baayen, R. Harald

    2012-01-01

    Two auditory lexical decision experiments document for morphologically complex words two points at which the probability of a target word given the evidence shifts dramatically. The first point is reached when morphologically unrelated competitors are no longer compatible with the evidence. … Adapting terminology from Marslen-Wilson (1984), we refer to this as the word's initial uniqueness point (UP1). The second point is the complex uniqueness point (CUP) introduced by Balling and Baayen (2008), at which morphologically related competitors become incompatible with the input. Later initial … in the course of the word co-determines response latencies. The presence of effects of surprisal, both at the initial uniqueness point of complex words, and cumulatively throughout the word, challenges the Shortlist B model of Norris and McQueen (2008), and suggests that a Bayesian approach to auditory …
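
    Surprisal, the measure behind the cumulative effects described above, is simply the negative log probability of each incoming portion of the word given the evidence so far; a tiny illustration with made-up probabilities follows.

```python
# Surprisal of each segment given the preceding input, and its running sum.
import math

def surprisal(p):
    return -math.log2(p)                      # bits

segment_probs = [0.6, 0.25, 0.9, 0.05]        # P(segment | evidence so far), illustrative
print([round(surprisal(p), 2) for p in segment_probs])
print(round(sum(surprisal(p) for p in segment_probs), 2))   # cumulative surprisal
```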

  12. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. Their findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  13. The impact of severity of hypertension on auditory brainstem responses

    Directory of Open Access Journals (Sweden)

    Gurdev Lal Goyal

    2014-07-01

    Full Text Available Background: Auditory brainstem response is an objective electrophysiological method for assessing the auditory pathways from the auditory nerve to the brainstem. The aim of this study was to correlate and to assess the degree of involvement of peripheral and central regions of brainstem auditory pathways with increasing severity of hypertension, among patients with essential hypertension. Method: This study was conducted on 50 healthy age- and sex-matched controls (Group I) and 50 hypertensive patients (Group II). The latter group was further sub-divided into Group IIa (Grade 1 hypertension), Group IIb (Grade 2 hypertension), and Group IIc (Grade 3 hypertension), as per WHO guidelines. These responses/potentials were recorded using electroencephalogram electrodes on a root-mean-square electromyography machine, EP MARC II (PC-based), and data were statistically compared between the various groups by way of one-way ANOVA. The parameters used for analysis were the absolute latencies of Waves I through V, interpeak latencies (IPLs), and the amplitude ratio of Wave V/I. Result: The absolute latency of Wave I was observed to be significantly increased in Group IIa and IIb hypertensives, while Wave V absolute latency was highly significantly prolonged among Group IIb and IIc, as compared to that of the normal control group. All the hypertensives, that is, Group IIa, IIb, and IIc patients, were found to have highly significantly prolonged III-V IPL as compared to that of normal healthy controls. Further, intergroup comparison among hypertensive patients revealed a significant prolongation of Wave V absolute latency and III-V IPL in Group IIb and IIc patients as compared to Group IIa patients. These findings suggest a sensory deficit along with synaptic delays across the auditory pathways in all the hypertensives, the deficit more markedly affecting the auditory processing time at the pons-to-midbrain (IPL III-V) region of the auditory pathways among Grade 2 and 3 hypertensives.
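
    The ABR parameters analysed here (interpeak latencies and the Wave V/I amplitude ratio) are simple differences and ratios of the measured wave values; a small worked example with made-up numbers is shown below.

```python
# Interpeak latencies (IPLs) and the V/I amplitude ratio from example ABR values.
latencies_ms = {"I": 1.6, "III": 3.8, "V": 5.9}     # illustrative absolute latencies
amplitudes_uv = {"I": 0.30, "V": 0.45}              # illustrative amplitudes

ipl_I_III = latencies_ms["III"] - latencies_ms["I"]
ipl_III_V = latencies_ms["V"] - latencies_ms["III"]   # pons-to-midbrain transmission time
ipl_I_V = latencies_ms["V"] - latencies_ms["I"]
ratio_V_I = amplitudes_uv["V"] / amplitudes_uv["I"]
print(ipl_I_III, ipl_III_V, ipl_I_V, round(ratio_V_I, 2))
```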

  14. Auditory Hallucinations Nomenclature and Classification

    NARCIS (Netherlands)

    Blom, Jan Dirk; Sommer, Iris E. C.

    2010-01-01

    Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an

  15. Nigel: A Severe Auditory Dyslexic

    Science.gov (United States)

    Cotterell, Gill

    1976-01-01

    Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen. (IM)

  16. The Comparative and Developmental Study of Auditory Information Processing in Autistic Adults.

    Science.gov (United States)

    Nakamura, Kenryu; And Others

    1986-01-01

    The study examined brain functions related to information processing in autistic adults using auditory evoked potentials (AEP) and missing stimulus potentials (MSP). Both nonautistic and autistic adults showed normal mature patterns and lateralities in AEP for music stimuli, but nonautistic children did not. Autistic adults showed matured patterns…

  17. Sound-sensitive neurons innervate the ventro-lateral protocerebrum of the heliothine moth brain

    DEFF Research Database (Denmark)

    Pfuhl, Gerit; Zhao, Xin Cheng; Ian, Elena

    2014-01-01

    … sound-sensitive neurons in the moth brain. During intracellular recordings from the lateral protocerebrum in the brain of three noctuid moth species, Heliothis virescens, Helicoverpa armigera and Helicoverpa assulta, we found an assembly of neurons responding to transient sound pulses of broad bandwidth. The majority … of the auditory neurons ascended from the ventral cord and ramified densely within the anterior region of the ventro-lateral protocerebrum. The physiological and morphological characteristics of these auditory neurons were similar. We detected one additional sound-sensitive neuron, a brain interneuron with its … soma positioned near the calyces of mushroom bodies and with numerous neuronal processes in the ventro-lateral protocerebrum. Mass-staining of ventral-cord neurons supported the assumption that the ventro-lateral region of the moth brain was the main target for the auditory projections ascending from …

  18. The Effect of Temporal Context on the Sustained Pitch Response in Human Auditory Cortex

    OpenAIRE

    Gutschalk, Alexander; Patterson, Roy D.; Scherg, Michael; Uppenkamp, Stefan; Rupp, André

    2006-01-01

    Recent neuroimaging studies have shown that activity in lateral Heschl’s gyrus covaries specifically with the strength of musical pitch. Pitch strength is important for the perceptual distinctiveness of an acoustic event, but in complex auditory scenes, the distinctiveness of an event also depends on its context. In this magnetoencephalography study, we evaluate how temporal context influences the sustained pitch response (SPR) in lateral Heschl’s gyrus. In 2 sequences of continuously alterna...

  19. Modulation of auditory brainstem responses by serotonin and specific serotonin receptors.

    Science.gov (United States)

    Papesh, Melissa A; Hurley, Laura M

    2016-02-01

    The neuromodulator serotonin is found throughout the auditory system from the cochlea to the cortex. Although effects of serotonin have been reported at the level of single neurons in many brainstem nuclei, how these effects correspond to more integrated measures of auditory processing has not been well-explored. In the present study, we aimed to characterize the effects of serotonin on far-field auditory brainstem responses (ABR) across a wide range of stimulus frequencies and intensities. Using a mouse model, we investigated the consequences of systemic serotonin depletion, as well as the selective stimulation and suppression of the 5-HT1 and 5-HT2 receptors, on ABR latency and amplitude. Stimuli included tone pips spanning four octaves presented over a forty dB range. Depletion of serotonin reduced the ABR latencies in Wave II and later waves, suggesting that serotonergic effects occur as early as the cochlear nucleus. Further, agonists and antagonists of specific serotonergic receptors had different profiles of effects on ABR latencies and amplitudes across waves and frequencies, suggestive of distinct effects of these agents on auditory processing. Finally, most serotonergic effects were more pronounced at lower ABR frequencies, suggesting larger or more directional modulation of low-frequency processing. This is the first study to describe the effects of serotonin on ABR responses across a wide range of stimulus frequencies and amplitudes, and it presents an important step in understanding how serotonergic modulation of auditory brainstem processing may contribute to modulation of auditory perception.

  20. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. For example, stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, represents a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  1. Auditory adaptation improves tactile frequency perception.

    Science.gov (United States)

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

  2. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can induce profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  3. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral-indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned>pseudo-random) tracked with greater rapid statistical learning than adaptation. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes.
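
    The brainstem sensitivity measure described here, the change in the cABR between the patterned and pseudo-randomized sequences, can be sketched as a per-subject difference that is then related to the behavioural learning score; the values below are illustrative, not data from the study.

```python
# Per-subject neural sensitivity: positive = enhancement (patterned > pseudo-random),
# negative = adaptation. Relating it to a behavioural learning index is then a
# simple correlation. All numbers are placeholders.
import numpy as np

resp_patterned = np.array([0.52, 0.48, 0.61, 0.40])
resp_pseudorandom = np.array([0.45, 0.50, 0.49, 0.44])
learning_score = np.array([0.70, 0.35, 0.90, 0.30])

sensitivity = resp_patterned - resp_pseudorandom
print(sensitivity)
print(np.corrcoef(sensitivity, learning_score)[0, 1])
```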

  4. Intracranial Electrophysiology of Auditory Selective Attention Associated with Speech Classification Tasks

    Science.gov (United States)

    Nourski, Kirill V.; Steinschneider, Mitchell; Rhone, Ariane E.; Howard III, Matthew A.

    2017-01-01

    Auditory selective attention paradigms are powerful tools for elucidating the various stages of speech processing. This study examined electrocorticographic activation during target detection tasks within and beyond auditory cortex. Subjects were nine neurosurgical patients undergoing chronic invasive monitoring for treatment of medically refractory epilepsy. Four subjects had left hemisphere electrode coverage, four had right coverage and one had bilateral coverage. Stimuli were 300 ms complex tones or monosyllabic words, each spoken by a different male or female talker. Subjects were instructed to press a button whenever they heard a target corresponding to a specific stimulus category (e.g., tones, animals, numbers). High gamma (70–150 Hz) activity was simultaneously recorded from Heschl’s gyrus (HG), superior, middle temporal and supramarginal gyri (STG, MTG, SMG), as well as prefrontal cortex (PFC). Data analysis focused on: (1) task effects (non-target words in tone detection vs. semantic categorization task); and (2) target effects (words as target vs. non-target during semantic classification). Responses within posteromedial HG (auditory core cortex) were minimally modulated by task and target. Non-core auditory cortex (anterolateral HG and lateral STG) exhibited sensitivity to task, with a smaller proportion of sites showing target effects. Auditory-related areas (MTG and SMG) and PFC showed both target and, to a lesser extent, task effects, that occurred later than those in the auditory cortex. Significant task and target effects were more prominent in the left hemisphere than in the right. Findings demonstrate a hierarchical organization of speech processing during auditory selective attention. PMID:28119593
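
    High-gamma (70-150 Hz) activity of the kind analysed here is typically obtained by band-pass filtering each channel and taking the Hilbert envelope; the sketch below shows that generic pipeline with an assumed sampling rate and random placeholder data, not the authors' actual processing chain.

```python
# Generic high-gamma envelope extraction for one ECoG channel (assumed 1 kHz sampling).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
ecog = np.random.randn(t.size)                 # placeholder for a recorded channel

b, a = butter(4, [70.0 / (fs / 2), 150.0 / (fs / 2)], btype="bandpass")
high_gamma = filtfilt(b, a, ecog)              # 70-150 Hz band-passed signal
envelope = np.abs(hilbert(high_gamma))         # instantaneous high-gamma amplitude
print(envelope.mean())
```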

  5. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

    Full Text Available Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, and degree and type of hearing loss.

  6. Music training alters the course of adolescent auditory development.

    Science.gov (United States)

    Tierney, Adam T; Krizman, Jennifer; Kraus, Nina

    2015-08-11

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.

  7. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  8. Speech distortion measure based on auditory properties

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo; HU Xiulin; ZHANG Yunyu; ZHU Yaoting

    2000-01-01

    The Perceptual Spectrum Distortion (PSD) measure, based on human auditory properties, is presented to quantify speech distortion. The PSD measure calculates the speech distortion distance by simulating human auditory properties and converting the short-time speech power spectrum to an auditory perceptual spectrum. Preliminary simulation experiments comparing it with the Itakura measure have been carried out. The results show that the PSD measure is a preferable speech distortion measure and is more consistent with subjective assessment of speech quality.
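
    In the spirit of the PSD measure described above, the sketch below groups a short-time power spectrum into auditory-scale bands, applies a crude log compression, and takes a Euclidean distance between clean and degraded frames; the band edges and the compression are assumptions, since the paper's exact perceptual transform is not reproduced here.

```python
# Speculative sketch of a perceptual spectrum distance (not the published PSD algorithm).
import numpy as np

def perceptual_spectrum(power_spectrum, band_edges):
    bands = [power_spectrum[lo:hi].sum() for lo, hi in zip(band_edges[:-1], band_edges[1:])]
    return np.log10(np.asarray(bands) + 1e-10)          # crude loudness-like compression

def psd_distance(spec_a, spec_b, band_edges):
    return float(np.linalg.norm(perceptual_spectrum(spec_a, band_edges)
                                - perceptual_spectrum(spec_b, band_edges)))

band_edges = [0, 5, 12, 25, 50, 100, 128]                # illustrative bin groupings
clean = np.abs(np.random.randn(128)) ** 2                # stand-in short-time power spectra
degraded = clean + 0.3 * np.abs(np.random.randn(128)) ** 2
print(psd_distance(clean, degraded, band_edges))
```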

  9. Auditory evoked potentials and multiple sclerosis

    OpenAIRE

    Carla Gentile Matas; Sandro Luiz de Andrade Matas; Caroline Rondina Salzano de Oliveira; Isabela Crivellaro Gonçalves

    2010-01-01

    Multiple sclerosis (MS) is an inflammatory, demyelinating disease that can affect several areas of the central nervous system. Damage along the auditory pathway can alter its integrity significantly. Therefore, it is important to investigate the auditory pathway, from the brainstem to the cortex, in individuals with MS. OBJECTIVE: The aim of this study was to characterize auditory evoked potentials in adults with MS of the remittent-recurrent type. METHOD: The study comprised 25 individuals w...

  10. Sound lateralization in subjects with callosotomy, callosal agenesis, or hemispherectomy.

    Science.gov (United States)

    Hausmann, Markus; Corballis, Michael C; Fabri, Mara; Paggi, Aldo; Lewald, Jörg

    2005-10-01

    The question of whether there is a right-hemisphere dominance in the processing of auditory spatial information in human cortex as well as the role of the corpus callosum in spatial hearing functions is still a matter of debate. Here, we approached this issue by investigating two late-callosotomized subjects and one subject with agenesis of the corpus callosum, using a task of sound lateralization with variable interaural time differences. For comparison, three subjects with left or right hemispherectomy were also tested by employing identical methods. Besides a significant reduction in their acuity, subjects with total or partial section of the corpus callosum exhibited a considerable leftward bias of sound lateralization compared to normal controls. No such bias was found in the subject with callosal agenesis, but merely a marginal reduction of general acuity. Also, one subject with complete resection of the left cerebral cortex showed virtually normal performance, whereas another subject with left hemispherectomy and one subject with right hemispherectomy exhibited severe deficits, with almost total loss of sound-lateralization ability. The results obtained in subjects with callosotomy indicate that the integrity of the corpus callosum is not indispensable for preservation of sound-lateralization ability. On the other hand, transcallosal interhemispheric transfer of auditory information obviously plays a significant role in spatial hearing functions that depend on binaural cues. Moreover, these data are compatible with the general view of a dominance of the right cortical hemisphere in auditory space perception.

  11. Auditory responses and stimulus-specific adaptation in rat auditory cortex are preserved across NREM and REM sleep.

    Science.gov (United States)

    Nir, Yuval; Vyazovskiy, Vladyslav V; Cirelli, Chiara; Banks, Matthew I; Tononi, Giulio

    2015-05-01

    Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic "gate," which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, non-rapid eye movement (NREM) and rapid eye movement (REM) sleep (pairwise differences were not significant). We further compared responses between sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13-20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas.
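
    A common way of quantifying SSA strength (not necessarily the exact formulation used in this study) is a normalized contrast between responses to a tone when it is rare versus common; a value of roughly 0.13-0.20 would correspond to the 13-20% figure quoted above.

```python
# Normalized SSA index: 0 means no adaptation, larger values mean stronger
# preference for the rare (deviant) over the common (standard) tone.
def ssa_index(deviant_response, standard_response):
    return (deviant_response - standard_response) / (deviant_response + standard_response)

print(ssa_index(deviant_response=1.35, standard_response=1.0))   # about 0.15
```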

  12. Auditory Training and Its Effects upon the Auditory Discrimination and Reading Readiness of Kindergarten Children.

    Science.gov (United States)

    Cullen, Minga Mustard

    The purpose of this investigation was to evaluate the effects of a systematic auditory training program on the auditory discrimination ability and reading readiness of 55 white, middle/upper middle class kindergarten students. Following pretesting with the "Wepman Auditory Discrimination Test," "The Clymer-Barrett Prereading Battery," and the…

  13. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  14. Central auditory function of deafness genes.

    Science.gov (United States)

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  15. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

    Full Text Available Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim of exploring the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP (the two elderly) groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.

  16. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine ePecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
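
    The cross-correlational prediction measure referred to above can be sketched as the correlation between the listener's inter-tap intervals and the pacing sequence's inter-onset intervals at lag 0 (prediction) relative to lag 1 (tracking); the exact index used in the study may differ, and the series below are simulated.

```python
# Illustrative prediction-vs-tracking comparison for a paced tapping task.
import numpy as np

def lagged_corr(iti, ioi, lag):
    if lag == 0:
        return np.corrcoef(iti, ioi)[0, 1]
    return np.corrcoef(iti[lag:], ioi[:-lag])[0, 1]     # tap interval vs earlier pacing interval

rng = np.random.default_rng(1)
ioi = np.linspace(600, 500, 20) + rng.normal(0, 2, 20)   # gradually accelerating pacing (ms)
iti = ioi + rng.normal(0, 5, 20)                          # taps of a listener who predicts well

prediction = lagged_corr(iti, ioi, lag=0)
tracking = lagged_corr(iti, ioi, lag=1)
print(prediction, tracking)    # higher lag-0 than lag-1 correlation suggests prediction
```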

  17. Expectation and attention in hierarchical auditory prediction.

    Science.gov (United States)

    Chennu, Srivas; Noreika, Valdas; Gueorguiev, David; Blenkmann, Alejandro; Kochen, Silvia; Ibáñez, Agustín; Owen, Adrian M; Bekinschtein, Tristan A

    2013-07-03

    Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding.

  18. Auditory Rehabilitation in Rhesus Macaque Monkeys (Macaca mulatta) with Auditory Brainstem Implants

    Institute of Scientific and Technical Information of China (English)

    Zhen-Min Wang; Zhi-Jun Yang; Fu Zhao; Bo Wang; Xing-Chao Wang; Pei-Ran Qu; Pi-Nan Liu

    2015-01-01

    Background: Auditory brainstem implants (ABIs) have been used to treat deafness in patients with neurofibromatosis Type 2 and in nontumor patients. The lack of an appropriate animal model has limited the study of improving hearing rehabilitation with the device. This study aimed to establish an animal model of ABI in the adult rhesus macaque monkey (Macaca mulatta). Methods: Six adult rhesus macaque monkeys (M. mulatta) were included. Under general anesthesia, a multichannel ABI was implanted into the lateral recess of the fourth ventricle through a modified suboccipital-retrosigmoid (RS) approach. Electrical auditory brainstem response (EABR) waves were tested to ensure the optimal implant site. After the operation, EABR and computed tomography (CT) were used to test and verify the effectiveness via electrophysiology and anatomy, respectively. The subjects underwent behavioral observation for 6 months, and the postoperative EABR was tested every two weeks from the 1st month after implant surgery. Result: The implant surgery lasted an average of 5.2 h, and no monkey died or was sacrificed. The averaged latencies of peaks I, II and IV were 1.27, 2.34 and 3.98 ms, respectively, in the ABR. A one-peak EABR wave was elicited during the operation, and one- or two-peak waves were elicited during the postoperative period. The EABR wave latencies appeared to be constant under different stimulus intensities; however, the amplitudes increased as the stimulus increased within a certain range. Conclusions: It is feasible and safe to implant ABIs in rhesus macaque monkeys (M. mulatta) through a modified suboccipital RS approach, and EABR and CT are valid tools for animal model establishment. In addition, this model should be an appropriate animal model for the electrophysiological and behavioral study of rhesus macaque monkeys with ABIs.

  19. Autosomal recessive hereditary auditory neuropathy

    Institute of Scientific and Technical Information of China (English)

    王秋菊; 顾瑞; 曹菊阳

    2003-01-01

    Objectives: Auditory neuropathy (AN) is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses (ABRs) and normal cochlear outer hair cell function as measured by otoacoustic emissions (OAEs). Many risk factors are thought to be involved in its etiology and pathophysiology. Three Chinese pedigrees with familial AN are presented herein to demonstrate the involvement of genetic factors in AN etiology. Methods: Probands of the above-mentioned pedigrees, who had been diagnosed with AN, were evaluated and followed up in the Department of Otolaryngology Head and Neck Surgery, China PLA General Hospital. Their family members were studied and the pedigree diagrams were established. History of illness, physical examination, pure-tone audiometry, acoustic reflex, ABRs, and transient evoked and distortion-product otoacoustic emissions (TEOAEs and DPOAEs) were obtained from members of these families. DPOAE changes under the influence of contralateral sound stimuli were observed by presenting continuous white noise to the non-recording ear to examine the function of the auditory efferent system. Some subjects received vestibular caloric testing, computed tomography (CT) scan of the temporal bone, and electrocardiography (ECG) to exclude other possible neuropathy disorders. Results: In most affected subjects, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflexes and ABRs. As expected in AN, these subjects exhibited near-normal cochlear outer hair cell function as shown in TEOAE and DPOAE recordings. Pure-tone audiometry revealed hearing loss ranging from mild to severe in these patients. Autosomal recessive inheritance patterns were observed in the three families. In Pedigrees I and II, two affected brothers were found respectively, while in Pedigree III, two sisters were affected. All the patients were otherwise normal without…

  20. Auditory hallucinations in nonverbal quadriplegics.

    Science.gov (United States)

    Hamilton, J

    1985-11-01

    When a system for communicating with nonverbal, quadriplegic, institutionalized residents was developed, it was discovered that many were experiencing auditory hallucinations. Nine cases are presented in this study. The "voices" described have many similar characteristics, the primary one being that they give authoritarian commands that tell the residents how to behave and to which the residents feel compelled to respond. Both the relationship of this phenomenon to the theoretical work of Julian Jaynes and its effect on the lives of the residents are discussed.

  1. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  2. Mapping tonotopy in human auditory cortex

    NARCIS (Netherlands)

    van Dijk, Pim; Langers, Dave R M; Moore, BCJ; Patterson, RD; Winter, IM; Carlyon, RP; Gockel, HE

    2013-01-01

    Tonotopy is arguably the most prominent organizational principle in the auditory pathway. Nevertheless, the layout of tonotopic maps in humans is still debated. We present neuroimaging data that robustly identify multiple tonotopic maps in the bilateral auditory cortex. In contrast with some earlier

  3. Bilateral duplication of the internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu [Seoul National University College of Medicine, Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si (Korea); Koo, Ja-Won [Seoul National University College of Medicine, Department of Otolaryngology, Seoul National University Bundang Hospital, Seongnam-si (Korea)

    2007-10-15

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  4. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  5. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (mean age = 53.19 years)…

  6. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  7. Neural correlates of short-term memory in primate auditory cortex

    Directory of Open Access Journals (Sweden)

    James eBigelow

    2014-08-01

    Full Text Available Behaviorally-relevant sounds such as conspecific vocalizations are often available for only a brief amount of time; thus, goal-directed behavior frequently depends on auditory short-term memory (STM). Despite its ecological significance, the neural processes underlying auditory STM remain poorly understood. To investigate the role of the auditory cortex in STM, single- and multi-unit activity was recorded from the primary auditory cortex (A1) of two monkeys performing an auditory STM task using simple and complex sounds. Each trial consisted of a sample and test stimulus separated by a 5-s retention interval. A brief wait period followed the test stimulus, after which subjects pressed a button if the sounds were identical (match trials) or withheld button presses if they were different (nonmatch trials). A number of units exhibited significant changes in firing rate for portions of the retention interval, although these changes were rarely sustained. Instead, they were most frequently observed during the early and late portions of the retention interval, with inhibition being observed more frequently than excitation. At the population level, responses elicited on match trials were briefly suppressed early in the sound period relative to nonmatch trials. However, during the latter portion of the sound, firing rates increased significantly for match trials and remained elevated throughout the wait period. Related patterns of activity were observed in prior experiments from our lab in the dorsal temporal pole (dTP) and prefrontal cortex (PFC) of the same animals. The data suggest that early match suppression occurs in both A1 and the dTP, whereas later match enhancement occurs first in the PFC, followed by A1 and later in dTP. Because match enhancement occurs first in the PFC, we speculate that enhancement observed in A1 and dTP may reflect top-down feedback. Overall, our findings suggest that A1 forms part of the larger neural system recruited during auditory STM.

  8. Processing of location and pattern changes of natural sounds in the human auditory cortex.

    Science.gov (United States)

    Altmann, Christian F; Bledowski, Christoph; Wibral, Michael; Kaiser, Jochen

    2007-04-15

    Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, location, or both pattern+location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and anterior planum temporale. For location changes, significant increases of fMRI responses were observed in bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

  9. Speech perception as complex auditory categorization

    Science.gov (United States)

    Holt, Lori L.

    2002-05-01

    Despite a long and rich history of categorization research in cognitive psychology, very little work has addressed the issue of complex auditory category formation. This is especially unfortunate because the general underlying cognitive and perceptual mechanisms that guide auditory category formation are of great importance to understanding speech perception. I will discuss a new methodological approach to examining complex auditory category formation that specifically addresses issues relevant to speech perception. This approach utilizes novel nonspeech sound stimuli to gain full experimental control over listeners' history of experience. As such, the course of learning is readily measurable. Results from this methodology indicate that the structure and formation of auditory categories are a function of the statistical input distributions of sound that listeners hear, aspects of the operating characteristics of the auditory system, and characteristics of the perceptual categorization system. These results have important implications for phonetic acquisition and speech perception.
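
    The record above attributes category structure to the statistical input distributions of the sounds listeners hear. As a rough, hypothetical illustration of that idea (the acoustic dimension, means and spreads below are invented for illustration, not taken from the study), two nonspeech categories can be defined purely by sampling one acoustic dimension from two overlapping distributions; an ideal listener's category boundary then falls where the distributions cross.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical training set: two categories defined only by the statistics
        # of a single acoustic dimension (here, centre frequency in Hz).
        category_a = rng.normal(loc=800.0, scale=80.0, size=100)   # category A exemplars
        category_b = rng.normal(loc=1200.0, scale=80.0, size=100)  # category B exemplars

        # With equal variances and equal priors, the optimal category boundary
        # is simply the midpoint between the two distribution means.
        boundary_hz = (800.0 + 1200.0) / 2.0
        print(f"Expected category boundary near {boundary_hz:.0f} Hz")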

  10. Assembly of the auditory circuitry by a Hox genetic network in the mouse brainstem.

    Directory of Open Access Journals (Sweden)

    Maria Di Bonito

    Full Text Available Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem.

  11. Assembly of the auditory circuitry by a Hox genetic network in the mouse brainstem.

    Science.gov (United States)

    Di Bonito, Maria; Narita, Yuichi; Avallone, Bice; Sequino, Luigi; Mancuso, Marta; Andolfi, Gennaro; Franzè, Anna Maria; Puelles, Luis; Rijli, Filippo M; Studer, Michèle

    2013-01-01

    Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem.

  12. Intermodal auditory, visual, and tactile attention modulates early stages of neural processing.

    Science.gov (United States)

    Karns, Christina M; Knight, Robert T

    2009-04-01

    We used event-related potentials (ERPs) and gamma band oscillatory responses (GBRs) to examine whether intermodal attention operates early in the auditory, visual, and tactile modalities. To control for the effects of spatial attention, we spatially coregistered all stimuli and varied the attended modality across counterbalanced blocks in an intermodal selection task. In each block, participants selectively responded to either auditory, visual, or vibrotactile stimuli from the stream of intermodal events. Auditory and visual ERPs were modulated at the latencies of early cortical processing, but attention manifested later for tactile ERPs. For ERPs, auditory processing was modulated at the latency of the Na (29 msec), which indexes early cortical or thalamocortical processing and the subsequent P1 (90 msec) ERP components. Visual processing was modulated at the latency of the early phase of the C1 (62-72 msec) thought to be generated in the primary visual cortex and the subsequent P1 and N1 (176 msec). Tactile processing was modulated at the latency of the N160 (165 msec) likely generated in the secondary association cortex. Intermodal attention enhanced early sensory GBRs for all three modalities: auditory (onset 57 msec), visual (onset 47 msec), and tactile (onset 27 msec). Together, these results suggest that intermodal attention enhances neural processing relatively early in the sensory stream independent from differential effects of spatial and intramodal selective attention.

  13. Cross-modal training induces changes in spatial representations early in the auditory processing pathway.

    Science.gov (United States)

    Bruns, Patrick; Liebnau, Ronja; Röder, Brigitte

    2011-09-01

    In the ventriloquism aftereffect, brief exposure to a consistent spatial disparity between auditory and visual stimuli leads to a subsequent shift in subjective sound localization toward the positions of the visual stimuli. Such rapid adaptive changes probably play an important role in maintaining the coherence of spatial representations across the various sensory systems. In the research reported here, we used event-related potentials (ERPs) to identify the stage in the auditory processing stream that is modulated by audiovisual discrepancy training. Both before and after exposure to synchronous audiovisual stimuli that had a constant spatial disparity of 15°, participants reported the perceived location of brief auditory stimuli that were presented from central and lateral locations. In conjunction with a sound localization shift in the direction of the visual stimuli (the behavioral ventriloquism aftereffect), auditory ERPs as early as 100 ms poststimulus (N100) were systematically modulated by the disparity training. These results suggest that cross-modal learning was mediated by a relatively early stage in the auditory cortical processing stream.

  14. Auditory cortical and hippocampal-system mismatch responses to duration deviants in urethane-anesthetized rats.

    Directory of Open Access Journals (Sweden)

    Timo Ruusuvirta

    Full Text Available Any change in the invariant aspects of the auditory environment is of potential importance. The human brain preattentively or automatically detects such changes. The mismatch negativity (MMN) of event-related potentials (ERPs) reflects this initial stage of auditory change detection. The origin of MMN is held to be cortical. The hippocampus is associated with a later generated P3a of ERPs reflecting involuntary attention switches towards auditory changes that are high in magnitude. The evidence for this cortico-hippocampal dichotomy is scarce, however. To shed further light on this issue, auditory cortical and hippocampal-system (CA1, dentate gyrus, subiculum) local-field potentials were recorded in urethane-anesthetized rats. A rare tone differing in duration (deviant) was interspersed with a repeated tone (standard). Two standard-to-standard (SSI) and standard-to-deviant (SDI) intervals (200 ms vs. 500 ms) were applied in different combinations to vary the observability of responses resembling MMN (mismatch responses). Mismatch responses were observed at 51.5-89 ms with the 500-ms SSI coupled with the 200-ms SDI but not with the three remaining combinations. Most importantly, the responses appeared in both the auditory-cortical and hippocampal locations. The findings suggest that the hippocampus may play a role in the (cortical) manifestation of MMN.

  15. THE EFFECTS OF SALICYLATE ON AUDITORY EVOKED POTENTIAL AMPLITUDE FROM THE AUDITORY CORTEX AND AUDITORY BRAINSTEM

    Institute of Scientific and Technical Information of China (English)

    Brian Sawka; SUN Wei

    2014-01-01

    Tinnitus has often been studied using salicylate in animal models, as it is capable of inducing temporary hearing loss and tinnitus. Studies have recently observed enhancement of auditory evoked responses of the auditory cortex (AC) after salicylate treatment, which has also been shown to be related to tinnitus-like behavior in rats. The aim of this study was to observe whether the enhancements seen in the AC after salicylate treatment are also present at structures in the brainstem. Four male Sprague Dawley rats with AC-implanted electrodes were tested with both AC and auditory brainstem response (ABR) recordings before and after 250 mg/kg intraperitoneal injections of salicylate. The responses were recorded as the peak-to-trough amplitudes of P1-N1 (AC), ABR wave V, and ABR wave II. AC responses showed statistically significant enhancement of amplitude at 2 hours post salicylate with 90 dB stimulus tone bursts of 4, 8, 12, and 20 kHz. Wave V of the ABR responses at 90 dB showed a statistically significant reduction of amplitude 2 hours post salicylate and a mean amplitude decrease of 31% for 16 kHz. Wave II amplitudes at 2 hours post treatment were significantly reduced for 4, 12, and 20 kHz stimuli at 90 dB SPL. Our results suggest that the enhancement changes of the AC related to salicylate-induced tinnitus are generated superior to the level of the inferior colliculus and may originate in the AC.

  16. Tennis Elbow (Lateral Epicondylitis)

    Science.gov (United States)

    Tennis elbow, or lateral epicondylitis, is a painful condition of the elbow caused by overuse. Not surprisingly, playing tennis or other racquet sports can cause ...

  17. Primary Lateral Sclerosis

    Science.gov (United States)

    ... mistaken for amyotrophic lateral sclerosis (ALS) or spastic paraplegia. Most neurologists follow an affected individual's clinical course ...

  18. Relationship between Sympathetic Skin Responses and Auditory Hypersensitivity to Different Auditory Stimuli.

    Science.gov (United States)

    Kato, Fumi; Iwanaga, Ryoichiro; Chono, Mami; Fujihara, Saori; Tokunaga, Akiko; Murata, Jun; Tanaka, Koji; Nakane, Hideyuki; Tanaka, Goro

    2014-07-01

    [Purpose] Auditory hypersensitivity has been widely reported in patients with autism spectrum disorders. However, the neurological background of auditory hypersensitivity is currently not clear. The present study examined the relationship between sympathetic nervous system responses and auditory hypersensitivity induced by different types of auditory stimuli. [Methods] We exposed 20 healthy young adults to six different types of auditory stimuli. The amounts of palmar sweating resulting from the auditory stimuli were compared between groups with (hypersensitive) and without (non-hypersensitive) auditory hypersensitivity. [Results] Although no group × type of stimulus × first stimulus interaction was observed for the extent of reaction, significant type of stimulus × first stimulus interaction was noted for the extent of reaction. For an 80 dB-6,000 Hz stimulus, the trends for palmar sweating differed between the groups. For the first stimulus, the variance became larger in the hypersensitive group than in the non-hypersensitive group. [Conclusion] Subjects who regularly felt excessive reactions to auditory stimuli tended to have excessive sympathetic responses to repeated loud noises compared with subjects who did not feel excessive reactions. People with auditory hypersensitivity may be classified into several subtypes depending on their reaction patterns to auditory stimuli.

  19. Anatomical substrates of visual and auditory miniature second-language learning.

    Science.gov (United States)

    Newman-Norlund, Roger D; Frey, Scott H; Petitto, Laura-Ann; Grafton, Scott T

    2006-12-01

    Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca's area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then decreased in left PT activity from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca's area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similar to first languages even when learned after the "critical period." The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance.

  20. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    Prediction and assessment of low-frequency noise problems requires information about the auditory filter characteristics at low-frequencies. Unfortunately, data at low-frequencies is scarce and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results......-ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. Increasing steepness at low-frequencies of the middle-ear high-pass filter is thought to cause this effect. The dynamic range of the auditory filter...
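
    For orientation, extrapolations of equivalent-rectangular-bandwidth (ERB) data such as those discussed above usually start from the standard Glasberg and Moore (1990) ERB formula, which is based on measurements at higher frequencies; the sketch below simply evaluates that formula and is not taken from the record itself.

        def erb_hz(centre_frequency_hz: float) -> float:
            """Equivalent rectangular bandwidth (Hz) of the auditory filter at a
            given centre frequency, per Glasberg & Moore (1990): 24.7(4.37F + 1),
            with F the centre frequency in kHz."""
            return 24.7 * (4.37 * centre_frequency_hz / 1000.0 + 1.0)

        for f in (50, 100, 250, 1000):
            print(f"{f:>5} Hz -> ERB ~ {erb_hz(f):.1f} Hz")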

  1. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed the effects of aging on auditory-verbal memory performance.

  2. Use of auditory learning to manage listening problems in children

    OpenAIRE

    Moore, David R.; Halliday, Lorna F.; Amitay, Sygal

    2008-01-01

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers hav...

  3. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Full Text Available Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data were obtained from 44 typically developing (TD) children (mean age 10.4 ± 2.4 years) and 95 children with ASD (mean age 10.2 ± 2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  4. Evaluation of Evidence for Altered Behavior and Auditory Deficits in Fishes Due to Human-Generated Noise Sources

    Science.gov (United States)

    2006-04-01

    Rutilus rutilus). Some of the roach were exposed to cobalt, which reversibly blocks the responsiveness of lateral line receptors (Karlsen and Sand ... cartilaginous fishes, such as pelagic and benthic sharks, skates, and rays, since their auditory systems have potentially important variations in

  5. Auditory-visual spatial interaction and modularity

    Science.gov (United States)

    Radeau, M

    1994-02-01

    The results of dealing with the conditions for pairing visual and auditory data coming from spatially separate locations argue for cognitive impenetrability and computational autonomy, the pairing rules being the Gestalt principles of common fate and proximity. Other data provide evidence for pairing with several properties of modular functioning. Arguments for domain specificity are inferred from comparison with audio-visual speech. Suggestion of innate specification can be found in developmental data indicating that the grouping of visual and auditory signals is supported very early in life by the same principles that operate in adults. Support for a specific neural architecture comes from neurophysiological studies of the bimodal (auditory-visual) neurons of the cat superior colliculus. Auditory-visual pairing thus seems to present the four main properties of the Fodorian module.

  6. [Approaches to therapy of auditory agnosia].

    Science.gov (United States)

    Fechtelpeter, A; Göddenhenrich, S; Huber, W; Springer, L

    1990-01-01

    In a 41-year-old stroke patient with bitemporal brain damage, we found severe signs of auditory agnosia 6 months after onset. Recognition of environmental sounds was extremely impaired when tested in a multiple choice sound-picture matching task, whereas auditory discrimination between sounds and picture identifications by written names was almost undisturbed. In a therapy experiment, we tried to enhance sound recognition via semantic categorization and association, imitation of sound and analysis of auditory features, respectively. The stimulation of conscious auditory analysis proved to be increasingly effective over a 4-week period of therapy. We were able to show that the patient's improvement was not only a simple effect of practicing, but it was stable and carried over to nontrained items.

  7. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  8. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

    Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems. Numerous studies have investigated them, and the auditory system is affected as well. The aim of this article was to review the research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library and SID search engines with the keywords "auditory" and "omega-3" and read textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can cause harmful effects on fetal and infant growth and on the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.

  9. A critical period for auditory thalamocortical connectivity

    DEFF Research Database (Denmark)

    Rinaldi Barkat, Tania; Polley, Daniel B; Hensch, Takao K

    2011-01-01

    connectivity by in vivo recordings and day-by-day voltage-sensitive dye imaging in an acute brain slice preparation. Passive tone-rearing modified response strength and topography in mouse primary auditory cortex (A1) during a brief, 3-d window, but did not alter tonotopic maps in the thalamus. Gene...... locus of change for the tonotopic plasticity. The evolving postnatal connectivity between thalamus and cortex in the days following hearing onset may therefore determine a critical period for auditory processing....

  10. Right anterior superior temporal activation predicts auditory sentence comprehension following aphasic stroke.

    Science.gov (United States)

    Crinion, Jenny; Price, Cathy J

    2005-12-01

    Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and surprise story recognition memory tests post-scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex where the response was dissociated from that in the left posterior temporal lobe.

  11. Auditory and visual reaction time and peripheral field of vision in helmet users

    Directory of Open Access Journals (Sweden)

    Abbupillai Adhilakshmi

    2016-12-01

    Full Text Available Background: The incidence of fatal accidents is higher among two-wheeler drivers than among four-wheeler drivers. Because head injury is of serious concern for the recovery and prognosis of patients, helmets are used for safety by moped, scooter and motorcycle riders. Although helmets are designed with cushioning to prevent head injuries, there is evidence of an increased risk of neck injuries and of reduced peripheral vision and hearing in helmet users. A complete full-coverage helmet restricts the horizontal peripheral visual field by less than about 3 percent compared with a rider without a helmet, and standard, company-patented, ergonomically designed helmets affect neither peripheral vision nor auditory reaction time. Objective: This pilot study aimed to evaluate the peripheral field of vision and the auditory and visual reaction times in hypertensive, diabetic and healthy male and female subjects in order to gain better insight into the protective characteristics of helmets in health and disease. Method: This pilot study was carried out on age-matched subjects: one healthy, one hypertensive and one diabetic male, and one healthy, one hypertensive and one diabetic female. The field of vision was assessed with Lister’s perimeter, whereas auditory and visual reaction times were recorded with a response analyser. Result: No gender difference was noted in the peripheral field of vision, but mild differences were found in auditory reaction time for high frequencies and in visual reaction time for both red and green colours in the healthy controls. Lateral and downward peripheral visual fields were reduced, and auditory and visual reaction times were increased, in both the hypertensive and the diabetic subjects of both sexes. Conclusion: Impaired peripheral vision and slowed auditory and visual reaction times in hypertensive and diabetic riders may make them vulnerable to accidents. Helmet use has been proven to reduce the extent of injury in motorcyclists and

  12. Morphology and physiology of auditory and vibratory ascending interneurones in bushcrickets.

    Science.gov (United States)

    Nebeling, B

    2000-02-15

    Auditory/vibratory interneurones of the bushcricket species Decticus albifrons and Decticus verrucivorus were studied with intracellular dye injection and electrophysiology. The morphologies of five physiologically characterised auditory/vibratory interneurones are shown in the brain, subesophageal and prothoracic ganglia. Based on their physiology, these five interneurones fall into three groups: the purely auditory or sound neurones (S-neurones), the purely vibratory V-neurones, and the bimodal vibrosensitive VS-neurones. The S1-neurones respond phasically to airborne sound whereas the S4-neurones exhibit a tonic spike pattern. Their somata are located in the prothoracic ganglion and they show an ascending axon with dendrites located in the prothoracic and subesophageal ganglia and the brain. The VS3-neurone, responding to both auditory and vibratory stimuli in a tonic manner, has its axon traversing the brain, the suboesophageal ganglion and the prothoracic ganglion, although with dendrites only in the brain. The V1- and V2-neurones respond to vibratory stimulation of the fore- and midlegs with a tonic discharge pattern, and our data show that they receive inhibitory input suppressing their spontaneous activity. Their axons traverse the prothoracic and subesophageal ganglia and terminate in the brain with dendritic branching. Thus the auditory S-neurones have dendritic arborizations in all three ganglia (prothoracic, subesophageal, and brain) compared to the vibratory (V) and vibrosensitive (VS) neurones, which have dendrites almost only in the brain. The dendrites of the S-neurones are also more extensive than those of the V- and VS-neurones. V- and VS-neurones terminate more laterally in the brain. Based on an interspecific comparison of the identified auditory interneurones, the S1-neurone is found to be homologous to the TN1 of crickets and other bushcrickets, and the S4-neurone can also be called AN2. J. Exp. Zool. 286:219-230, 2000.

  13. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and their abnormal timing may underlie their disfluency.

  14. Auditory processing in fragile x syndrome.

    Science.gov (United States)

    Rotschafer, Sarah E; Razak, Khaleel A

    2014-01-01

    Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle response is seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. Given that similarly abnormal responses are present in Fmr1 KO mice suggests that cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  15. Auditory Processing in Fragile X Syndrome

    Directory of Open Access Journals (Sweden)

    Sarah E Rotschafer

    2014-02-01

    Full Text Available Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle is also seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The presence of similarly abnormal responses in Fmr1 KO mice suggests that cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  16. Auditory model inversion and its application

    Institute of Scientific and Technical Information of China (English)

    ZHAO Heming; WANG Yongqi; CHEN Xueqin

    2005-01-01

    Auditory models have been applied to several aspects of the speech signal processing field and appear to be effective in performance. This paper presents the inverse transform of each stage of one widely used auditory model. First, it is necessary to invert the correlogram and reconstruct phase information by repeated iterations in order to obtain the auditory-nerve firing rate. The next step is to obtain the negative parts of the signal via the reverse process of the HWR (Half Wave Rectification). Finally, the functions of the inner hair cell/synapse model and the Gammatone filters have to be inverted. Thus the whole auditory model inversion is achieved. An application of noisy speech enhancement based on the auditory model inversion algorithm is proposed. Many experiments show that this method is effective in reducing noise, especially when the SNR of the noisy speech is low, where it is more effective than other methods. Thus the auditory model inversion method given in this paper is applicable to the speech enhancement field.
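
    One stage that such auditory models typically share, and that the inversion described above must undo, is a gammatone filterbank simulating cochlear frequency analysis. As a minimal sketch of the forward stage only (the sampling rate, duration and bandwidth scaling below are conventional choices, not parameters reported in the paper), a single gammatone impulse response can be generated as follows.

        import numpy as np

        def gammatone_ir(fc_hz, fs_hz=16000, duration_s=0.025, order=4):
            """Impulse response of a gammatone filter centred at fc_hz.
            The bandwidth follows the ERB at the centre frequency, scaled by
            the conventional factor 1.019 for a 4th-order filter."""
            t = np.arange(int(duration_s * fs_hz)) / fs_hz
            erb = 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)
            b = 1.019 * erb
            g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc_hz * t)
            return g / np.max(np.abs(g))

        # Convolving a signal with this kernel (e.g. via np.convolve) gives one
        # channel of the forward analysis; the inversion procedure reverses this step.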

  17. Auditory dysfunction associated with solvent exposure

    Directory of Open Access Journals (Sweden)

    Fuente Adrian

    2013-01-01

    Full Text Available Abstract Background A number of studies have demonstrated that solvents may induce auditory dysfunction. However, there is still little knowledge regarding the main signs and symptoms of solvent-induced hearing loss (SIHL). The aim of this research was to investigate the association between solvent exposure and adverse effects on peripheral and central auditory functioning with a comprehensive audiological test battery. Methods Seventy-two solvent-exposed workers and 72 non-exposed workers were selected to participate in the study. The test battery comprised pure-tone audiometry (PTA), transient evoked otoacoustic emissions (TEOAE), Random Gap Detection (RGD) and the Hearing-in-Noise test (HINT). Results Solvent-exposed subjects presented with poorer mean test results than non-exposed subjects. A bivariate and multivariate linear regression model analysis was performed. One model for each auditory outcome (PTA, TEOAE, RGD and HINT) was independently constructed. For all of the models solvent exposure was significantly associated with the auditory outcome. Age also appeared significantly associated with some auditory outcomes. Conclusions This study provides further evidence of the possible adverse effects of solvents on peripheral and central auditory functioning. A discussion of these effects and the utility of selected hearing tests to assess SIHL is addressed.

  18. Long Latency Auditory Evoked Potentials during Meditation.

    Science.gov (United States)

    Telles, Shirley; Deepeshwar, Singh; Naveen, Kalkuni Visweswaraiah; Pailoor, Subramanya

    2015-10-01

    The auditory sensory pathway has been studied in meditators using midlatency and short latency auditory evoked potentials. The present study evaluated long latency auditory evoked potentials (LLAEPs) during meditation. Sixty male participants, aged between 18 and 31 years (group mean ± SD, 20.5 ± 3.8 years), were assessed in 4 mental states based on descriptions in the traditional texts. They were (a) random thinking, (b) nonmeditative focusing, (c) meditative focusing, and (d) meditation. The order of the sessions was randomly assigned. The LLAEP components studied were P1 (40-60 ms), N1 (75-115 ms), P2 (120-180 ms), and N2 (180-280 ms). For each component, the peak amplitude and peak latency were measured from the prestimulus baseline. There was a significant decrease in the peak latency of the P2 component during and after meditation, suggesting that meditation facilitates the processing of information in the auditory association cortex, whereas the number of neurons recruited was smaller in random thinking and non-meditative focused thinking, at the level of the secondary auditory cortex, auditory association cortex and anterior cingulate cortex.

  19. Auditory cortex responses to clicks and sensory modulation difficulties in children with autism spectrum disorders (ASD.

    Directory of Open Access Journals (Sweden)

    Elena V Orekhova

    Full Text Available Auditory sensory modulation difficulties are common in autism spectrum disorders (ASD) and may stem from a faulty arousal system that compromises the ability to regulate an optimal response. To study neurophysiological correlates of the sensory modulation difficulties, we recorded magnetic field responses to clicks in 14 ASD and 15 typically developing (TD) children. We further analyzed the P100m, which is the most prominent component of the auditory magnetic field response in children and may reflect preattentive arousal processes. The P100m was rightward lateralized in the TD, but not in the ASD children, who showed a tendency toward P100m reduction in the right hemisphere (RH). The atypical P100m lateralization in the ASD subjects was associated with greater severity of sensory abnormalities assessed by the Short Sensory Profile, as well as with auditory hypersensitivity during the first two years of life. The absence of right-hemispheric predominance of the P100m and a tendency for its right-hemispheric reduction in the ASD children suggest disturbance of the RH ascending reticular brainstem pathways and/or their thalamic and cortical projections, which in turn may contribute to abnormal arousal and attention. The correlation of sensory abnormalities with atypical, more leftward, P100m lateralization suggests that reduced preattentive processing in the right hemisphere and/or its shift to the left hemisphere may contribute to abnormal sensory behavior in ASD.

  20. Predicting Future Reading Problems Based on Pre-reading Auditory Measures: A Longitudinal Study of Children with a Familial Risk of Dyslexia

    Science.gov (United States)

    Law, Jeremy M.; Vandermosten, Maaike; Ghesquière, Pol; Wouters, Jan

    2017-01-01

    Purpose: This longitudinal study examines measures of temporal auditory processing in pre-reading children with a family risk of dyslexia. Specifically, it attempts to ascertain whether pre-reading auditory processing, speech perception, and phonological awareness (PA) reliably predict later literacy achievement. Additionally, this study retrospectively examines the presence of pre-reading auditory processing, speech perception, and PA impairments in children later found to be literacy impaired. Method: Forty-four pre-reading children with and without a family risk of dyslexia were assessed at three time points (kindergarten, first, and second grade). Auditory processing measures of rise time (RT) discrimination and frequency modulation (FM) along with speech perception, PA, and various literacy tasks were assessed. Results: Kindergarten RT uniquely contributed to growth in literacy in grades one and two, even after controlling for letter knowledge and PA. Highly significant concurrent and predictive correlations were observed with kindergarten RT significantly predicting first grade PA. Retrospective analysis demonstrated atypical performance in RT and PA at all three time points in children who later developed literacy impairments. Conclusions: Although significant, kindergarten auditory processing contributions to later literacy growth lack the power to be considered as a single-cause predictor; thus results support temporal processing deficits' contribution within a multiple deficit model of dyslexia. PMID:28223953
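
    The rise time (RT) discrimination measure described above varies how gradually a sound's onset envelope reaches full amplitude. As an illustrative sketch only (the carrier frequency, duration and ramp shape are assumptions, not the study's actual stimulus parameters), a tone with a controllable linear rise time can be generated like this.

        import numpy as np

        def tone_with_rise_time(rise_ms, freq_hz=500.0, dur_ms=800, fs_hz=44100):
            """Pure tone whose onset envelope rises linearly over rise_ms;
            RT-discrimination tasks contrast stimuli differing only in rise_ms."""
            n = int(dur_ms / 1000 * fs_hz)
            t = np.arange(n) / fs_hz
            tone = np.sin(2 * np.pi * freq_hz * t)
            envelope = np.ones(n)
            n_rise = max(1, int(rise_ms / 1000 * fs_hz))
            envelope[:n_rise] = np.linspace(0.0, 1.0, n_rise)
            return tone * envelope

        standard = tone_with_rise_time(15)    # sharp onset
        deviant = tone_with_rise_time(300)    # slow onset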

  1. Auditory function in vestibular migraine

    Directory of Open Access Journals (Sweden)

    John Mathew

    2016-01-01

    Full Text Available Introduction: Vestibular migraine (VM) is a vestibular syndrome seen in patients with migraine and is characterized by short spells of spontaneous or positional vertigo lasting from a few seconds to weeks. Migraine and VM are considered to be the result of chemical abnormalities in the serotonin pathway. Neuhauser's diagnostic criteria for vestibular migraine are widely accepted. Research on VM is still limited and few studies have been published on this topic. Materials and Methods: This study has two parts. In the first part, we did a retrospective chart review of eighty consecutive patients who were diagnosed with vestibular migraine and determined the frequency of auditory dysfunction in these patients. The second part was a prospective case-control study in which we compared the audiological parameters of thirty patients diagnosed with VM with those of thirty normal controls to look for any significant differences. Results: The frequency of vestibular migraine in our population is 22%. The frequency of hearing loss in VM is 33%. Conclusion: There is a significant difference between cases and controls with regard to the presence of distortion product otoacoustic emissions in both ears. This finding suggests that the hearing loss in VM is cochlear in origin.

  2. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Abstract Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.
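
    The "repeating frozen noise" stimuli described above can be thought of as one fixed noise segment, of length equal to the inverse of the repetition rate, tiled for the stimulus duration. The sketch below is a generic construction along those lines (the sampling rate and duration are placeholder values, not the stimulus parameters of the study).

        import numpy as np

        def repeating_frozen_noise(rate_hz, duration_s=1.0, fs_hz=44100, seed=0):
            """Tile one frozen Gaussian-noise segment of length 1/rate_hz to the
            requested duration; rate_hz is the segment repetition rate."""
            rng = np.random.default_rng(seed)
            segment = rng.standard_normal(int(round(fs_hz / rate_hz)))
            n_total = int(duration_s * fs_hz)
            reps = int(np.ceil(n_total / segment.size))
            return np.tile(segment, reps)[:n_total]

        # A 5 Hz rate repeats a 200-ms frozen segment; a 500 Hz rate repeats a
        # 2-ms segment; non-repeating white noise serves as the control.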

  3. Location of the auditory cortex in the Mongolian gerbil as determined by click stimulation.

    Science.gov (United States)

    Gillette, R G

    1978-07-01

    An investigation was made of the auditory projection area in the cerebral cortex of the Mongolian gerbil (Meriones unguiculatus) using clicks at a standard intensity to map the cerebral hemisphere by the evoked potential method. The major results can be summarized as follows: (1) As is typical for other mammals, click-evoked responses characterizing the gerbil auditory area were initially surface-positive potentials (amplitudes ranging between 0.1 and 1.7 mV) with peak latencies ranging between 13 and 32 msec. (2) Only one click-responsive field was found in the temporal area. However, the data suggest that this area may actually represent two separate projections to the cortex, since a small subarea characterized by longer response latencies was located posteriorly and laterally within the click field in the majority of animals investigated. (3) The size (5 mm long by 4 mm wide) and location (temporal neocortex below the middle cerebral artery) of the gerbil auditory cortex are consistent with mapping results obtained in other rodent species. (4) The validity of the surface maps was confirmed in four cases by demonstrating that the evoked response reversed polarity between the cortical surface and underlying white matter. The reversal was demonstrated by recording with a penetrating microelectrode at representative points "bordering" the auditory projection area.

  4. Auditory presentation at test does not diminish the production effect in recognition.

    Science.gov (United States)

    Forrin, Noah D; MacLeod, Colin M

    2016-06-01

    Three experiments investigated whether auditory information at test would undermine the relational distinctiveness of vocal production at study, diminishing the production effect. In Experiment 1, with visual presentation during study, the production effect was equivalently large regardless of whether participants read each test word out loud prior to making their recognition decision. In Experiment 2, incorporating auditory presentation during study, the production effect was unaltered by whether recognition test words were presented visually or auditorily. In Experiment 3, the authors manipulated whether presentation was visual or auditory both at study and at test. Once again, presentation modality at test did not affect the size of the production effect, although the effect was significantly smaller when words were presented auditorily at study. These experiments demonstrate that production at the time of study stands out as distinct above and beyond auditory information. Moreover, this distinct aloud information need not "stand out" against a background of silent unstudied words on a recognition test. Consistent with the distinctiveness account, encoding via production enhances later recognition consistently, regardless of study or test modality. (PsycINFO Database Record

  5. Representation of particle motion in the auditory midbrain of a developing anuran.

    Science.gov (United States)

    Simmons, Andrea Megela

    2015-07-01

    In bullfrog tadpoles, a "deaf period" of lessened responsiveness to the pressure component of sounds, evident during the end of the late larval period, has been identified in the auditory midbrain. But coding of underwater particle motion in the vestibular medulla remains stable over all of larval development, with no evidence of a "deaf period." Neural coding of particle motion in the auditory midbrain was assessed to determine if a "deaf period" for this mode of stimulation exists in this brain area in spite of its absence from the vestibular medulla. Recording sites throughout the developing laminar and medial principal nuclei show relatively stable thresholds to z-axis particle motion, up until the "deaf period." Thresholds then begin to increase from this point up through the rest of metamorphic climax, and significantly fewer responsive sites can be located. The representation of particle motion in the auditory midbrain is less robust during later compared to earlier larval stages, overlapping with but also extending beyond the restricted "deaf period" for pressure stimulation. The decreased functional representation of particle motion in the auditory midbrain throughout metamorphic climax may reflect ongoing neural reorganization required to mediate the transition from underwater to amphibious life.

  6. Development and modulation of intrinsic membrane properties control the temporal precision of auditory brain stem neurons.

    Science.gov (United States)

    Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix

    2015-01-15

    Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing.
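
    The interaction of input resistance and capacitance described above can be summarized by the passive membrane time constant tau = R_in * C_m, which sets how quickly the membrane voltage follows its inputs. The sketch below uses made-up illustrative values (not measurements from the study) to show how developmental decreases in both parameters shorten the integration window.

        def membrane_tau_ms(input_resistance_mohm: float, capacitance_pf: float) -> float:
            """Passive membrane time constant tau = R_in * C_m.
            MOhm * pF gives microseconds, so divide by 1000 to return ms."""
            return input_resistance_mohm * capacitance_pf / 1000.0

        # Hypothetical values only: an immature, high-resistance neuron versus a
        # mature, low-resistance one.
        tau_immature = membrane_tau_ms(300.0, 60.0)   # 18 ms
        tau_mature = membrane_tau_ms(50.0, 25.0)      # 1.25 ms
        # A shorter time constant means briefer synaptic integration and, together
        # with faster spike kinetics, more temporally precise firing.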

  7. Analogues of simple and complex cells in rhesus monkey auditory cortex.

    Science.gov (United States)

    Tian, Biao; Kuśmierek, Paweł; Rauschecker, Josef P

    2013-05-01

    Receptive fields (RFs) of neurons in primary visual cortex have traditionally been subdivided into two major classes: "simple" and "complex" cells. Simple cells were originally defined by the existence of segregated subregions within their RF that respond to either the on- or offset of a light bar and by spatial summation within each of these regions, whereas complex cells had ON and OFF regions that were coextensive in space [Hubel DH, et al. (1962) J Physiol 160:106-154]. Although other definitions based on the linearity of response modulation have been proposed later [Movshon JA, et al. (1978) J Physiol 283:53-77; Skottun BC, et al. (1991) Vision Res 31(7-8):1079-1086], the segregation of ON and OFF subregions has remained an important criterion for the distinction between simple and complex cells. Here we report that response profiles of neurons in primary auditory cortex of monkeys show a similar distinction: one group of cells has segregated ON and OFF subregions in frequency space; and another group shows ON and OFF responses within largely overlapping response profiles. This observation is intriguing for two reasons: (i) spectrotemporal dissociation in the auditory domain provides a basic neural mechanism for the segregation of sounds, a fundamental prerequisite for auditory figure-ground discrimination; and (ii) the existence of similar types of RF organization in visual and auditory cortex would support the existence of a common canonical processing algorithm within cortical columns.

  8. Current status of auditory aging and anti-aging research.

    Science.gov (United States)

    Ruan, Qingwei; Ma, Cheng; Zhang, Ruxin; Yu, Zhuowei

    2014-01-01

    The development of presbycusis, or age-related hearing loss, is determined by a combination of genetic and environmental factors. The auditory periphery exhibits a progressive bilateral, symmetrical reduction of auditory sensitivity to sound from high to low frequencies. The central auditory nervous system shows symptoms of decline in age-related cognitive abilities, including difficulties in speech discrimination and reduced central auditory processing, ultimately resulting in auditory perceptual abnormalities. The pathophysiological mechanisms of presbycusis include excitotoxicity, oxidative stress, inflammation, aging and oxidative stress-induced DNA damage that results in apoptosis in the auditory pathway. However, the originating signals that trigger these mechanisms remain unclear. For instance, it is still unknown whether insulin is involved in auditory aging. Auditory aging has preclinical lesions, which manifest as asymptomatic loss of periphery auditory nerves and changes in the plasticity of the central auditory nervous system. Currently, the diagnosis of preclinical, reversible lesions depends on the detection of auditory impairment by functional imaging, and the identification of physiological and molecular biological markers. However, despite recent improvements in the application of these markers, they remain under-utilized in clinical practice. The application of antisenescent approaches to the prevention of auditory aging has produced inconsistent results. Future research will focus on the identification of markers for the diagnosis of preclinical auditory aging and the development of effective interventions.

  9. Experience and information loss in auditory and visual memory.

    Science.gov (United States)

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  10. Glycinergic Pathways of the Central Auditory System and Adjacent Reticular Formation of the Rat.

    Science.gov (United States)

    Hunter, Chyren

    The development of techniques to visualize and identify specific transmitters of neuronal circuits has stimulated work on the characterization of pathways in the rat central nervous system that utilize the inhibitory amino acid glycine as their neurotransmitter. Glycine is a major inhibitory transmitter in the spinal cord and brainstem of vertebrates where it satisfies the major criteria for neurotransmitter action. Some of these characteristics are: uneven distribution in brain, high affinity reuptake mechanisms, inhibitory neurophysiological actions on certain neuronal populations, uneven receptor distribution and the specific antagonism of its actions by the convulsant alkaloid strychnine. Behaviorally, antagonism of glycinergic neurotransmission in the medullary reticular formation is linked to the development of myoclonus and seizures which may be initiated by auditory as well as other stimuli. In the present study, decreases in the concentration of glycine as well as the density of glycine receptors in the medulla with aging were found and may be responsible for the lowered threshold for strychnine seizures observed in older rats. Neuroanatomical pathways in the central auditory system and medullary and pontine reticular formation (RF) were investigated using retrograde transport of tritiated glycine to identify glycinergic pathways; immunohistochemical techniques were used to corroborate the location of glycine neurons. Within the central auditory system, retrograde transport studies using tritiated glycine demonstrated an ipsilateral glycinergic pathway linking nuclei of the ascending auditory system. This pathway has its cell bodies in the medial nucleus of the trapezoid body (MNTB) and projects to the ventrocaudal division of the ventral nucleus of the lateral lemniscus (VLL). Collaterals of this glycinergic projection terminate in the ipsilateral lateral superior olive (LSO). Other glycinergic pathways found were afferent to the VLL and have their origin

  11. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    Science.gov (United States)

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages.

  12. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    Directory of Open Access Journals (Sweden)

    Christina M. Karns

    2015-06-01

    Full Text Available Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages.

  13. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Full Text Available Background and Aim: Blockade of adenosine receptors in the central nervous system by caffeine can increase the levels of neurotransmitters such as glutamate. Because adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may alter conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2, and 3 mg/kg BW caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared with the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after 2 and 3 mg/kg BW caffeine consumption. Wave I latency decreased significantly after 3 mg/kg BW caffeine consumption (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blockade brings about changes in conduction in the central auditory pathway.
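
    The statistics named in the record (Friedman across the three doses, Wilcoxon for pairwise contrasts) are standard non-parametric repeated-measures tests. As a hedged illustration only, the snippet below runs them in Python with scipy.stats on invented wave-V latencies; the numbers are placeholders, not the study's data.

        from scipy.stats import friedmanchisquare, wilcoxon

        # Hypothetical wave-V latencies (ms) for the same 8 subjects under the
        # three caffeine doses; invented numbers, NOT the study's measurements.
        lat_0mg = [5.62, 5.71, 5.58, 5.66, 5.74, 5.69, 5.60, 5.65]
        lat_2mg = [5.55, 5.63, 5.52, 5.60, 5.66, 5.61, 5.54, 5.58]
        lat_3mg = [5.50, 5.60, 5.49, 5.57, 5.63, 5.58, 5.51, 5.55]

        # Omnibus test across the three repeated-measures conditions.
        stat, p = friedmanchisquare(lat_0mg, lat_2mg, lat_3mg)
        print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4f}")

        # Pairwise follow-up: did latency shorten after 3 mg/kg caffeine?
        stat, p = wilcoxon(lat_0mg, lat_3mg)
        print(f"Wilcoxon 0 vs 3 mg/kg: W = {stat:.1f}, p = {p:.4f}")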

  14. Facilitated auditory detection for speech sounds.

    Science.gov (United States)

    Signoret, Carine; Gaudrain, Etienne; Tillmann, Barbara; Grimault, Nicolas; Perrin, Fabien

    2011-01-01

    If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo-words, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub-threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo-words) were better detected than non-phonological stimuli (complex sounds), presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo-words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  15. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

    Full Text Available If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo-words, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub-threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo-words) were better detected than non-phonological stimuli (complex sounds), presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo-words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  16. Absence of auditory 'global interference' in autism.

    Science.gov (United States)

    Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D

    2003-12-01

    There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.

  17. Amyotrophic Lateral Sclerosis (ALS)

    Science.gov (United States)


  18. Amyotrophic lateral sclerosis (ALS)

    Science.gov (United States)

    Lou Gehrig disease; ALS; Upper and lower motor neuron disease; Motor neuron disease ...

  19. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood (

  20. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent
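
    The temporal-coherence idea summarized above can be caricatured in a few lines: correlate the amplitude envelopes of two frequency channels and treat low correlation as evidence for separate streams. The sketch below is an illustrative toy (not the thesis's model), using idealized rectangular tone envelopes.

        import numpy as np

        def envelope(times, onsets, dur=0.05):
            """Idealized envelope of one frequency channel: 1 while a tone is on."""
            env = np.zeros_like(times)
            for onset in onsets:
                env[(times >= onset) & (times < onset + dur)] = 1.0
            return env

        t = np.arange(0.0, 2.0, 1.0 / 1000)

        # Alternating A-B-A-B tones: the two channels never overlap in time,
        # i.e. they are temporally incoherent -> predicted to split into streams.
        chan_a = envelope(t, np.arange(0.0, 2.0, 0.25))
        chan_b = envelope(t, np.arange(0.125, 2.0, 0.25))

        # Synchronous A+B tones: both channels rise and fall together -> one stream.
        chan_b_sync = envelope(t, np.arange(0.0, 2.0, 0.25))

        def coherence(x, y):
            return float(np.corrcoef(x, y)[0, 1])

        print("alternating tones :", round(coherence(chan_a, chan_b), 2))
        print("synchronous tones :", round(coherence(chan_a, chan_b_sync), 2))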

  1. Auditory imagery and the poor-pitch singer.

    Science.gov (United States)

    Pfordresher, Peter Q; Halpern, Andrea R

    2013-08-01

    The vocal imitation of pitch by singing requires one to plan laryngeal movements on the basis of anticipated target pitch events. This process may rely on auditory imagery, which has been shown to activate motor planning areas. As such, we hypothesized that poor-pitch singing, although not typically associated with deficient pitch perception, may be associated with deficient auditory imagery. Participants vocally imitated simple pitch sequences by singing, discriminated pitch pairs on the basis of pitch height, and completed an auditory imagery self-report questionnaire (the Bucknell Auditory Imagery Scale). The percentage of trials participants sung in tune correlated significantly with self-reports of vividness for auditory imagery, although not with the ability to control auditory imagery. Pitch discrimination was not predicted by auditory imagery scores. The results thus support a link between auditory imagery and vocal imitation.

  2. Intradermal melanocytic nevus of the external auditory canal.

    Science.gov (United States)

    Alves, Renato V; Brandão, Fabiano H; Aquino, José E P; Carvalho, Maria R M S; Giancoli, Suzana M; Younes, Eduado A P

    2005-01-01

    Intradermal nevi are common benign pigmented skin tumors. Their occurrence within the external auditory canal is uncommon. The clinical and pathologic features of an intradermal nevus arising within the external auditory canal are presented, and the literature reviewed.

  3. Conditions of auditory health at work: inquiry of the auditory effect in workers exposed to occupational noise

    Directory of Open Access Journals (Sweden)

    Lopes, Andréa Cintra

    2009-03-01

    Full Text Available Introduction: Individuals exposed to noise may develop a very common pathology, occupational noise-induced hearing loss. Objective: To investigate, by means of a cross-sectional study, the prevalence of occupational hearing loss in workers exposed to sound pressure levels above 85 dB SPL. Method: 400 records of workers exposed to sound pressure levels above 85 dB SPL, employed in companies from different industries, were reviewed. Results: In this sample, statistically significant differences were observed between low- and high-frequency thresholds, and work duration contributed to the worsening of high-frequency thresholds bilaterally. Regarding laterality, no significant differences were found between the ears, and there was no correlation between tinnitus and hearing loss. Conclusion: Intensive promotion of auditory health and/or prevention of hearing loss must be emphasized, especially for workers exposed to high occupational noise levels, together with appropriate individual hearing protection equipment.

  4. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

    Full Text Available Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state endings even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: Disruption decreased markedly over the course of the experiments indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

  5. ABR and auditory P300 findings in children with ADHD

    OpenAIRE

    Schochat Eliane; Scheuer Claudia Ines; Andrade Ênio Roberto de

    2002-01-01

    Auditory processing disorders (APD), also referred to as central auditory processing disorders (CAPD), and attention deficit hyperactivity disorders (ADHD) have become popular diagnostic entities for school-age children. A high incidence of ADHD comorbid with communication disorders and auditory processing disorder has been demonstrated. The aim of this study was to investigate ABR and P300 auditory evoked potentials in children with ADHD, in a double-blind study. Twenty-one children, ages bet...

  6. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning), as they practiced novel melodies, and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  7. Are auditory percepts determined by experience?

    Science.gov (United States)

    Monson, Brian B; Han, Shui'Er; Purves, Dale

    2013-01-01

    Audition--what listeners hear--is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.
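
    The core proposal, that a perceptual quality tracks how often a stimulus pattern occurs in natural sounds, amounts to a cumulative-frequency (percentile) transform. The snippet below is only a schematic of that ranking idea; the "corpus" is random data standing in for measured speech statistics, and none of the values come from the study.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in "corpus" of frame-level intensities (dB-like values); a real
        # analysis would use amplitude statistics measured from recorded speech.
        corpus_db = rng.normal(loc=60.0, scale=12.0, size=100_000)

        def empirical_rank(x, corpus):
            """Predicted perceptual magnitude: fraction of corpus values below x."""
            return float(np.mean(corpus < x))

        for stimulus_db in (40, 55, 70, 85):
            print(f"{stimulus_db} dB -> predicted magnitude "
                  f"{empirical_rank(stimulus_db, corpus_db):.2f}")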

  8. Are auditory percepts determined by experience?

    Directory of Open Access Journals (Sweden)

    Brian B Monson

    Full Text Available Audition--what listeners hear--is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.

  9. Phonetic categorization in auditory word perception.

    Science.gov (United States)

    Ganong, W F

    1980-02-01

    To investigate the interaction in speech perception of auditory information and lexical knowledge (in particular, knowledge of which phonetic sequences are words), acoustic continua varying in voice onset time were constructed so that for each acoustic continuum, one of the two possible phonetic categorizations made a word and the other did not. For example, one continuum ranged between the word dash and the nonword tash; another used the nonword dask and the word task. In two experiments, subjects showed a significant lexical effect--that is, a tendency to make phonetic categorizations that make words. This lexical effect was greater at the phoneme boundary (where auditory information is ambiguous) than at the ends of the continua. Hence the lexical effect must arise at a stage of processing sensitive to both lexical knowledge and auditory information.
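
    The lexical effect described here can be visualized by fitting logistic psychometric functions to categorization data from the two continua and comparing the fitted category boundaries; a word-consistent bias shows up as a boundary shift. The data and starting values in the sketch below are invented for illustration and are not Ganong's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def p_d_response(vot, boundary, slope):
            """Probability of a /d/ response as a function of voice onset time (ms)."""
            return 1.0 / (1.0 + np.exp(slope * (vot - boundary)))

        vot_ms = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)

        # Invented response proportions (not Ganong's data): the continuum where
        # /d/ makes a word (dash-tash) draws more /d/ answers in the ambiguous
        # middle than the continuum where /t/ makes the word (dask-task).
        dash_tash = np.array([.98, .97, .95, .90, .75, .45, .20, .08, .03])
        dask_task = np.array([.97, .95, .90, .78, .55, .25, .10, .05, .02])

        for label, props in (("dash-tash", dash_tash), ("dask-task", dask_task)):
            (boundary, slope), _ = curve_fit(p_d_response, vot_ms, props, p0=[20.0, 0.5])
            print(f"{label}: fitted /d/-/t/ boundary at {boundary:.1f} ms VOT")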

  10. [Functional neuroimaging of auditory hallucinations in schizophrenia].

    Science.gov (United States)

    Font, M; Parellada, E; Fernández-Egea, E; Bernardo, M; Lomeña, F

    2003-01-01

    The neurobiological bases underlying the generation of auditory hallucinations, a distressing and paradigmatic symptom of schizophrenia, are still unknown in spite of in-depth phenomenological descriptions. This work aims to make a critical review of the latest published literature in recent years, focusing on functional neuroimaging studies (PET, SPECT, fMRI) of auditory hallucinations. Thus, the studies are classified according to whether they are sensory activation, trait and state. The two main hypotheses proposed to explain the phenomenon, external speech vs. subvocal or inner speech, are also explained. Finally, the latest unitary theory as well as the limitations the studies published are commented on. The need to continue investigating in this field, that is still underdeveloped, is posed in order to understand better the etiopathogenesis of auditory hallucinations in schizophrenia.

  11. The mitochondrial connection in auditory neuropathy.

    Science.gov (United States)

    Cacace, Anthony T; Pinheiro, Joaquim M B

    2011-01-01

    'Auditory neuropathy' (AN), the term used to codify a primary degeneration of the auditory nerve, can be linked directly or indirectly to mitochondrial dysfunction. These observations are based on the expression of AN in known mitochondrial-based neurological diseases (Friedreich's ataxia, Mohr-Tranebjærg syndrome), in conditions where defects in axonal transport, protein trafficking, and fusion processes perturb and/or disrupt mitochondrial dynamics (Charcot-Marie-Tooth disease, autosomal dominant optic atrophy), in a common neonatal condition known to be toxic to mitochondria (hyperbilirubinemia), and where respiratory chain deficiencies produce reductions in oxidative phosphorylation that adversely affect peripheral auditory mechanisms. This body of evidence is solidified by data derived from temporal bone and genetic studies, biochemical, molecular biologic, behavioral, electroacoustic, and electrophysiological investigations.

  12. The auditory hallucination: a phenomenological survey.

    Science.gov (United States)

    Nayani, T H; David, A S

    1996-01-01

    A comprehensive semi-structured questionnaire was administered to 100 psychotic patients who had experienced auditory hallucinations. The aim was to extend the phenomenology of the hallucination into areas of both form and content and also to guide future theoretical development. All subjects heard 'voices' talking to or about them. The location of the voice, its characteristics and the nature of address were described. Precipitants and alleviating factors plus the effect of the hallucinations on the sufferer were identified. Other hallucinatory experiences, thought insertion and insight were examined for their inter-relationships. A pattern emerged of increasing complexity of the auditory-verbal hallucination over time by a process of accretion, with the addition of more voices and extended dialogues, and more intimacy between subject and voice. Such evolution seemed to relate to the lessening of distress and improved coping. These findings should inform both neurological and cognitive accounts of the pathogenesis of auditory hallucinations in psychotic disorders.

  13. Auditory temporal processes in the elderly

    Directory of Open Access Journals (Sweden)

    E. Ben-Artzi

    2011-03-01

    Full Text Available Several studies have reported age-related decline in auditory temporal resolution and in working memory. However, earlier studies did not provide evidence as to whether these declines reflect overall changes in the same mechanisms, or reflect age-related changes in two independent mechanisms. In the current study we examined whether the age-related decline in auditory temporal resolution and in working memory would remain significant even after controlling for their shared variance. Eighty-two participants, aged 21-82 performed the dichotic temporal order judgment task and the backward digit span task. The findings indicate that age-related decline in auditory temporal resolution and in working memory are two independent processes.

  14. Do dyslexics have auditory input processing difficulties?

    DEFF Research Database (Denmark)

    Poulsen, Mads

    2011-01-01

    Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether a group of dyslexics had word-level receptive difficulties using an auditory lexical decision task with long words and nonsense words. The dyslexics were slower and less accurate than chronological age controls in an auditory lexical decision task, with disproportionately low performance on nonsense words...

  15. The many facets of auditory display

    Science.gov (United States)

    Blattner, Meera M.

    1995-01-01

    In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences include: music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.

  16. Transient auditory hallucinations in an adolescent.

    Science.gov (United States)

    Skokauskas, Norbert; Pillay, Devina; Moran, Tom; Kahn, David A

    2010-05-01

    In adolescents, hallucinations can be a transient illness or can be associated with non-psychotic psychopathology, psychosocial adversity, or a physical illness. We present the case of a 15-year-old secondary-school student who presented with a 1-month history of first onset auditory hallucinations, which had been increasing in frequency and severity, and mild paranoid ideation. Over a 10-week period, there was a gradual diminution, followed by a complete resolution, of symptoms. We discuss issues regarding the diagnosis and prognosis of auditory hallucinations in adolescents.

  17. Differential Modification of Cortical and Thalamic Projections to Cat Primary Auditory Cortex Following Early- and Late-Onset Deafness.

    Science.gov (United States)

    Chabot, Nicole; Butler, Blake E; Lomber, Stephen G

    2015-10-15

    Following sensory deprivation, primary somatosensory and visual cortices undergo crossmodal plasticity, which subserves the remaining modalities. However, controversy remains regarding the neuroplastic potential of primary auditory cortex (A1). To examine this, we identified cortical and thalamic projections to A1 in hearing cats and those with early- and late-onset deafness. Following early deafness, inputs from second auditory cortex (A2) are amplified, whereas the number originating in the dorsal zone (DZ) decreases. In addition, inputs from the dorsal medial geniculate nucleus (dMGN) increase, whereas those from the ventral division (vMGN) are reduced. In late-deaf cats, projections from the anterior auditory field (AAF) are amplified, whereas those from the DZ decrease. Additionally, in a subset of early- and late-deaf cats, area 17 and the lateral posterior nucleus (LP) of the visual thalamus project concurrently to A1. These results demonstrate that patterns of projections to A1 are modified following deafness, with statistically significant changes occurring within the auditory thalamus and some cortical areas. Moreover, we provide anatomical evidence for small-scale crossmodal changes in projections to A1 that differ between early- and late-onset deaf animals, suggesting that potential crossmodal activation of primary auditory cortex differs depending on the age of deafness onset.

  18. Synchrony of auditory brain responses predicts behavioral ability to keep still in children with autism spectrum disorder

    Directory of Open Access Journals (Sweden)

    Yuko Yoshimura

    2016-01-01

    Full Text Available The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on a previous report of an asynchrony of P1m in children with ADHD, to clarify whether the P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD. In addition to synchronization, we investigated the right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observational Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degrees of hemispheric synchronization and the ability to keep still during 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral brain auditory processing system is associated with ADHD-like symptoms in children with ASD.

  19. Amplified somatosensory and visual cortical projections to a core auditory area, the anterior auditory field, following early- and late-onset deafness.

    Science.gov (United States)

    Wong, Carmen; Chabot, Nicole; Kok, Melanie A; Lomber, Stephen G

    2015-09-01

    Cross-modal reorganization following the loss of input from a sensory modality can recruit sensory-deprived cortical areas to process information from the remaining senses. Specifically, in early-deaf cats, the anterior auditory field (AAF) is unresponsive to auditory stimuli but can be activated by somatosensory and visual stimuli. Similarly, AAF neurons respond to tactile input in adult-deafened animals. To examine anatomical changes that may underlie this functional adaptation following early or late deafness, afferent projections to AAF were examined in hearing cats, and cats with early- or adult-onset deafness. Unilateral deposits of biotinylated dextran amine were made in AAF to retrogradely label cortical and thalamic afferents to AAF. In early-deaf cats, ipsilateral neuronal labeling in visual and somatosensory cortices increased by 329% and 101%, respectively. The largest increases arose from the anterior ectosylvian visual area and the anterolateral lateral suprasylvian visual area, as well as somatosensory areas S2 and S4. Consequently, labeling in auditory areas was reduced by 36%. The age of deafness onset appeared to influence afferent connectivity, with less marked differences observed in late-deaf cats. Profound changes to visual and somatosensory afferent connectivity following deafness may reflect corticocortical rewiring affording acoustically deprived AAF with cross-modal functionality.

  20. Laterally loaded masonry

    DEFF Research Database (Denmark)

    Raun Gottfredsen, F.

    In this thesis results from experiments on mortar joints and masonry as well as methods of calculation of strength and deformation of laterally loaded masonry are presented. The strength and deformation capacity of mortar joints have been determined from experiments involving a constant compressive stress and increasing shear. The results show a transition to pure friction as the cohesion is gradually destroyed. An interface model of a mortar joint that can take into account this aspect has been developed. Laterally loaded masonry panels have also been tested and it is found to be characteristic...

  1. Lateral Thinking of Prospective Teachers

    Science.gov (United States)

    Lawrence, A. S. Arul; Xavier, S. Amaladoss

    2013-01-01

    Edward de Bono, who invented the term "lateral thinking" in 1967, is the pioneer of lateral thinking. Lateral thinking is concerned with the generation of new ideas. Liberation from old ideas and the stimulation of new ones are twin aspects of lateral thinking. Lateral thinking is a creative skill from which all people can benefit…

  2. Somatostatin and leu-enkephalin in the rat auditory brainstem during fetal and postnatal development.

    Science.gov (United States)

    Kungel, M; Friauf, E

    1995-05-01

    A transient expression of the neuropeptide somatostatin has been described in several brain areas during early ontogeny and several opioid peptides, such as leu-enkephalin, have also been found in the brain at this stage in development. It is therefore believed that somatostatin and leu-enkephalin may play a role in neural maturation. The aim of the present study was to describe the spatiotemporal pattern of somatostatin and leu-enkephalin immunoreactivity in the auditory brainstem nuclei of the developing rat and to correlate it with other developmental events. In order to achieve this goal, we applied peroxidase-antiperoxidase immunocytochemistry to rat brains between embryonic day (E) 17 and adulthood. Somatostatin immunoreactivity (SIR) was found in all nuclei of the auditory brainstem, yet it was temporally restricted in most nuclei. SIR appeared prenatally and reached maximum levels around postnatal day (P) 7, when great numbers of immunoreactive neurons were present in the ventral cochlear nucleus (VCN) and in the lateral lemniscus. At that time relatively low numbers of cells were labeled in the dorsal cochlear nucleus, the lateral superior olive (LSO), and the inferior colliculus (IC). During the same period, when somata in the VCN were somatostatin-immunoreactive (SIR), a dense network of labeled fibers was also present in the LSO, the medial superior olive (MSO), and the medial nucleus of the trapezoid body (MNTB). As these nuclei receive direct input from VCN neurons, and as the distribution and morphology of the somatostatinergic fibers in the superior olivary complex (SOC) was like that of axons from VCN neurons, these findings suggest a transient somatostatinergic connection within the auditory system. Aside from the LSO, MSO, and MNTB, labeled fibers were found to a smaller extent in all other auditory brainstem nuclei. After P7, the SIR decreased and only a few immunoreactive elements were found in the adult auditory brainstem nuclei, indicating

  3. Pre-Training Reversible Inactivation of the Basal Amygdala (BA) Disrupts Contextual, but Not Auditory, Fear Conditioning in Rats.

    Directory of Open Access Journals (Sweden)

    Elisa Mari Akagi Jordão

    Full Text Available The basolateral amygdala complex (BLA), including the lateral (LA), basal (BA), and accessory basal (AB) nuclei, is involved in acquisition of contextual and auditory fear conditioning. The BA is one of the main targets for hippocampal information, a brain structure critical for contextual learning, which integrates several discrete stimuli into a single configural representation. Congruent with the hodology, selective neurotoxic damage to the BA results in impairments in contextual, but not auditory, fear conditioning, similarly to the behavioral impairments found after hippocampal damage. This study evaluated the effects of muscimol-induced reversible inactivation of the BA during a simultaneous contextual and auditory fear conditioning training on later fear responses to both the context and the tone, tested separately, without muscimol administration. As compared to control rats micro-infused with vehicle, subjects micro-infused with muscimol before training exhibited, during testing without muscimol, significant reduction of freezing responses to the conditioned context, but not to the conditioned tone. Therefore, reversible inactivation of the BA during training impaired contextual, but not auditory fear conditioning, thus confirming and extending similar behavioral observations following selective neurotoxic damage to the BA and, in addition, revealing that this effect is not related to the lack of a functional BA during testing.

  4. Neurodynamics, tonality, and the auditory brainstem response.

    Science.gov (United States)

    Large, Edward W; Almonte, Felix V

    2012-04-01

    Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition.
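
    The nonlinear-resonance account can be previewed with a single Hopf-style oscillator driven by a pure tone: an oscillator tuned near the stimulus frequency settles at a much larger amplitude than detuned ones. The sketch below is a drastic simplification with invented parameters, not the published neurodynamic model or its ABR derivation.

        import numpy as np

        def driven_amplitude(f_osc, f_stim, alpha=-1.0, beta=-1.0, gain=0.5,
                             dur=2.0, fs=8000.0):
            """Steady-state |z| of a Hopf-style oscillator
            z' = z*(alpha + i*2*pi*f_osc + beta*|z|^2) + gain*exp(i*2*pi*f_stim*t)."""
            dt = 1.0 / fs
            z = 0.0 + 0.0j
            amps = []
            for k in range(int(dur * fs)):
                tk = k * dt
                # semi-exact update: rotate/damp z, then add the forcing sample
                z = z * np.exp(dt * (alpha + 2j * np.pi * f_osc + beta * abs(z) ** 2))
                z += dt * gain * np.exp(2j * np.pi * f_stim * tk)
                amps.append(abs(z))
            return float(np.mean(amps[len(amps) // 2:]))   # average the last half

        # Oscillators tuned near the 440 Hz stimulus resonate far more strongly
        # than detuned ones; a bank of such units is the model's basic ingredient.
        for f_osc in (220.0, 330.0, 440.0, 445.0, 660.0):
            print(f"oscillator at {f_osc:5.1f} Hz -> amplitude "
                  f"{driven_amplitude(f_osc, f_stim=440.0):.4f}")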

  5. Reading and Auditory-Visual Equivalences

    Science.gov (United States)

    Sidman, Murray

    1971-01-01

    A retarded boy, unable to read orally or with comprehension, was taught to match spoken to printed words and was then capable of reading comprehension (matching printed words to pictures) and oral reading (naming printed words aloud), demonstrating that certain learned auditory-visual equivalences are sufficient prerequisites for reading…

  6. Tuning up the developing auditory CNS.

    Science.gov (United States)

    Sanes, Dan H; Bao, Shaowen

    2009-04-01

    Although the auditory system has limited information processing resources, the acoustic environment is infinitely variable. To properly encode the natural environment, the developing central auditory system becomes somewhat specialized through experience-dependent adaptive mechanisms that operate during a sensitive time window. Recent studies have demonstrated that cellular and synaptic plasticity occurs throughout the central auditory pathway. Acoustic-rearing experiments can lead to an over-representation of the exposed sound frequency, and this is associated with specific changes in frequency discrimination. These forms of cellular plasticity are manifest in brain regions, such as midbrain and cortex, which interact through feed-forward and feedback pathways. Hearing loss leads to a profound re-weighting of excitatory and inhibitory synaptic gain throughout the auditory CNS, and this is associated with an over-excitability that is observed in vivo. Further behavioral and computational analyses may provide insights into how these cellular and systems plasticity effects underlie the development of cognitive functions such as speech perception.

  7. Auditory Integration Training: The Magical Mystery Cure.

    Science.gov (United States)

    Tharpe, Anne Marie

    1999-01-01

    This article notes the enthusiastic reception received by auditory integration training (AIT) for children with a wide variety of disorders including autism but raises concerns about this alternative treatment practice. It offers reasons for cautious evaluation of AIT prior to clinical implementation and summarizes current research findings. (DB)

  8. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  9. Development of Receiver Stimulator for Auditory Prosthesis

    Directory of Open Access Journals (Sweden)

    K. Raja Kumar

    2010-05-01

    Full Text Available The Auditory Prosthesis (AP) is an electronic device that can provide hearing sensations to people who are profoundly deaf by stimulating the auditory nerve via an array of electrodes with an electric current, allowing them to understand speech. The AP system consists of two hardware functional units: a Body Worn Speech Processor (BWSP) and a Receiver Stimulator. The prototype model of the Receiver Stimulator for Auditory Prosthesis (RSAP) consists of a Speech Data Decoder, DAC, ADC, constant current generator, electrode selection logic, switch matrix, and simulated electrode resistance array. The laboratory model of the speech processor is designed to implement the Continuous Interleaved Sampling (CIS) speech processing algorithm, which generates the information required for electrode stimulation based on the speech/audio data. The Speech Data Decoder receives the encoded speech data from the speech processor via an inductive RF transcutaneous link. Twelve channels of the auditory prosthesis with eight selectable electrodes for stimulation of the simulated electrode resistance array are used for testing. The RSAP is validated using test data generated by the laboratory prototype of the speech processor. The experimental results were obtained from specific speech/sound tests using a high-speed data acquisition system and found satisfactory.
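
    The CIS strategy mentioned in the record follows a generic signal chain: a bandpass filterbank, envelope extraction, amplitude compression, and per-channel pulses delivered in non-overlapping time slots. The sketch below illustrates that chain in Python; filter orders, corner frequencies, and the compression constant are assumptions for illustration, not the device's actual parameters.

        import numpy as np
        from scipy.signal import butter, sosfilt

        def cis_envelopes(audio, fs, n_channels=12, lo=200.0, hi=7000.0):
            """Generic CIS front end: log-spaced bandpass bank, rectification,
            low-pass envelope smoothing, and logarithmic compression per channel."""
            edges = np.geomspace(lo, hi, n_channels + 1)
            env_lp = butter(2, 400.0, btype="low", fs=fs, output="sos")
            out = []
            for k in range(n_channels):
                band = butter(4, [edges[k], edges[k + 1]], btype="band",
                              fs=fs, output="sos")
                x = sosfilt(band, audio)                 # band-limit
                x = np.abs(x)                            # full-wave rectify
                x = sosfilt(env_lp, x)                   # smooth to an envelope
                out.append(np.log1p(50.0 * np.clip(x, 0.0, None)))  # compress
            return np.array(out)                         # (n_channels, n_samples)

        # Toy input: a 1 kHz tone; a real processor would use microphone samples.
        fs = 16000
        t = np.arange(0, 0.1, 1 / fs)
        env = cis_envelopes(np.sin(2 * np.pi * 1000 * t), fs)

        # Interleaving: in each stimulation frame the channels fire one after
        # another (never simultaneously), each scaled by its current envelope.
        frame = env[:, env.shape[1] // 2]
        for slot, amp in enumerate(frame):
            print(f"time slot {slot:2d} -> channel {slot + 1:2d}, pulse amplitude {amp:.3f}")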

  10. Auditory Processing Disorder: School Psychologist Beware?

    Science.gov (United States)

    Lovett, Benjamin J.

    2011-01-01

    An increasing number of students are being diagnosed with auditory processing disorder (APD), but the school psychology literature has largely neglected this controversial condition. This article reviews research on APD, revealing substantial concerns with assessment tools and diagnostic practices, as well as insufficient research regarding many…

  11. The Goldilocks Effect in Infant Auditory Attention

    Science.gov (United States)

    Kidd, Celeste; Piantadosi, Steven T.; Aslin, Richard N.

    2014-01-01

    Infants must learn about many cognitive domains (e.g., language, music) from auditory statistics, yet capacity limits on their cognitive resources restrict the quantity that they can encode. Previous research has established that infants can attend to only a subset of available acoustic input. Yet few previous studies have directly examined infant…

  12. Auditory Training with Frequent Communication Partners

    Science.gov (United States)

    Tye-Murray, Nancy; Spehar, Brent; Sommers, Mitchell; Barcroft, Joe

    2016-01-01

    Purpose: Individuals with hearing loss engage in auditory training to improve their speech recognition. They typically practice listening to utterances spoken by unfamiliar talkers but never to utterances spoken by their most frequent communication partner (FCP)--speech they most likely desire to recognize--under the assumption that familiarity…

  13. Auditory and visual scene analysis: an overview

    Science.gov (United States)

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044011

  14. Affective Priming with Auditory Speech Stimuli

    Science.gov (United States)

    Degner, Juliane

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In Experiment 2, stimulus onset asynchrony (SOA) was…

  15. Affective priming with auditory speech stimuli

    NARCIS (Netherlands)

    Degner, J.

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In

  16. Auditory pathology in cri-du-chat (5p-) syndrome: phenotypic evidence for auditory neuropathy.

    Science.gov (United States)

    Swanepoel, D

    2007-10-01

    5p-(cri-du-chat syndrome) is a well-defined clinical entity presenting with phenotypic and cytogenetic variability. Despite recognition that abnormalities in audition are common, limited reports on auditory functioning in affected individuals are available. The current study presents a case illustrating the auditory functioning in a 22-month-old patient diagnosed with 5p- syndrome, karyotype 46,XX,del(5)(p13). Auditory neuropathy was diagnosed based on abnormal auditory evoked potentials with neural components suggesting severe to profound hearing loss in the presence of cochlear microphonic responses and behavioral reactions to sound at mild to moderate hearing levels. The current case and a review of available reports indicate that auditory neuropathy or neural dys-synchrony may be another phenotype of the condition possibly related to abnormal expression of the protein beta-catenin mapped to 5p. Implications are for routine and diagnostic specific assessments of auditory functioning and for employment of non-verbal communication methods in early intervention.

  17. Interhemispheric auditory connectivity: structure and function related to auditory verbal hallucinations.

    Science.gov (United States)

    Steinmann, Saskia; Leicht, Gregor; Mulert, Christoph

    2014-01-01

    Auditory verbal hallucinations (AVH) are one of the most common and most distressing symptoms of schizophrenia. Despite fundamental research, the underlying neurocognitive and neurobiological mechanisms are still a matter of debate. Previous studies suggested that "hearing voices" is associated with a number of factors including local deficits in the left auditory cortex and a disturbed connectivity of frontal and temporoparietal language-related areas. In addition, it is hypothesized that the interhemispheric pathways connecting right and left auditory cortices might be involved in the pathogenesis of AVH. Findings based on Diffusion-Tensor-Imaging (DTI) measurements revealed a remarkable interindividual variability in size and shape of the interhemispheric auditory pathways. Interestingly, schizophrenia patients suffering from AVH exhibited higher fractional anisotropy (FA) in the interhemispheric fibers than non-hallucinating patients. Thus, higher FA-values indicate an increased severity of AVH. Moreover, a dichotic listening (DL) task showed that the interindividual variability in the interhemispheric auditory pathways was reflected in the behavioral outcome: stronger pathways supported a better information transfer and consequently improved speech perception. This finding indicates a specific structure-function relationship, which seems to be interindividually variable. This review focuses on recent findings concerning the structure-function relationship of the interhemispheric pathways in controls, hallucinating and non-hallucinating schizophrenia patients and concludes that changes in the structural and functional connectivity of auditory areas are involved in the pathophysiology of AVH.

  18. Representation of Reward Feedback in Primate Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Michael eBrosch

    2011-02-01

    Full Text Available It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that are related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in a downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward size received and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than previously recognized.

  19. Representation of reward feedback in primate auditory cortex.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that are related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in a downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than previously recognized.
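
    The trial-by-trial reward schedule described above (a large reward after a correct previous trial, a small one after an incorrect previous trial) can be summarized in a short sketch. This is illustrative only; the function name and reward volumes are assumptions, not values reported by the study.

        # Illustrative sketch of the adaptive reward schedule described above.
        # Reward volumes (ml) are hypothetical placeholders, not study values.
        def reward_for_trial(previous_trial_correct, large_ml=0.5, small_ml=0.1):
            """Return the water reward for a correct trial, given the outcome of
            the previous trial: large after a correct trial, small after an
            incorrect one."""
            return large_ml if previous_trial_correct else small_ml

        # Example: a monkey that erred on the last trial earns the small reward now.
        print(reward_for_trial(previous_trial_correct=False))  # -> 0.1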

  20. Exploring human brain lateralization with molecular genetics and genomics.

    Science.gov (United States)

    Francks, Clyde

    2015-11-01

    Lateralizations of brain structure and motor behavior have been observed in humans as early as the first trimester of gestation, and are likely to arise from asymmetrical genetic-developmental programs, as in other animals. Studies of gene expression levels in postmortem tissue samples, comparing the left and right sides of the human cerebral cortex, have generally not revealed striking transcriptional differences between the hemispheres. This is likely due to lateralization of gene expression being subtle and quantitative. However, a recent re-analysis and meta-analysis of gene expression data from the adult superior temporal and auditory cortex found lateralization of transcription of genes involved in synaptic transmission and neuronal electrophysiology. Meanwhile, human subcortical mid- and hindbrain structures have not been well studied in relation to lateralization of gene activity, despite being potentially important developmental origins of asymmetry. Genetic polymorphisms with small effects on adult brain and behavioral asymmetries are beginning to be identified through studies of large datasets, but the core genetic mechanisms of lateralized human brain development remain unknown. Identifying subtly lateralized genetic networks in the brain will lead to a new understanding of how neuronal circuits on the left and right are differently fine-tuned to preferentially support particular cognitive and behavioral functions.

  1. Functional Connectivity Studies Of Patients With Auditory Verbal Hallucinations

    Directory of Open Access Journals (Sweden)

    Ralph E Hoffman

    2012-01-01

    Full Text Available Functional connectivity (FC) studies of brain mechanisms leading to auditory verbal hallucinations (AVHs) utilizing functional magnetic resonance imaging (fMRI) data are reviewed. Initial FC studies utilized fMRI data collected during performance of various tasks, which suggested frontotemporal disconnection and/or source-monitoring disturbances. Later FC studies have utilized resting (no-task) fMRI data. These studies have produced a mixed picture of disconnection and hyperconnectivity involving different pathways associated with AVHs. Results of our most recent FC study of AVHs are reviewed in detail. This study suggests that the core mechanism producing AVHs involves not a single pathway, but a more complex functional loop. Components of this loop include Wernicke’s area and its right homologue, the left inferior frontal cortex, and the putamen. It is noteworthy that the putamen appears to play a critical role in the generation of spontaneous language, and in determining whether auditory stimuli are registered consciously as percepts. Excessive functional coordination linking this region with the Wernicke’s seed region in patients with schizophrenia could therefore generate an overabundance of potentially conscious language representations. In our model, intact FC in the other two legs of the corticostriatal loop (Wernicke’s with left IFG, and left IFG with putamen) appeared to allow this disturbance (common to schizophrenia overall) to be expressed as a conscious hallucination of speech. Recommendations for future studies are discussed, including inclusion of multiple methodologies applied to the same subjects in order to compare and contrast different mechanistic hypotheses, utilizing EEG to better parse the time-course of neural synchronization leading to AVHs, and ascertaining experiential subtypes of AVHs that may reflect distinct mechanisms.
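
    Seed-based FC analyses of the kind reviewed above typically quantify coupling as the correlation between regional fMRI time courses. A minimal sketch follows; the region names, array lengths and random data are illustrative assumptions, not the authors' pipeline.

        import numpy as np

        def seed_connectivity(seed_ts, target_ts):
            """Pearson correlation between two regional BOLD time courses
            (1-D arrays of equal length), a common functional-connectivity index."""
            return np.corrcoef(seed_ts, target_ts)[0, 1]

        # Toy data standing in for Wernicke's-area and putamen signals (200 volumes).
        rng = np.random.default_rng(0)
        wernicke = rng.standard_normal(200)
        putamen = 0.4 * wernicke + rng.standard_normal(200)
        print(round(seed_connectivity(wernicke, putamen), 2))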

  2. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory results.
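
    Frequency tagging rests on estimating the response power at each stream's modulation frequency. A minimal sketch of that estimation from a single sensor or source time course is given below; the sampling rate, tag frequency and synthetic signal are illustrative assumptions, not the study's data or analysis code.

        import numpy as np

        def tagged_power(signal, fs, tag_hz):
            """Power of `signal` at the tagging frequency `tag_hz`, estimated
            from the FFT of the whole (Hann-windowed) epoch."""
            spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            idx = np.argmin(np.abs(freqs - tag_hz))
            return np.abs(spectrum[idx]) ** 2

        # Toy example: a 40 Hz steady-state component sampled at 1000 Hz for 2 s.
        fs = 1000.0
        t = np.arange(0, 2.0, 1.0 / fs)
        signal = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(t.size)
        print(tagged_power(signal, fs, 40.0) > tagged_power(signal, fs, 35.0))  # True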

  3. Comparison of Electrophysiological Auditory Measures in Fishes.

    Science.gov (United States)

    Maruska, Karen P; Sisneros, Joseph A

    2016-01-01

    Sounds provide fishes with important information used to mediate behaviors such as predator avoidance, prey detection, and social communication. How we measure auditory capabilities in fishes, therefore, has crucial implications for interpreting how individual species use acoustic information in their natural habitat. Recent analyses have highlighted differences between behavioral and electrophysiologically determined hearing thresholds, but less is known about how physiological measures at different auditory processing levels compare within a single species. Here we provide one of the first comparisons of auditory threshold curves determined by different recording methods in a single fish species, the soniferous Hawaiian sergeant fish Abudefduf abdominalis, and review past studies on representative fish species with tuning curves determined by different methods. The Hawaiian sergeant is a colonial benthic-spawning damselfish (Pomacentridae) that produces low-frequency, low-intensity sounds associated with reproductive and agonistic behaviors. We compared saccular potentials, auditory evoked potentials (AEP), and single neuron recordings from acoustic nuclei of the hindbrain and midbrain torus semicircularis. We found that hearing thresholds were lowest at low frequencies (~75-300 Hz) for all methods, which matches the spectral components of sounds produced by this species. However, thresholds at best frequency determined via single cell recordings were ~15-25 dB lower than those measured by AEP and saccular potential techniques. While none of these physiological techniques gives us a true measure of the auditory "perceptual" abilities of a naturally behaving fish, this study highlights that different methodologies can reveal a similar detectable range of frequencies for a given species, but absolute hearing sensitivity may vary considerably.

  4. Impairments of auditory scene analysis in Alzheimer's disease.

    Science.gov (United States)

    Goll, Johanna C; Kim, Lois G; Ridgway, Gerard R; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling H; Crutch, Sebastian J; Warren, Jason D

    2012-01-01

    Parsing of sound sources in the auditory environment or 'auditory scene analysis' is a computationally demanding cognitive operation that is likely to be vulnerable to the neurodegenerative process in Alzheimer's disease. However, little information is available concerning auditory scene analysis in Alzheimer's disease. Here we undertook a detailed neuropsychological and neuroanatomical characterization of auditory scene analysis in a cohort of 21 patients with clinically typical Alzheimer's disease versus age-matched healthy control subjects. We designed a novel auditory dual stream paradigm based on synthetic sound sequences to assess two key generic operations in auditory scene analysis (object segregation and grouping) in relation to simpler auditory perceptual, task and general neuropsychological factors. In order to assess neuroanatomical associations of performance on auditory scene analysis tasks, structural brain magnetic resonance imaging data from the patient cohort were analysed using voxel-based morphometry. Compared with healthy controls, patients with Alzheimer's disease had impairments of auditory scene analysis, and segregation and grouping operations were comparably affected. Auditory scene analysis impairments in Alzheimer's disease were not wholly attributable to simple auditory perceptual or task factors; however, the between-group difference relative to healthy controls was attenuated after accounting for non-verbal (visuospatial) working memory capacity. These findings demonstrate that clinically typical Alzheimer's disease is associated with a generic deficit of auditory scene analysis. Neuroanatomical associations of auditory scene analysis performance were identified in posterior cortical areas including the posterior superior temporal lobes and posterior cingulate. This work suggests a basis for understanding a class of clinical symptoms in Alzheimer's disease and for delineating cognitive mechanisms that mediate auditory scene analysis

  5. [Lateral lumbar disk herniation].

    Science.gov (United States)

    Deburge, A; Barre, E; Guigui, P

    Forty-one lateral discal hernias observed between 1984 and 1991 were studied retrospectively among the 1080 discal hernias treated during this period. CT scan, performed in all cases, distinguished several different types of hernia: foraminal hernias (26), extraforaminal hernias (12), and mixed forms (5) associated with a canal component (11). Thirteen disco scans were required. Nucleolysis was performed in 24 patients (58%) and surgical treatment was the first-intention choice in 17 (41%). Outcome, evaluated with a function score developed in the unit, was good in the 17 surgery cases (100%). In the nucleolysis patients, results were good or excellent in 13, average in 4, and poor in 7. Five of the nucleolysis failures were later operated on, leading to good results in 3, average in 1, and no change in 1. Indications for surgery are more frequent in this type of discal hernia, and results in our surgical series were better than those for chemonucleolysis.

  6. Treatment of lateral epicondylitis.

    Science.gov (United States)

    Johnson, Greg W; Cadwallader, Kara; Scheffel, Scot B; Epperly, Ted D

    2007-09-15

    Lateral epicondylitis is a common overuse syndrome of the extensor tendons of the forearm. It is sometimes called tennis elbow, although it can occur with many activities. The condition affects men and women equally and is more common in persons 40 years or older. Despite the prevalence of lateral epicondylitis and the numerous treatment strategies available, relatively few high-quality clinical trials support many of these treatment options; watchful waiting is a reasonable option. Topical nonsteroidal anti-inflammatory drugs, corticosteroid injections, ultrasonography, and iontophoresis with nonsteroidal anti-inflammatory drugs appear to provide short-term benefits. Use of an inelastic, nonarticular, proximal forearm strap (tennis elbow brace) may improve function during daily activities. Progressive resistance exercises may confer modest intermediate-term results. Evidence is mixed on oral nonsteroidal antiinflammatory drugs, mobilization, and acupuncture. Patients with refractory symptoms may benefit from surgical intervention. Extracorporeal shock wave therapy, laser treatment, and electromagnetic field therapy do not appear to be effective.

  7. Lateral Attitude Change.

    Science.gov (United States)

    Glaser, Tina; Dickel, Nina; Liersch, Benjamin; Rees, Jonas; Süssenbach, Philipp; Bohner, Gerd

    2015-08-01

    The authors propose a framework distinguishing two types of lateral attitude change (LAC): (a) generalization effects, where attitude change toward a focal object transfers to related objects, and (b) displacement effects, where only related attitudes change but the focal attitude does not change. They bring together examples of LAC from various domains of research, outline the conditions and underlying processes of each type of LAC, and develop a theoretical framework that enables researchers to study LAC more systematically in the future. Compared with established theories of attitude change, the LAC framework focuses on lateral instead of focal attitude change and encompasses both generalization and displacement. Novel predictions and designs for studying LAC are presented.

  8. The lateral angle revisited

    DEFF Research Database (Denmark)

    Morgan, Jeannie; Lynnerup, Niels; Hoppa, R.D.

    2013-01-01

    measurements taken from computed tomography (CT) scans. Previous reports have observed that the lateral angle size in females is significantly larger than in males. The method was applied to an independent series of 77 postmortem CT scans (42 males, 35 females) to validate its accuracy and reliability...... method appears to be of minimal practical use in forensic anthropology and archeology. © 2013 American Academy of Forensic Sciences....

  9. “Lateral Control”

    DEFF Research Database (Denmark)

    Rasmussen, Hanne Nina; Veierskov, Bjarke; Hansen-Møller, Jens;

    2010-01-01

    pattern changes followed from destipitation, but few from decapitation. Growth reactions suggest that resource allocation to main branch buds inhibits leader growth in normal trees, a kind of “lateral control.” Auxin and ABA content in buds and stems was largely unaffected by treatments. Data suggest...... that subapical leader tissues beneath the apical bud group are a primary source of cytokinin regulation. Keywords ABA - Apical control - Auxin - Bud development - Cytokinin - Plant architecture...

  10. Lateral Elbow Tendinopathy

    Science.gov (United States)

    Bhabra, Gev; Wang, Allan; Ebert, Jay R.; Edwards, Peter; Zheng, Monica; Zheng, Ming H.

    2016-01-01

    Lateral elbow tendinopathy, commonly known as tennis elbow, is a condition that can cause significant functional impairment in working-age patients. The term tendinopathy is used to describe chronic overuse tendon disorders encompassing a group of pathologies, a spectrum of disease. This review details the pathophysiology of tendinopathy and tendon healing as an introduction for a system grading the severity of tendinopathy, with each of the 4 grades displaying distinct histopathological features. Currently, there are a large number of nonoperative treatments available for lateral elbow tendinopathy, with little guidance as to when and how to use them. In fact, an appraisal of the clinical trials, systematic reviews, and meta-analyses studying these treatment modalities reveals that no single treatment reliably achieves outstanding results. This may be due in part to the majority of clinical studies to date including all patients with chronic tendinopathy rather than attempting to categorize patients according to the severity of disease. We relate the pathophysiology of the different grades of tendinopathy to the basic science principles that underpin the mechanisms of action of the nonoperative treatments available to propose a treatment algorithm guiding the management of lateral elbow tendinopathy depending on severity. We believe that this system will be useful both in clinical practice and for the future investigation of the efficacy of treatments. PMID:27833925

  11. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  12. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  13. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  14. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus...... level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters was also...... in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...
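
    Loudness recruitment, one consequence of the altered loudness growth mentioned above, can be illustrated with a deliberately simplified sketch: an elevated threshold with steeper growth above it, so that loudness approaches normal at high levels. This is not the thesis model itself; the 100 dB "full loudness" ceiling and linear growth are illustrative assumptions.

        # Simplified sketch of loudness recruitment (not the thesis model):
        # loudness grows from threshold up to a fixed "full loudness" level of
        # roughly 100 dB SPL, so an elevated threshold steepens the growth.
        def loudness_growth(level_db, threshold_db):
            full_db = 100.0                    # approximate level of full loudness
            if level_db <= threshold_db:
                return 0.0                     # inaudible below threshold
            slope = 1.0 / (full_db - threshold_db)
            return min(1.0, (level_db - threshold_db) * slope)  # 0 = inaudible, 1 = full

        # A 70 dB tone for a normal ear (0 dB threshold) vs a 50 dB hearing loss.
        print(loudness_growth(70, 0), loudness_growth(70, 50))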

  15. Deafness in cochlear and auditory nerve disorders.

    Science.gov (United States)

    Hopkins, Kathryn

    2015-01-01

    Sensorineural hearing loss is the most common type of hearing impairment worldwide. It arises as a consequence of damage to the cochlea or auditory nerve, and several structures are often affected simultaneously. There are many causes, including genetic mutations affecting the structures of the inner ear, and environmental insults such as noise, ototoxic substances, and hypoxia. The prevalence increases dramatically with age. Clinical diagnosis is most commonly accomplished by measuring detection thresholds and comparing these to normative values to determine the degree of hearing loss. In addition to causing insensitivity to weak sounds, sensorineural hearing loss has a number of adverse perceptual consequences, including loudness recruitment, poor perception of pitch and auditory space, and difficulty understanding speech, particularly in the presence of background noise. The condition is usually incurable; treatment focuses on restoring the audibility of sounds made inaudible by hearing loss using either hearing aids or cochlear implants.

  16. Anatomy and Physiology of the Auditory Tracts

    Directory of Open Access Journals (Sweden)

    Mohammad hosein Hekmat Ara

    1999-03-01

    Full Text Available Hearing is one of the most excellent senses of human beings. Sound waves travel through the medium of air, enter the ear canal and then hit the tympanic membrane. The middle ear transfers almost 60-80% of this mechanical energy to the inner ear by means of "impedance matching". The sound energy then changes into a traveling wave that is transferred according to its specific frequency and stimulates the organ of Corti. Receptors in this organ and their synapses transform the mechanical waves into neural signals and transfer them to the brain. The central nervous system pathways conducting the auditory signals to the auditory cortex are briefly explained here.
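
    The need for the "impedance matching" mentioned above can be illustrated with standard textbook numbers; the values below are common approximations and are not taken from this article.

        import math

        # Rough textbook illustration of why impedance matching is needed
        # (approximate standard values, not from the article above).
        z_air = 415.0        # specific acoustic impedance of air, Pa*s/m
        z_cochlea = 1.5e6    # cochlear fluids, roughly that of water, Pa*s/m

        # Fraction of incident power transmitted across a bare air/fluid boundary.
        t_unmatched = 4 * z_air * z_cochlea / (z_air + z_cochlea) ** 2
        print(f"without a middle ear: {t_unmatched:.1%} of the energy is transmitted")

        # The eardrum-to-stapes-footplate area ratio (~17:1) and the ossicular lever
        # (~1.3:1) raise the pressure by roughly 17 * 1.3, i.e. about 27 dB, which
        # recovers most of the energy that would otherwise be reflected.
        pressure_gain = 17 * 1.3
        print(f"middle-ear pressure gain: ~{20 * math.log10(pressure_gain):.0f} dB")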

  17. Modeling auditory evoked potentials to complex stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch

    The auditory evoked potential (AEP) is an electrical signal that can be recorded from electrodes attached to the scalp of a human subject when a sound is presented. The signal is considered to reflect neural activity in response to the acoustic stimulation and is a well established clinical...... clinically and in research towards using realistic and complex stimuli, such as speech, to electrophysiologically assess the human hearing. However, to interpret AEP generation in response to complex sounds, the potential patterns in response to simple stimuli need to be understood. Therefore, the model was used...... to simulate auditory brainstem responses (ABRs) evoked by classic stimuli like clicks, tone bursts and chirps. The ABRs to these simple stimuli were compared to literature data and the model was shown to predict the frequency dependence of tone-burst ABR wave-V latency and the level-dependence of ABR wave
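
    AEP models of this general kind often obtain the simulated scalp potential by convolving a summed pattern of neural activity with a "unitary response" waveform. The sketch below shows only that convolution step under that assumption; the waveforms and the helper name simulate_aep are placeholders, not the thesis implementation.

        import numpy as np

        def simulate_aep(summed_activity, unitary_response):
            """Sketch of a common AEP modelling step: the scalp potential is
            approximated as the convolution of summed neural activity with a
            'unitary response' waveform (both 1-D arrays, same sampling rate)."""
            return np.convolve(summed_activity, unitary_response)[:len(summed_activity)]

        # Placeholder waveforms with purely illustrative shapes.
        fs = 10000.0                                   # Hz
        t = np.arange(0, 0.01, 1 / fs)                 # 10 ms
        activity = np.exp(-((t - 0.002) / 0.0005) ** 2)              # brief activity burst
        unitary = np.exp(-t / 0.002) * np.sin(2 * np.pi * 500 * t)   # damped oscillation
        aep = simulate_aep(activity, unitary)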

  18. Neurophysiological mechanisms involved in auditory perceptual organization

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-09-01

    Full Text Available In our complex acoustic environment, we are confronted with a mixture of sounds produced by several simultaneous sources. However, we rarely perceive these sounds as incomprehensible noise. Our brain uses perceptual organization processes to independently follow the emission of each sound source over time. While the acoustic properties exploited in these processes are well established, the neurophysiological mechanisms involved in auditory scene analysis have raised interest only recently. Here, we review the studies investigating these mechanisms using electrophysiological recordings from the cochlear nucleus to the auditory cortex, in animals and humans. Their findings reveal that basic mechanisms such as frequency selectivity, forward suppression and multi-second habituation shape the automatic brain responses to sounds in a way that can account for several important characteristics of perceptual organization of both simultaneous and successive sounds. One challenging question remains unresolved: how are the resulting activity patterns integrated to yield the corresponding conscious percepts?

  19. Cognitive mechanisms associated with auditory sensory gating.

    Science.gov (United States)

    Jones, L A; Hills, P J; Dick, K M; Jones, S P; Bright, P

    2016-02-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification.
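
    In the paired-stimulus (S1-S2) paradigm mentioned above, gating is conventionally expressed as the ratio of the P50 amplitude to the second stimulus over the first. A minimal sketch follows; the amplitude values and function name are illustrative assumptions, and the details of amplitude extraction are omitted.

        def p50_gating_ratio(p50_s1_uv, p50_s2_uv):
            """Conventional sensory-gating index: S2/S1 amplitude ratio.
            Smaller ratios indicate stronger gating (more suppression of S2)."""
            if p50_s1_uv == 0:
                raise ValueError("S1 amplitude must be non-zero")
            return p50_s2_uv / p50_s1_uv

        # Example: a 4 microvolt S1 response and a 1 microvolt S2 response give
        # a ratio of 0.25, i.e. substantial gating.
        print(p50_gating_ratio(4.0, 1.0))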

  20. The lateral line microcosmos.

    Science.gov (United States)

    Ghysen, Alain; Dambly-Chaudière, Christine

    2007-09-01

    The lateral-line system is a simple sensory system comprising a number of discrete sense organs, the neuromasts, distributed over the body of fish and amphibians in species-specific patterns. Its development involves fundamental biological processes such as long-range cell migration, planar cell polarity, regeneration, and post-embryonic remodeling. These aspects have been extensively studied in amphibians by experimental embryologists, but it is only recently that the genetic bases of this development have been explored in zebrafish. This review discusses progress made over the past few years in this field.

  1. Lesions in the external auditory canal

    Directory of Open Access Journals (Sweden)

    Priyank S Chatra

    2011-01-01

    Full Text Available The external auditory canal (EAC) is an S-shaped osseo-cartilaginous structure that extends from the auricle to the tympanic membrane. Congenital, inflammatory, neoplastic, and traumatic lesions can affect the EAC. High-resolution CT is well suited for the evaluation of the temporal bone, which has a complex anatomy with multiple small structures. In this study, we describe the various lesions affecting the EAC.

  2. Midbrain auditory selectivity to natural sounds.

    Science.gov (United States)

    Wohlgemuth, Melville J; Moss, Cynthia F

    2016-03-01

    This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of acoustic signals used by the bat for spatial orientation (sonar vocalizations), offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors.
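
    The cross-stimulus GLM test described above can be illustrated with a Poisson GLM fit on spike counts from stimulus features. The feature set, library choice and synthetic data below are illustrative assumptions, not the authors' implementation; the point is only the fit-on-one-class, predict-on-another logic.

        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        # Toy stand-ins: rows = stimulus time bins, columns = spectrotemporal
        # features (e.g. energy in a few frequency bands); y = spike counts.
        rng = np.random.default_rng(1)
        X = rng.random((500, 8))
        true_w = np.array([1.0, 0.0, 0.5, 0.0, 0.0, 0.8, 0.0, 0.0])
        y = rng.poisson(np.exp(X @ true_w - 1.0))

        glm = PoissonRegressor(alpha=0.1).fit(X, y)   # fit on one stimulus class...
        predicted = glm.predict(X)                    # ...then test on the other class
        # A failure of this cross-class prediction is what suggested nonlinear
        # selectivity in dorsal SC neurons in the study above.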

  3. Response recovery in the locust auditory pathway.

    Science.gov (United States)

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single click and click pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first and second order interneurons. The response to the second click was expressed relative to the single click response. This allowed the uncovering of the basic temporal resolution in these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominently a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by a click a few milliseconds before. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period.
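
    The recovery measure described above (the response to the second click of a pair expressed relative to the single-click response) can be sketched as follows; the spike counts and inter-click intervals are made-up values for illustration only.

        import numpy as np

        def recovery(pair_second_click_spikes, single_click_spikes):
            """Recovery index: mean response to the second click of a pair,
            relative to the mean response to a single click.
            1.0 = full recovery, <1 = suppression, >1 = facilitation."""
            return np.mean(pair_second_click_spikes) / np.mean(single_click_spikes)

        # Hypothetical spike counts for one neuron at a few inter-click intervals (ms).
        single = [5, 6, 5, 7]
        second_click = {10: [1, 2, 1, 2], 50: [3, 4, 3, 4], 200: [5, 6, 6, 5]}
        for interval_ms, counts in second_click.items():
            print(interval_ms, round(recovery(counts, single), 2))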

  4. Brainstem auditory evoked response: application in neurology

    Directory of Open Access Journals (Sweden)

    Carlos A. M. Guerreiro

    1982-03-01

    Full Text Available The technique that we use for eliciting brainstem auditory evoked responses (BAERs) is described. BAERs are a non-invasive and reliable clinical test when carefully performed. This test is indicated in the evaluation of disorders which may potentially involve the brainstem such as coma, multiple sclerosis, posterior fossa tumors and others. Unsuspected lesions with normal radiologic studies (including CT scan) can be revealed by the BAER.

  5. Cognitive mechanisms associated with auditory sensory gating

    OpenAIRE

    Jones, L. A.; Hills, P.J.; Dick, K.M.; Jones, S. P.; Bright, P

    2015-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants addit...

  6. A neurophysiological deficit in early visual processing in schizophrenia patients with auditory hallucinations.

    Science.gov (United States)

    Kayser, Jürgen; Tenke, Craig E; Kroppmann, Christopher J; Alschuler, Daniel M; Fekri, Shiva; Gil, Roberto; Jarskog, L Fredrik; Harkavy-Friedman, Jill M; Bruder, Gerard E

    2012-09-01

    Existing 67-channel event-related potentials, obtained during recognition and working memory paradigms with words or faces, were used to examine early visual processing in schizophrenia patients prone to auditory hallucinations (AH, n = 26) or not (NH, n = 49) and healthy controls (HC, n = 46). Current source density (CSD) transforms revealed distinct, strongly left- (words) or right-lateralized (faces; N170) inferior-temporal N1 sinks (150 ms) in each group. N1 was quantified by temporal PCA of peak-adjusted CSDs. For words and faces in both paradigms, N1 was substantially reduced in AH compared with NH and HC, who did not differ from each other. The difference in N1 between AH and NH was not due to overall symptom severity or performance accuracy, with both groups showing comparable memory deficits. Our findings extend prior reports of reduced auditory N1 in AH, suggesting a broader early perceptual integration deficit that is not limited to the auditory modality.

  7. Neural mechanisms of intermodal sustained selective attention with concurrently presented auditory and visual stimuli

    Directory of Open Access Journals (Sweden)

    Katja Saupe

    2009-11-01

    Full Text Available We investigated intermodal attention effects on the auditory steady-state response (ASSR) and the steady-state visual evoked potential (SSVEP). For this purpose, 40 Hz amplitude modulated tones and a stream of flickering (7.5 Hz) random letters were presented concurrently. By means of an auditory or visual target detection task, participants’ attention was directed to the respective modality for several seconds. Attention to the auditory stream led to a significant enhancement of the ASSR compared to when the visual stream was attended. This attentional modulation was located mainly in the right superior temporal gyrus. Vice versa, attention to the visual stream especially increased the second harmonic response of the SSVEP. This modulation was focused in the inferior occipital and lateral occipitotemporal gyrus of both hemispheres. To the best of our knowledge, this is the first demonstration of amplitude modulation of the ASSR and the SSVEP by intermodal sustained attention. Our results open a new avenue of research to understand the basic neural mechanisms of intermodal attention in the human brain.

  8. Comparison of Auditory Brainstem Response in Noise Induced Tinnitus and Non-Tinnitus Control Subjects

    Directory of Open Access Journals (Sweden)

    Ghassem Mohammadkhani

    2009-12-01

    Full Text Available Background and Aim: Tinnitus is an unpleasant sound which can cause behavioral disorders. According to the evidence, the origin of tinnitus lies not only in the peripheral but also in the central auditory system, so evaluation of central auditory system function is necessary. In this study, auditory brainstem responses (ABR) were compared between noise-induced tinnitus and non-tinnitus control subjects. Materials and Methods: This cross-sectional, descriptive and analytic study was conducted on 60 cases in two groups, including 30 noise-induced tinnitus and 30 non-tinnitus control subjects. ABRs were recorded ipsilaterally and contralaterally and their latencies and amplitudes were analyzed. Results: Mean interpeak latencies of III-V (p=0.022) and I-V (p=0.033) in the ipsilateral electrode array, and mean absolute latencies of waves IV (p=0.015) and V (p=0.048) in the contralateral electrode array, were significantly increased in the noise-induced tinnitus group relative to the control group. Conclusion: It can be concluded that there are changes in neural transmission time in the brainstem and signs of involvement of the medial nuclei of the olivary complex in addition to the lateral lemniscus.
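
    Group comparisons of ABR latencies like those reported above are typically made with a two-sample test. The sketch below uses made-up latency values, not the study's data, simply to show the form of such a comparison.

        from scipy import stats

        # Hypothetical wave V absolute latencies (ms), contralateral electrode array.
        tinnitus_group = [6.1, 6.3, 6.2, 6.4, 6.3, 6.2]
        control_group  = [5.9, 6.0, 5.8, 6.0, 5.9, 6.1]

        # Independent-samples t-test (a Mann-Whitney U test would be the
        # non-parametric alternative for small or non-normal samples).
        t_stat, p_value = stats.ttest_ind(tinnitus_group, control_group)
        print(round(t_stat, 2), round(p_value, 4))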

  9. Repetition suppression and expectation suppression are dissociable in time in early auditory evoked fields.

    Science.gov (United States)

    Todorovic, Ana; de Lange, Floris P

    2012-09-26

    Repetition of a stimulus, as well as valid expectation that a stimulus will occur, both attenuate the neural response to it. These effects, repetition suppression and expectation suppression, are typically confounded in paradigms in which the nonrepeated stimulus is also relatively rare (e.g., in oddball blocks of mismatch negativity paradigms, or in repetition suppression paradigms with multiple repetitions before an alternation). However, recent hierarchical models of sensory processing inspire the hypothesis that the two might be separable in time, with repetition suppression occurring earlier, as a consequence of local transition probabilities, and suppression by expectation occurring later, as a consequence of learnt statistical regularities. Here we test this hypothesis in an auditory experiment by orthogonally manipulating stimulus repetition and stimulus expectation and, using magnetoencephalography, measuring the neural response over time in human subjects. We found that stimulus repetition (but not stimulus expectation) attenuates the early auditory response (40-60 ms), while stimulus expectation (but not stimulus repetition) attenuates the subsequent, intermediate stage of auditory processing (100-200 ms). These findings are well in line with hierarchical predictive coding models, which posit sequential stages of prediction error resolution, contingent on the level at which the hypothesis is generated.
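
    The orthogonal manipulation described above amounts to a 2 x 2 factorial comparison of evoked amplitudes in successive time windows. The sketch below uses made-up mean amplitudes that merely mimic the reported pattern (repetition attenuates the early window, expectation the later one); none of the numbers comes from the study.

        # Schematic 2 x 2 summary: stimulus repetition (repeated vs alternating)
        # crossed with expectation (expected vs unexpected). Values are made up.
        early_40_60_ms = {("repeated", "expected"): 0.8, ("repeated", "unexpected"): 0.8,
                          ("alternating", "expected"): 1.0, ("alternating", "unexpected"): 1.0}
        late_100_200_ms = {("repeated", "expected"): 0.8, ("repeated", "unexpected"): 1.0,
                           ("alternating", "expected"): 0.8, ("alternating", "unexpected"): 1.0}

        def main_effects(window):
            rep = sum(v for (r, e), v in window.items() if r == "repeated") / 2
            alt = sum(v for (r, e), v in window.items() if r == "alternating") / 2
            exp = sum(v for (r, e), v in window.items() if e == "expected") / 2
            unexp = sum(v for (r, e), v in window.items() if e == "unexpected") / 2
            return {"repetition effect": alt - rep, "expectation effect": unexp - exp}

        print(main_effects(early_40_60_ms))   # repetition effect only
        print(main_effects(late_100_200_ms))  # expectation effect only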

  10. Origin and immunolesioning of cholinergic basal forebrain innervation of cat primary auditory cortex.

    Science.gov (United States)

    Kamke, Marc R; Brown, Mel; Irvine, Dexter R F

    2005-08-01

    Numerous studies have implicated the cholinergic basal forebrain (cBF) in the modulation of auditory cortical responses. This study aimed to accurately define the sources of cBF input to primary auditory cortex (AI) and to assess the efficacy of a cholinergic immunotoxin in cat. Three anaesthetized cats received multiple injections of horseradish peroxidase-conjugated wheatgerm agglutinin into physiologically identified AI. Following one to two days' survival, tetramethylbenzidine histochemistry revealed the greatest number of retrogradely labeled cells in ipsilateral putamen, globus pallidus and internal capsule, and smaller numbers in more medial nuclei of the basal forebrain (BF). Concurrent choline acetyltransferase immunohistochemistry showed that almost 80% of the retrogradely labeled cells in BF were cholinergic, with the vast majority of these cells arising from the more lateral BF nuclei identified above. In the second part of the study, unilateral intraparenchymal injections of the cholinergic immunotoxin ME20.4-SAP were made into the putamen/globus pallidus nuclei of six cats. Immuno- and histochemistry revealed a massive reduction in the number of cholinergic cells in and around the targeted area, and a corresponding reduction in the density of cholinergic fibers in auditory cortex. These results are discussed in terms of their implications for investigations of the role of the cBF in cortical plasticity.

  11. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

    Full Text Available OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test - SSW) were applied to 13 children (7 boys, from 7 to 16 years) with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance showed significant differences in comparison to the control group in both kinds of tests. In the non-verbal test, identification at the ear opposite the lesion in the free-recall stage was diminished and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages was found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demand of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  12. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.
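
    The temporal-pattern manipulation described above can be sketched by generating onset times: roughly isochronous intervals for a walker's cadence versus geometrically shrinking intervals for a bouncing ball. The parameter values (cadence, first bounce interval, restitution) are illustrative assumptions, not stimulus parameters from the study.

        def walker_onsets(n_steps, cadence_s=0.55):
            """Roughly isochronous impact times, as for footsteps at a fixed cadence."""
            return [round(i * cadence_s, 3) for i in range(n_steps)]

        def bounce_onsets(n_bounces, first_interval_s=0.60, restitution=0.75):
            """Impact times for a bouncing ball: each interval shrinks by the
            coefficient of restitution, so impacts occur progressively closer."""
            times, t, interval = [0.0], 0.0, first_interval_s
            for _ in range(n_bounces - 1):
                t += interval
                times.append(round(t, 3))
                interval *= restitution
            return times

        print(walker_onsets(5))   # evenly spaced impacts
        print(bounce_onsets(5))   # progressively closer impacts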

  13. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  14. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/, and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression when observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.
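
    Contralateral suppression of TEOAEs, as used above, is usually quantified as the amplitude difference between the baseline and noise conditions within each frequency band. A minimal sketch with made-up amplitudes (not the study's data):

        # Made-up TEOAE amplitudes (dB SPL) per frequency band, baseline vs with
        # contralateral noise; suppression = baseline minus noise condition.
        baseline = {1000: 12.0, 2000: 10.5, 3000: 9.0, 4000: 8.0}
        with_noise = {1000: 10.2, 2000: 10.0, 3000: 8.7, 4000: 7.8}

        suppression_db = {f: round(baseline[f] - with_noise[f], 2) for f in baseline}
        print(suppression_db)   # largest effect at 1 kHz in this toy example,
                                # mirroring the frequency-specific result above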

  15. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  16. Auditory evoked potentials in peripheral vestibular disorder individuals

    Directory of Open Access Journals (Sweden)

    Matas, Carla Gentile

    2011-07-01

    Full Text Available Introduction: The auditory and vestibular systems are located in the same peripheral receptor; however, they enter the CNS and follow different pathways, creating a number of connections and reaching a wide area of the encephalon. Despite following different pathways, some disorders can impair both systems. Tests such as auditory evoked potentials can help establish a diagnosis when vestibular alterations are seen. Objective: To describe the auditory evoked potential results in individuals with peripheral vestibular disorders complaining of dizziness or vertigo and in normal individuals with the same complaint. Methods: Short-, middle- and long-latency auditory evoked potentials were performed in a transversal prospective study. Conclusion: Individuals complaining of dizziness or vertigo can show changes in the BAEP (Brainstem Auditory Evoked Potential), MLAEP (Medium Latency Auditory Evoked Potential) and P300.

  17. Creativity in later life.

    Science.gov (United States)

    Price, K A; Tinker, A M

    2014-08-01

    The ageing population presents significant challenges for the provision of social and health services. Strategies are needed to enable older people to cope within a society ill prepared for the impacts of these demographic changes. The ability to be creative may be one such strategy. This review outlines the relevant literature and examines current public health policy related to creativity in old age with the aim of highlighting some important issues. As well as looking at the benefits and negative aspects of creative activity in later life they are considered in the context of the theory of "successful ageing". Creative activity plays an important role in the lives of older people promoting social interaction, providing cognitive stimulation and giving a sense of self-worth. Furthermore, it is shown to be useful as a tool in the multi-disciplinary treatment of health problems common in later life such as depression and dementia. There are a number of initiatives to encourage older people to participate in creative activities such as arts-based projects which may range from visual arts to dance to music to intergenerational initiatives. However, participation shows geographical variation and often the responsibility of provision falls to voluntary organisations. Overall, the literature presented suggests that creative activity could be a useful tool for individuals and society. However, further research is needed to establish the key factors which contribute to patterns of improved health and well-being, as well as to explore ways to improve access to services.

  18. Brainmining emotive lateral solutions

    Directory of Open Access Journals (Sweden)

    Theodore Scaltsas

    2016-07-01

    Full Text Available BrainMining is a theory of creative thinking that shows how we should exploit the mind’s spontaneous natural disposition to use old solutions to address new problems – our Anchoring Cognitive Bias. BrainMining develops a simple and straightforward method to transform recalcitrant problems into types of problems which we have solved before, and then apply an old type of solution to them. The transformation makes the thinking lateral by matching up disparate types of problem and solution. It emphasises the role of emotive judgements that the agent makes, when she discerns whether a change of the values or the emotions and feelings in a situation, which would expand the space of solutions available for the problem at hand, would be acceptable or appropriate in the situation. A lateral solution for an intractable problem is thus spontaneously brainmined from the agent’s old solutions, to solve a transformed version of the intractable problem, possibly involving changes in the value system or the emotional profile of the situation, which the agent judges, emotively, will be acceptable, and even appropriate in the circumstances.

  19. Lateral Lumbar Interbody Fusion

    Science.gov (United States)

    Hughes, Alexander; Girardi, Federico; Sama, Andrew; Lebl, Darren; Cammisa, Frank

    2015-01-01

    The lateral lumbar interbody fusion (LLIF) is a relatively new technique that allows the surgeon to access the intervertebral space from a direct lateral approach, either anterior to or through the psoas muscle. This approach provides an alternative to anterior lumbar interbody fusion with instrumentation, posterior lumbar interbody fusion, and transforaminal lumbar interbody fusion for anterior column support. LLIF is minimally invasive and safe, offers better structural support from the apophyseal ring, the potential for coronal plane deformity correction, and indirect decompression, all of which have made this technique popular. LLIF is currently being utilized for a variety of pathologies including but not limited to adult de novo lumbar scoliosis, central and foraminal stenosis, spondylolisthesis, and adjacent segment degeneration. Although early clinical outcomes have been good, the potential for significant neurological, vascular, and vertebral endplate complications exists. Nevertheless, LLIF is a promising technique with the potential to more effectively treat complex adult de novo scoliosis and achieve predictable fusion while avoiding the complications of traditional anterior surgery and posterior interbody techniques. PMID:26713134

  20. Early Pavlovian conditioning impairs later Pavlovian conditioning.

    Science.gov (United States)

    Lariviere, N A; Spear, N E

    1996-11-01

    Four experiments tested the effects in the rat of very early experience with stimuli to be used later for Pavlovian conditioning. Beginning on postnatal Day 12, prior to the development of substantial detection and effective perception of visual and auditory stimuli, rats were given five daily experiences with either lights or tones and a footshock known to be an effective unconditioned stimulus at these ages. Twenty-four hours after the last of these experiences, pairings of either the light or tone and the unconditioned stimulus were given with parameters established to yield a moderate degree of conditioning in untreated preweanlings (Experiment 1). Experiment 2 determined that early experience with paired or unpaired presentations of either the light or tone and the unconditioned stimulus resulted in a failure to condition to these same lights or tones on postnatal Day 17, although nontreated pups from the same litters conditioned quite effectively. Experiment 3 determined that this early conditioning experience with either paired or unpaired presentations of the lights or tones and the unconditioned stimulus yielded impaired conditioning on postnatal Day 17 in the alternative sensory modality as well, although again nontreated siblings conditioned quite effectively. Experiment 4 replicated the results of each of Experiments 2 and 3 and determined in addition that despite the impairment in conditioning that resulted from early paired or unpaired experience with the stimuli of conditioning, early experience with the individual stimuli of conditioning-with only the CS, the US, or the context-did not result in a similar impairment in conditioning. Although the results were unexpected, they may be understood in part in terms of intersensory competition during development, and there is precedent in the literature for similar interfering effects of early learning on later learning in a variety of species.

  1. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well.

  2. Binaural technology for e.g. rendering auditory virtual environments

    DEFF Research Database (Denmark)

    Hammershøi, Dorte

    2008-01-01

    , helped mediate the understanding that if the transfer functions could be mastered, then important dimensions of the auditory percept could also be controlled. He understood early on the potential of using the HRTFs and numerical sound transmission analysis programs for rendering auditory virtual...... environments. Jens Blauert participated in many European cooperation projects exploring this field (and others), among others the SCATIS project addressing the auditory-tactile dimensions in the absence of visual information....

  3. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  4. [Auditory guidance systems for the visually impaired people].

    Science.gov (United States)

    He, Jing; Nie, Min; Luo, Lan; Tong, Shanbao; Niu, Jinhai; Zhu, Yisheng

    2010-04-01

    Visually impaired people face many inconveniences because of the loss of vision. Therefore, scientists are trying to design various guidance systems to improve the lives of the blind. Based on sensory substitution, auditory guidance has become an interesting topic in the field of biomedical engineering. In this paper, we present a state-of-the-art review of auditory guidance systems. Although many technical challenges remain, auditory guidance systems could be a useful alternative for visually impaired people.

  5. Auditory cortex basal activity modulates cochlear responses in chinchillas.

    Directory of Open Access Journals (Sweden)

    Alex León

    Full Text Available BACKGROUND: The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and to the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. METHODOLOGY/PRINCIPAL FINDINGS: Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas, before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in the amplitude of CM in both types of experiments, the most common effect being a CM decrease, found in fifteen animals. Concomitantly with the CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes were completely recovered after ninety minutes in deactivation experiments, only partial recovery was observed in the magnitudes of cochlear responses. CONCLUSIONS/SIGNIFICANCE: These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. The diversity of the

  6. Using Facebook to Reach People Who Experience Auditory Hallucinations

    OpenAIRE

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging...

  7. Myosin VIIA, important for human auditory function, is necessary for Drosophila auditory organ development.

    Directory of Open Access Journals (Sweden)

    Sokol V Todi

    Full Text Available BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition [1]-[5]. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization [6]. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal [7] and lead to deafness by disrupting antennal auditory organ (Johnston's Organ, JO) organization [8]. ck/MyoVIIA mutations result in apical detachment of auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with the non-muscle myosin II (through its regulatory light chain protein) and the myosin binding subunit of myosin II phosphatase. Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.

  8. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder.

    Science.gov (United States)

    Schochat, E; Musiek, F E; Alonso, R; Ogata, J

    2010-08-01

    The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 microV (mean), 0.39 (SD--standard deviation) for the (C)APD group and 1.18 microV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 microV (mean), 0.31 (SD) for the (C)APD group and 1.00 microV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 microV (mean), 0.82 (SD)] and C3-A2 [1.24 microV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.
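
    The group comparison and pre/post-training contrast described above are standard amplitude statistics. The following is a minimal sketch, using synthetic values loosely matching the reported means and SDs (not the study's data), of how such comparisons can be computed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical C3-A1 amplitudes (microvolts); illustrative draws only,
# NOT the study's recordings.
capd_pre  = rng.normal(0.84, 0.39, 30)   # (C)APD group before training
capd_post = rng.normal(1.59, 0.82, 30)   # same children after training
control   = rng.normal(1.18, 0.65, 22)   # control group

# Between-group difference before training (independent samples)
t_groups, p_groups = stats.ttest_ind(capd_pre, control)

# Within-group change after auditory training (paired samples)
t_train, p_train = stats.ttest_rel(capd_post, capd_pre)

print(f"pre-training (C)APD vs control: t = {t_groups:.2f}, p = {p_groups:.4f}")
print(f"(C)APD pre vs post training:    t = {t_train:.2f}, p = {p_train:.4f}")
```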

  9. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder

    Directory of Open Access Journals (Sweden)

    E. Schochat

    2010-08-01

    Full Text Available The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 µV (mean), 0.39 (SD - standard deviation) for the (C)APD group and 1.18 µV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 µV (mean), 0.31 (SD) for the (C)APD group and 1.00 µV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 µV (mean), 0.82 (SD)] and C3-A2 [1.24 µV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.

  10. Sound objects – Auditory objects – Musical objects

    DEFF Research Database (Denmark)

    Hjortkjær, Jens

    2015-01-01

    The auditory system transforms patterns of sound energy into perceptual objects but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects......’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent...

  11. Sound objects – Auditory objects – Musical objects

    DEFF Research Database (Denmark)

    Hjortkjær, Jens

    2016-01-01

    The auditory system transforms patterns of sound energy into perceptual objects but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects......’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent...

  12. Extrinsic sound stimulations and development of periphery auditory synapses

    Institute of Scientific and Technical Information of China (English)

    Kun Hou; Shiming Yang; Ke Liu

    2015-01-01

    The development of auditory synapses is a key process for the maturation of hearing function. However, it is still debated whether the development of auditory synapses is dominated by acquired sound stimulation. In this review, we summarize relevant publications from recent decades to address this issue. Most reported data suggest that extrinsic sound stimulation does affect, but does not govern, the development of peripheral auditory synapses. Overall, peripheral auditory synapses develop and mature according to intrinsic mechanisms that build up the synaptic connections between sensory neurons and/or interneurons.

  13. Neurodynamics for auditory stream segregation: tracking sounds in the mustached bat's natural environment.

    Science.gov (United States)

    Kanwal, Jagmeet S; Medvedev, Andrei V; Micheyl, Christophe

    2003-08-01

    During navigation and the search phase of foraging, mustached bats emit approximately 25 ms long echolocation pulses (at 10-40 Hz) that contain multiple harmonics of a constant frequency (CF) component followed by a short (3 ms) downward frequency modulation. In the context of auditory stream segregation, therefore, bats may either perceive a coherent pulse-echo sequence (PEPE...), or segregated pulse and echo streams (P-P-P... and E-E-E...). To identify the neural mechanisms for stream segregation in bats, we developed a simple yet realistic neural network model with seven layers and 420 nodes. Our model required recurrent and lateral inhibition to enable output nodes in the network to 'latch-on' to a single tone (corresponding to a CF component in either the pulse or echo), i.e., exhibit differential suppression by the alternating two tones presented at a high rate (> 10 Hz). To test the applicability of our model to echolocation, we obtained neurophysiological data from the primary auditory cortex of awake mustached bats. Event-related potentials reliably reproduced the latching behaviour observed at output nodes in the network. Pulse as well as nontarget (clutter) echo CFs facilitated this latching. Individual single unit responses were erratic, but when summed over several recording sites, they also exhibited reliable latching behaviour even at 40 Hz. On the basis of these findings, we propose that a neural correlate of auditory stream segregation is present within localized synaptic activity in the mustached bat's auditory cortex and this mechanism may enhance the perception of echolocation sounds in the natural environment.
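
    The 'latch-on' behaviour described above can be illustrated with a toy two-unit rate model combining self-excitation and mutual (lateral) inhibition. This is only a conceptual sketch of latching, not the authors' seven-layer, 420-node network, and all weights and time constants are hypothetical.

```python
import numpy as np

# Two rate units, each driven by one of two tones alternating at 40 Hz
# (every 25 ms). Strong lateral inhibition lets the unit driven first stay
# active and suppress the other, mimicking differential suppression of one
# stream at fast alternation rates.
dt, tau = 1.0, 10.0          # time step and rate time constant (ms)
period, T = 25, 1000         # one tone every 25 ms, 1 s of simulation
w_self, w_lat = 0.6, 1.5     # hypothetical self-excitation and lateral inhibition

r = np.zeros(2)
trace = np.zeros((T, 2))
for t in range(T):
    tone = (t // period) % 2                     # which tone is currently on
    inp = np.eye(2)[tone]                        # unit 0 hears tone A, unit 1 tone B
    drive = inp + w_self * r - w_lat * r[::-1]   # inhibition from the other unit
    r += dt / tau * (-r + np.maximum(drive, 0.0))
    trace[t] = r

print("mean rate, unit driven first (latched):", round(float(trace[T // 2:, 0].mean()), 2))
print("mean rate, other unit (suppressed):    ", round(float(trace[T // 2:, 1].mean()), 2))
# With weaker lateral inhibition (e.g. w_lat = 0.3) the two units instead
# follow the alternation, i.e. no latching occurs.
```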

  14. Gradients and modulation of K(+) channels optimize temporal accuracy in networks of auditory neurons.

    Directory of Open Access Journals (Sweden)

    Leonard K Kaczmarek

    Full Text Available Accurate timing of action potentials is required for neurons in auditory brainstem nuclei to encode the frequency and phase of incoming sound stimuli. Many such neurons express "high threshold" Kv3-family channels that are required for firing at high rates (>~200 Hz). Kv3 channels are expressed in gradients along the medial-lateral tonotopic axis of the nuclei. Numerical simulations of auditory brainstem neurons were used to calculate the input-output relations of ensembles of 1-50 neurons, stimulated at rates between 100-1500 Hz. Individual neurons with different levels of potassium currents differ in their ability to follow specific rates of stimulation but all perform poorly when the stimulus rate is greater than the maximal firing rate of the neurons. The temporal accuracy of the combined synaptic output of an ensemble is, however, enhanced by the presence of gradients in Kv3 channel levels over that measured when neurons express uniform levels of channels. Surprisingly, at high rates of stimulation, temporal accuracy is also enhanced by the occurrence of random spontaneous activity, such as is normally observed in the absence of sound stimulation. For any pattern of stimulation, however, greatest accuracy is observed when, in the presence of spontaneous activity, the levels of potassium conductance in all of the neurons are adjusted to those found in the subset of neurons that respond better than their neighbors. This optimization of response by adjusting the K(+) conductance occurs for stimulus patterns containing either single or multiple frequencies in the phase-locking range. The findings suggest that gradients of channel expression are required for normal auditory processing and that changes in levels of potassium currents across the nuclei, by mechanisms such as protein phosphorylation and rapid changes in channel synthesis, adapt the nuclei to the ongoing auditory environment.

  15. Development of kinesthetic-motor and auditory-motor representations in school-aged children.

    Science.gov (United States)

    Kagerer, Florian A; Clark, Jane E

    2015-07-01

    In two experiments using a center-out task, we investigated kinesthetic-motor and auditory-motor integrations in 5- to 12-year-old children and young adults. In experiment 1, participants moved a pen on a digitizing tablet from a starting position to one of three targets (visuo-motor condition), and then to one of four targets without visual feedback of the movement. In both conditions, we found that with increasing age, the children moved faster and straighter, and became less variable in their feedforward control. Higher control demands for movements toward the contralateral side were reflected in longer movement times and decreased spatial accuracy across all age groups. When feedforward control relies predominantly on kinesthesia, 7- to 10-year-old children were more variable, indicating difficulties in switching between feedforward and feedback control efficiently during that age. An inverse age progression was found for directional endpoint error; larger errors increasing with age likely reflect stronger functional lateralization for the dominant hand. In experiment 2, the same visuo-motor condition was followed by an auditory-motor condition in which participants had to move to acoustic targets (either white band or one-third octave noise). Since in the latter directional cues come exclusively from transcallosally mediated interaural time differences, we hypothesized that auditory-motor representations would show age effects. The results did not show a clear age effect, suggesting that corpus callosum functionality is sufficient in children to allow them to form accurate auditory-motor maps already at a young age.
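
    The 'directional endpoint error' used above is commonly computed as the signed angle between the start-to-target vector and the start-to-endpoint vector of a movement. The snippet below is a generic sketch of that computation; the coordinates are made up and not the study's data.

```python
import numpy as np

def directional_error_deg(start, target, endpoint):
    """Signed angle (degrees) between the start->target and start->endpoint vectors."""
    v_t = np.asarray(target) - np.asarray(start)
    v_e = np.asarray(endpoint) - np.asarray(start)
    err = np.degrees(np.arctan2(v_e[1], v_e[0]) - np.arctan2(v_t[1], v_t[0]))
    return (err + 180.0) % 360.0 - 180.0   # wrap into [-180, 180)

# Hypothetical center-out trial on a digitizing tablet (units: cm)
start, target, endpoint = (0.0, 0.0), (10.0, 0.0), (9.6, 1.1)
print(f"directional endpoint error: {directional_error_deg(start, target, endpoint):.1f} deg")
```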

  16. Can bispectral index or auditory evoked potential index predict implicit memory during propofol-induced sedation?

    Institute of Scientific and Technical Information of China (English)

    WANG Yun; YUE Yun; SUN Yong-hai; WU An-shi

    2006-01-01

    Background Some patients still suffer from implicit memory of intraoperative events under an adequate depth of anaesthesia. The elimination of implicit memory should be a necessary aim of clinical general anaesthesia. However, implicit memory cannot yet be tested during anaesthesia. We propose the bispectral index (BIS) and the auditory evoked potential index (AEPI) as predictors of implicit memory during anaesthesia. Methods Thirty-six patients were equally divided into 3 groups according to the Observer's Assessment of Alertness/Sedation Score: A, level 3; B, level 2; and C, level 1. Every patient was given the first auditory stimulus before sedation. Then every patient received the second auditory stimulus after the target level of sedation had been reached. BIS and AEPI were monitored before and after the second auditory stimulus presentation. Four hours later, the inclusion and exclusion tests were performed on the ward using the process dissociation procedure, and the scores of implicit memory were estimated. Results In groups A and B but not C, implicit memory estimates were statistically greater than zero (P<0.05). The implicit memory scores in group A did not differ significantly from those in group B (P>0.05). Implicit memory scores correlated with BIS and AEPI (P<0.01). The area under the ROC curve was larger for BIS than for AEPI. The 95% cutoff points of BIS and AEPI for predicting implicit memory are 47 and 28, respectively. Conclusions Implicit memory does not disappear until the depth of sedation increases to level 1 of the OAA/S score. Implicit memory scores correlate well with BIS and AEPI during sedation. BIS is a better index for predicting implicit memory than AEPI during propofol-induced sedation.
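
    The reported BIS and AEPI cutoffs come from an ROC-type analysis. Below is a minimal, generic sketch of deriving such a cutoff from synthetic scores and labels; the numbers are hypothetical and unrelated to the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)

# Synthetic example: 1 = implicit memory present, 0 = absent; "bis" is a
# hypothetical monitor value (higher = lighter sedation).
memory = np.r_[np.ones(40), np.zeros(40)].astype(int)
bis = np.r_[rng.normal(60, 10, 40), rng.normal(40, 10, 40)]

auc = roc_auc_score(memory, bis)
fpr, tpr, thresholds = roc_curve(memory, bis)

# Highest threshold that still detects at least 95% of memory-positive cases
cutoff = thresholds[tpr >= 0.95][0]
print(f"AUC = {auc:.2f}; cutoff for 95% sensitivity ~ {cutoff:.0f}")
```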

  17. Functional Connectivity of Left Heschl’s Gyrus in Vulnerability to Auditory Hallucinations in Schizophrenia

    Science.gov (United States)

    Shinn, Ann K.; Baker, Justin T.; Cohen, Bruce M.; Öngür, Dost

    2012-01-01

    Background Schizophrenia is a heterogeneous disorder that may consist of multiple etiologies and disease processes. Auditory hallucinations (AH), which are common and often disabling, represent a narrower and more basic dimension of psychosis than schizophrenia. Previous studies suggest that abnormal primary auditory cortex activity is associated with AH pathogenesis. We thus investigated functional connectivity, using a seed in primary auditory cortex, in schizophrenia patients with and without AH and healthy controls, to examine neural circuit abnormalities associated more specifically with AH than the myriad other symptoms that comprise schizophrenia. Methods Using resting-state fMRI (rsfMRI), we investigated functional connectivity of the primary auditory cortex, located on Heschl’s gyrus, in schizophrenia spectrum patients with AH. Participants were patients with schizophrenia, schizoaffective disorder, or schizophreniform disorder with lifetime AH (n=27); patients with the same diagnoses but no lifetime AH (n=14); and healthy controls (n=28). Results Patients with AH vulnerability showed increased left Heschl’s gyrus functional connectivity with left frontoparietal regions and decreased functional connectivity with right hippocampal formation and mediodorsal thalamus compared to patients without lifetime AH. Furthermore, among AH patients, left Heschl’s gyrus functional connectivity covaried positively with AH severity in left inferior frontal gyrus (Broca’s area), left lateral STG, right pre- and postcentral gyri, cingulate cortex, and orbitofrontal cortex. There were no differences between patients with and without lifetime AH in right Heschl’s gyrus seeded functional connectivity. Conclusions Abnormal interactions between left Heschl’s gyrus and regions involved in speech/language, memory, and the monitoring of self-generated events may contribute to AH vulnerability. PMID:23287311

  18. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    Directory of Open Access Journals (Sweden)

    Marc R. Kamke

    2014-06-01

    Full Text Available The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  19. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    Science.gov (United States)

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  20. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    Science.gov (United States)

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  1. Evaluation of peripheral compression and auditory nerve fiber intensity coding using auditory steady-state responses

    DEFF Research Database (Denmark)

    Encina Llamas, Gerard; M. Harte, James; Epp, Bastian

    2015-01-01

    The compressive nonlinearity of the auditory system is assumed to be an epiphenomenon of a healthy cochlea and, particularly, of outer-hair cell function. Another ability of the healthy auditory system is to enable communication in acoustical environments with high-level background noises....... Evaluation of these properties provides information about the health state of the system. It has been shown that a loss of outer hair cells leads to a reduction in peripheral compression. It has also recently been shown in animal studies that noise over-exposure, producing temporary threshold shifts, can...

  2. Lateral epicondylitis of the elbow

    OpenAIRE

    Cohen, Marcio; Motta Filho, Geraldo da Rocha

    2012-01-01

    Lateral epicondylitis, also known as tennis elbow, is a common condition that affects 1 to 3% of the population. The term epicondylitis suggests inflammation, although histological analysis of the tissue does not demonstrate an inflammatory process. The structure most frequently affected is the origin of the extensor carpi radialis brevis tendon, and the mechanism of injury is associated with its overload. Non-surgical treatment is the treatment of choice and includes rest, physiotherapy, and infiltration with cortisone or plasm...

  3. Vitiligo Lateral Lower Lip

    Directory of Open Access Journals (Sweden)

    Sahoo Antaryami

    2002-01-01

    Full Text Available Vitiligo characteristically affecting the lateral lower lip (LLL) is a common presentation in South Orissa. This type of lesion has rarely been described in the literature. One hundred eighteen such cases were studied during the period from October 1999 to September 2000. LLL vitiligo constituted 16.39% of all vitiligo patients. Both sexes were affected equally. The peak age of onset was in the 2nd decade, and the mean duration of illness was 21.46 months. Fifty-six patients had a unilateral lesion (38 on the left and 18 on the right). Among the 62 patients having bilateral lesions, the onset was more frequent on the left (38) than either the right (8) or both sides together (16). All the patients were right handed. Local factors such as infection, trauma, cheilitis, and FDE were associated in 38.98% of cases, but systemic or autoimmune diseases were not associated. A positive family history was found in 22% of cases.

  4. Lateral conduction infrared photodetector

    Science.gov (United States)

    Kim, Jin K.; Carroll, Malcolm S.

    2011-09-20

    A photodetector for detecting infrared light in a wavelength range of 3-25 μm is disclosed. The photodetector has a mesa structure formed from semiconductor layers which include a type-II superlattice formed of alternating layers of InAs and InxGa1-xSb with 0 ≤ x ≤ 0.5. Impurity doped regions are formed on sidewalls of the mesa structure to provide for a lateral conduction of photo-generated carriers which can provide an increased carrier mobility and a reduced surface recombination. An optional bias electrode can be used in the photodetector to control and vary a cut-off wavelength or a depletion width therein. The photodetector can be formed as a single-color or multi-color device, and can also be used to form a focal plane array which is compatible with conventional read-out integrated circuits.
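
    For a photon detector, the long-wavelength cutoff is tied to the absorber bandgap by the textbook relation E_g[eV] ≈ 1.24 / λ_c[μm]. The snippet below simply evaluates this relation for the 3-25 μm range quoted above; it says nothing about the specific superlattice design disclosed here.

```python
# Cutoff wavelength <-> bandgap for a photon detector: E_g[eV] ~= 1.2398 / lambda_c[um]
HC_EV_UM = 1.2398  # h*c expressed in eV*um

def bandgap_ev(cutoff_um: float) -> float:
    """Bandgap (eV) corresponding to a detector cutoff at the given wavelength (um)."""
    return HC_EV_UM / cutoff_um

for lam in (3.0, 25.0):
    print(f"cutoff {lam:>4.1f} um  ->  bandgap ~ {bandgap_ev(lam):.3f} eV")
# cutoff 3.0 um -> ~0.413 eV; cutoff 25.0 um -> ~0.050 eV
```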

  5. Organization of the auditory brainstem in a lizard, Gekko gecko. I. Auditory nerve, cochlear nuclei, and superior olivary nuclei

    DEFF Research Database (Denmark)

    Tang, Y. Z.; Christensen-Dalsgaard, J.; Carr, C. E.

    2012-01-01

    We used tract tracing to reveal the connections of the auditory brainstem in the Tokay gecko (Gekko gecko). The auditory nerve has two divisions, a rostroventrally directed projection of mid- to high best-frequency fibers to the nucleus angularis (NA) and a more dorsal and caudal projection of lo...... of auditory connections in lizards and archosaurs but also different processing of low- and high-frequency information in the brainstem. J. Comp. Neurol. 520:1784-1799, 2012. (C) 2011 Wiley Periodicals, Inc

  6. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.

  7. Diagnosis and treatment of carcinoma in external auditory canal

    Institute of Scientific and Technical Information of China (English)

    Shengjuan Zhen; Tao Fu; Jinjie Qi

    2014-01-01

    Objectives: To evaluate outcomes in treating carcinoma of the external auditory canal (EAC) and to analyze factors that affect the prognosis of this disease. Methods: A retrospective review of 16 patients treated for carcinoma of the EAC at our department between April 2000 and April 2014 was conducted. All patients underwent surgical treatment, and the diagnosis was confirmed by pathological examination. Results: There was adenoid cystic carcinoma (ACC) in 8 patients, squamous cell carcinoma (SCC) in 5 patients, adenocarcinoma (AC) in 2 patients, and verrucous carcinoma (VC) in 1 patient. The tumors were classified as Stage I in 4 cases, Stage II in 2 cases, Stage III in 3 cases, and Stage IV in 7 cases. Five patients underwent extensive tumor resection (ETR), 2 patients underwent lateral temporal bone resection (LTBR), 5 patients underwent modified LTBR, 2 patients underwent subtotal temporal bone resection (STBR), and 2 patients underwent only open biopsy. In addition, adjunctive procedures, including neck dissection, parotidectomy and pinna resection, were performed when indicated. Ten patients received postoperative radiotherapy. By the end of follow-up, 2 patients had died of their disease, 2 were lost to follow-up, 2 survived with the disease, and the rest survived disease-free. The median follow-up period was 24 months. Conclusion: Complete tumor resection appears to be an effective treatment for carcinoma of the EAC. Patients with SCC seem to have a worse prognosis than those with ACC. Radiation therapy seems less effective for this disease than surgical treatment.

  8. Right hemispheric contributions to fine auditory temporal discriminations: high-density electrical mapping of the duration mismatch negativity (MMN

    Directory of Open Access Journals (Sweden)

    Pierfilippo De Sanctis

    2009-04-01

    Full Text Available That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked-potential (AEP) provides a sensitive metric of duration processing in human auditory cortex, and lateralization of MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex for temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e. rare 115 ms tones presented in a stream of 100 ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent if not greater right hemisphere contributions to temporal processing of small duration changes.
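
    The MMN itself is obtained as a deviant-minus-standard difference wave, with amplitude measured in a post-stimulus latency window. The following is a generic sketch of that computation on synthetic single-trial epochs; the sampling rate, window and amplitudes are hypothetical, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 500                                    # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)            # epoch from -100 to +500 ms

def make_epochs(n_trials, mmn_amp):
    """Synthetic epochs: noise plus a negativity around 200 ms for deviants."""
    bump = mmn_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return rng.normal(0, 5, (n_trials, t.size)) - bump

standards = make_epochs(800, mmn_amp=0.0)   # frequent 100-ms tones
deviants = make_epochs(100, mmn_amp=2.0)    # rare 115-ms tones (15% longer)

diff_wave = deviants.mean(axis=0) - standards.mean(axis=0)
win = (t >= 0.1) & (t <= 0.3)               # typical duration-MMN latency window
peak_idx = diff_wave[win].argmin()
print(f"MMN peak: {diff_wave[win].min():.2f} (a.u.) at {t[win][peak_idx] * 1000:.0f} ms")
```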

  9. Neural responses in songbird forebrain reflect learning rates, acquired salience, and stimulus novelty after auditory discrimination training.

    Science.gov (United States)

    Bell, Brittany A; Phan, Mimi L; Vicario, David S

    2015-03-01

    How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions.

  10. Rapid cortical dynamics associated with auditory spatial attention gradients.

    Science.gov (United States)

    Mock, Jeffrey R; Seay, Michael J; Charney, Danielle R; Holmes, John L; Golob, Edward J

    2015-01-01

    Behavioral and EEG studies suggest spatial attention is allocated as a gradient in which processing benefits decrease away from an attended location. Yet the spatiotemporal dynamics of cortical processes that contribute to attentional gradients are unclear. We measured EEG while participants (n = 35) performed an auditory spatial attention task that required a button press to sounds at one target location on either the left or right. Distractor sounds were randomly presented at four non-target locations evenly spaced up to 180° from the target location. Attentional gradients were quantified by regressing ERP amplitudes elicited by distractors against their spatial location relative to the target. Independent component analysis was applied to each subject's scalp channel data, allowing isolation of distinct cortical sources. Results from scalp ERPs showed a tri-phasic response with gradient slope peaks at ~300 ms (frontal, positive), ~430 ms (posterior, negative), and a plateau starting at ~550 ms (frontal, positive). Corresponding to the first slope peak, a positive gradient was found within a central component when attending to both target locations and for two lateral frontal components when contralateral to the target location. Similarly, a central posterior component had a negative gradient that corresponded to the second slope peak regardless of target location. A right posterior component had both an ipsilateral followed by a contralateral gradient. Lateral posterior clusters also had decreases in α and β oscillatory power with a negative slope and contralateral tuning. Only the left posterior component (120-200 ms) corresponded to absolute sound location. The findings indicate a rapid, temporally-organized sequence of gradients thought to reflect interplay between frontal and parietal regions. We conclude these gradients support a target-based saliency map exhibiting aspects of both right-hemisphere dominance and opponent process models.
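
    The 'gradient slope' measure described above amounts to regressing distractor-evoked ERP amplitude on the distractor's angular distance from the attended location. Below is a generic, hypothetical single-subject sketch of that regression; the values are made up, not the study's data.

```python
import numpy as np

# Hypothetical mean ERP amplitudes (uV) at one latency for distractors
# 45, 90, 135 and 180 degrees away from the attended location.
distance_deg = np.array([45.0, 90.0, 135.0, 180.0])
amplitude_uv = np.array([3.1, 2.4, 1.8, 1.1])

slope, intercept = np.polyfit(distance_deg, amplitude_uv, 1)
print(f"attentional gradient slope: {slope * 100:.2f} uV per 100 deg "
      "(negative = amplitude falls off with distance from the target)")
```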

  11. Regulation of the fear network by mediators of stress: Norepinephrine alters the balance between Cortical and Subcortical afferent excitation of the Lateral Amygdala

    Directory of Open Access Journals (Sweden)

    Luke R Johnson

    2011-05-01

    Full Text Available Pavlovian auditory fear conditioning crucially involves the integration of information about an acoustic conditioned stimulus (CS) and an aversive unconditioned stimulus (US) in the lateral nucleus of the amygdala (LA). The auditory CS reaches the LA subcortically via a direct connection from the auditory thalamus and also from the auditory association cortex itself. How neural modulators, especially those activated during stress, such as norepinephrine (NE), regulate synaptic transmission and plasticity in this network is poorly understood. Here we show that NE inhibits synaptic transmission in both the subcortical and cortical input pathway but that sensory processing is biased towards the subcortical pathway. In addition, binding of NE to β-adrenergic receptors further dissociates sensory processing in the LA. These findings suggest a network mechanism that shifts sensory balance towards the faster but more primitive subcortical input.

  12. Graded and discontinuous EphA-ephrinB expression patterns in the developing auditory brainstem.

    Science.gov (United States)

    Wallace, Matthew M; Harris, J Aaron; Brubaker, Donald Q; Klotz, Caitlyn A; Gabriele, Mark L

    2016-05-01

    Eph-ephrin interactions guide topographic mapping and pattern formation in a variety of systems. In contrast to other sensory pathways, their precise role in the assembly of central auditory circuits remains poorly understood. The auditory midbrain, or inferior colliculus (IC) is an intriguing structure for exploring guidance of patterned projections as adjacent subdivisions exhibit distinct organizational features. The central nucleus of the IC (CNIC) and deep aspects of its neighboring lateral cortex (LCIC, Layer 3) are tonotopically-organized and receive layered inputs from primarily downstream auditory sources. While less is known about more superficial aspects of the LCIC, its inputs are multimodal, lack a clear tonotopic order, and appear discontinuous, terminating in modular, patch/matrix-like distributions. Here we utilize X-Gal staining approaches in lacZ mutant mice (ephrin-B2, -B3, and EphA4) to reveal EphA-ephrinB expression patterns in the nascent IC during the period of projection shaping that precedes hearing onset. We also report early postnatal protein expression in the cochlear nuclei, the superior olivary complex, the nuclei of the lateral lemniscus, and relevant midline structures. Continuous ephrin-B2 and EphA4 expression gradients exist along frequency axes of the CNIC and LCIC Layer 3. In contrast, more superficial LCIC localization is not graded, but confined to a series of discrete ephrin-B2 and EphA4-positive Layer 2 modules. While heavily expressed in the midline, much of the auditory brainstem is devoid of ephrin-B3, including the CNIC, LCIC Layer 2 modular fields, the dorsal nucleus of the lateral lemniscus (DNLL), as well as much of the superior olivary complex and cochlear nuclei. Ephrin-B3 LCIC expression appears complementary to that of ephrin-B2 and EphA4, with protein most concentrated in presumptive extramodular zones. Described tonotopic gradients and seemingly complementary modular/extramodular patterns suggest Eph

  13. Lateral Thinking and Technology Education.

    Science.gov (United States)

    Waks, Shlomo

    1997-01-01

    Presents an analysis of technology education and its relevance to lateral thinking. Discusses prospects for utilizing technology education as a platform and a contextual domain for nurturing lateral thinking. Argues that technology education is an appropriate environment for developing complementary incorporation of vertical and lateral thinking.…

  14. Auditory excitation patterns : the significance of the pulsation threshold method for the measurement of auditory nonlinearity

    NARCIS (Netherlands)

    H. Verschuure (Hans)

    1978-01-01

    The auditory system is the total of organs that translates an acoustical signal into the perception of a sound. An acoustic signal is a vibration. It is described by physical parameters. The perception of sound is the awareness of a signal being present and the attribution of certain qual

  15. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in EEG data using MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making.

  16. [Lateral lumbar disk hernia].

    Science.gov (United States)

    Monod, A; Desmoineaux, P; Deburge, A

    1990-01-01

    Lateral lumbar disc herniations (L.D.H.) develop in the foramen and compress the nerve root against the overlying vertebral pedicle. In our study of L.D.H. from the clinical, radiographic, and therapeutic aspects, we reviewed 23 cases selected from the 590 patients treated for disc herniation from 1984 to 1987. The frequency of L.D.H. in this series was 3.8 per cent. The clinical picture includes several signs suggestive of L.D.H. (frequent cruralgia, a Lasegue's test that is seldom strongly positive, a paucity of spinal signs, and non-impulsive pain). Saccoradiculography and discography rarely demonstrated the L.D.H. CT (T.D.M.) was the investigation of choice, provided it was correctly used. When the image was doubtful, confirmation by disco-CT should be obtained. This latter method of investigation also made it possible to look for sequestration. Fourteen patients were treated by chemonucleolysis, with 9 successful outcomes. The 5 failures were cases in which chemonucleolysis should not have been indicated, mainly because of associated osseous stenosis. Nine patients underwent immediate surgery, with good results in each case.

  17. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Full Text Available Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration-selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appear to contribute to selective responses.
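
    The 'interval-counting' selectivity described above, in which a cell responds only after a threshold number of pulses arriving at optimal intervals and the count resets when an interval falls outside that range, can be captured by a very small state machine. This is a conceptual sketch, not a biophysical model, and the interval range and count threshold are hypothetical.

```python
def interval_counting_neuron(pulse_times_ms, optimal=(10.0, 30.0), count_threshold=4):
    """Return times at which the model 'fires': after count_threshold consecutive
    inter-pulse intervals inside the optimal range; a bad interval resets the count."""
    lo, hi = optimal
    spikes, count = [], 0
    for prev, cur in zip(pulse_times_ms, pulse_times_ms[1:]):
        interval = cur - prev
        count = count + 1 if lo <= interval <= hi else 0   # reset on non-optimal interval
        if count >= count_threshold:
            spikes.append(cur)
    return spikes

fast = [i * 20.0 for i in range(12)]   # 50 pulses/s -> 20 ms intervals (optimal)
slow = [i * 80.0 for i in range(12)]   # 12.5 pulses/s -> intervals too long
print("fast train, model spike times (ms):", interval_counting_neuron(fast))
print("slow train, model spike times (ms):", interval_counting_neuron(slow))
```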

  18. Functional hemispheric specialization in processing phonemic and prosodic auditory changes in neonates

    Directory of Open Access Journals (Sweden)

    Takeshi eArimitsu

    2011-09-01

    Full Text Available This study focuses on the early cerebral base of speech perception by examining functional lateralization in neonates for processing segmental and suprasegmental features of speech. For this purpose, auditory evoked responses of full-term neonates to phonemic and prosodic contrasts were measured in their temporal area and part of the frontal and parietal areas using near-infrared spectroscopy (NIRS). Stimuli were the phonemic contrast /itta/ vs /itte/ and the prosodic contrast of declarative and interrogative forms /itta/ and /itta?/. The results showed clear hemodynamic responses to both phonemic and prosodic changes in the temporal areas and part of the parietal and frontal regions. In particular, significantly higher hemoglobin (Hb) changes were observed for the prosodic change in the right temporal area than in the left one, whereas Hb responses to the vowel change were similarly elicited in bilateral temporal areas. However, Hb responses to the vowel contrast were asymmetrical in the parietal area (around the supramarginal gyrus), with stronger activation on the left. These results suggest a specialized function of the right hemisphere in prosody processing, which is already present in neonates. The parietal activities during phonemic processing were discussed in relation to verbal-auditory short-term memory. On the basis of this study and previous studies on older infants, the developmental process of functional lateralization from birth to 2 years of age for vowel and prosody was summarized.

  19. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Full Text Available Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues due to changes in head orientation in order to process online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.

  20. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues due to changes in head orientation in order to process online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  1. Multiprofessional committee on auditory health: COMUSA.

    Science.gov (United States)

    Lewis, Doris Ruthy; Marone, Silvio Antonio Monteiro; Mendes, Beatriz C A; Cruz, Oswaldo Laercio Mendonça; Nóbrega, Manoel de

    2010-01-01

    Created in 2007, COMUSA is a multiprofessional committee comprising speech therapy, otology, otorhinolaryngology and pediatrics, with the aim of discussing and endorsing auditory health actions for neonates, infants, preschool and school-age children, adolescents, adults and elderly persons. COMUSA includes representatives of the Brazilian Audiology Academy (Academia Brasileira de Audiologia or ABA), the Brazilian Otorhinolaryngology and Cervicofacial Surgery Association (Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico Facial or ABORL), the Brazilian Phonoaudiology Society (Sociedade Brasileira de Fonoaudiologia or SBFa), the Brazilian Otology Society (Sociedade Brasileira de Otologia or SBO), and the Brazilian Pediatrics Society (Sociedade Brasileira de Pediatria or SBP).

  2. Musical and auditory hallucinations: A spectrum.

    Science.gov (United States)

    E Fischer, Corinne; Marchie, Anthony; Norris, Mireille

    2004-02-01

    Musical hallucinosis is a rare and poorly understood clinical phenomenon. While an association appears to exist between this phenomenon and organic brain pathology, aging and sensory impairment, the precise nature of the association remains unclear. The authors present two cases of musical hallucinosis, both in elderly patients with mild-to-moderate cognitive impairment and mild-to-moderate hearing loss, who subsequently developed auditory hallucinations and, in one case, command hallucinations. The literature on musical hallucinosis will be reviewed and a theory relating to the development of musical hallucinations will be proposed.

  3. Cancer of the external auditory canal

    DEFF Research Database (Denmark)

    Nyrop, Mette; Grøntved, Aksel

    2002-01-01

    OBJECTIVE: To evaluate the outcome of surgery for cancer of the external auditory canal and relate this to the Pittsburgh staging system used both on squamous cell carcinoma and non-squamous cell carcinoma. DESIGN: Retrospective case series of all patients who had surgery between 1979 and 2000....... PATIENTS: Ten women and 10 men with previously untreated primary cancer. Median age at diagnosis was 67 years (range, 31-87 years). Survival data included 18 patients with at least 2 years of follow-up or recurrence. INTERVENTION: Local canal resection or partial temporal bone resection. MAIN OUTCOME...

  4. CAVERNOUS HEMANGIOMA OF THE INTERNAL AUDITORY CANAL

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Hekmatara

    1993-06-01

    Full Text Available Cavernous hemangioma is a rare benign tumor of the internal auditory canal (IAC), of which fourteen cases have been reported so far. Tinnitus and progressive sensorineural hearing loss (SNHL) are the chief complaints of the patients. Audiological and radiological studies, including plain radiography, CT scan, and magnetic resonance imaging (MRI), are helpful in diagnosis. The only treatment is surgery, via an elective transmastoid translabyrinthine approach; if the tumor is very large, the retrosigmoid approach is the method of choice.

  5. PLASTICITY IN THE ADULT CENTRAL AUDITORY SYSTEM.

    Science.gov (United States)

    Irvine, Dexter R F; Fallon, James B; Kamke, Marc R

    2006-04-01

    The central auditory system retains into adulthood a remarkable capacity for plastic changes in the response characteristics of single neurons and the functional organization of groups of neurons. The most dramatic examples of this plasticity are provided by changes in frequency selectivity and organization as a consequence of either partial hearing loss or procedures that alter the significance of particular frequencies for the organism. Changes in temporal resolution are also seen as a consequence of altered experience. These forms of plasticity are likely to contribute to the improvements exhibited by cochlear implant users in the post-implantation period.

  6. PLASTICITY IN THE ADULT CENTRAL AUDITORY SYSTEM

    Science.gov (United States)

    Irvine, Dexter R. F.; Fallon, James B.; Kamke, Marc R.

    2007-01-01

    The central auditory system retains into adulthood a remarkable capacity for plastic changes in the response characteristics of single neurons and the functional organization of groups of neurons. The most dramatic examples of this plasticity are provided by changes in frequency selectivity and organization as a consequence of either partial hearing loss or procedures that alter the significance of particular frequencies for the organism. Changes in temporal resolution are also seen as a consequence of altered experience. These forms of plasticity are likely to contribute to the improvements exhibited by cochlear implant users in the post-implantation period. PMID:17572797

  7. Comparison of auditory hallucinations across different disorders and syndromes

    NARCIS (Netherlands)

    Sommer, Iris E. C.; Koops, Sanne; Blom, Jan Dirk

    2012-01-01

    Auditory hallucinations can be experienced in the context of many different disorders and syndromes. The differential diagnosis basically rests on the presence or absence of accompanying symptoms. In terms of clinical relevance, the most important distinction to be made is between auditory hallucina

  8. Development of a central auditory test battery for adults.

    NARCIS (Netherlands)

    Neijenhuis, C.A.M.; Stollman, M.H.P.; Snik, A.F.M.; Broek, P. van den

    2001-01-01

    There is little standardized test material in Dutch to document central auditory processing disorders (CAPDs). Therefore, a new central auditory test battery was composed and standardized for use with adult populations and older children. The test battery comprised seven tests (words in noise, filte

  9. Deactivation of the Parahippocampal Gyrus Preceding Auditory Hallucinations in Schizophrenia

    NARCIS (Netherlands)

    Diederen, Kelly M. J.; Neggers, Sebastiaan F. W.; Daalman, Kirstin; Blom, Jan Dirk; Goekoop, Rutger; Kahn, Rene S.; Sommer, Iris E. C.

    2010-01-01

    Objective: Activation in a network of language-related regions has been reported during auditory verbal hallucinations. It remains unclear, however, how this activation is triggered. Identifying brain regions that show significant signal changes preceding auditory hallucinations might reveal the ori

  10. Impact of Educational Level on Performance on Auditory Processing Tests.

    Science.gov (United States)

    Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane

    2016-01-01

    Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demands of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.

  11. Auditory Processing Theories of Language Disorders: Past, Present, and Future

    Science.gov (United States)

    Miller, Carol A.

    2011-01-01

    Purpose: The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. Method: A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory…

  12. Source reliability in auditory health persuasion : Its antecedents and consequences

    NARCIS (Netherlands)

    Elbert, Sarah P.; Dijkstra, Arie

    2015-01-01

    Persuasive health messages can be presented through an auditory channel, thereby enhancing the salience of the source, making it fundamentally different from written or pictorial information. We focused on the determinants of perceived source reliability in auditory health persuasion by investigatin

  13. Preparation and Culture of Chicken Auditory Brainstem Slices

    OpenAIRE

    Sanchez, Jason T.; Seidl, Armin H.; Rubel, Edwin W; Barria, Andres

    2011-01-01

    The chicken auditory brainstem is a well-established model system that has been widely used to study the anatomy and physiology of auditory processing at discrete periods of development 1-4 as well as mechanisms for temporal coding in the central nervous system 5-7.

  14. Strategy choice mediates the link between auditory processing and spelling.

    Science.gov (United States)

    Kwong, Tru E; Brachman, Kyle J

    2014-01-01

    Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities.

  15. Entrainment to an auditory signal: Is attention involved?

    NARCIS (Netherlands)

    Kunert, R.; Jongman, S.R.

    2017-01-01

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhy

  16. Cortical Auditory Evoked Potentials in Unsuccessful Cochlear Implant Users

    Science.gov (United States)

    Munivrana, Boska; Mildner, Vesna

    2013-01-01

    In some cochlear implant users, success is not achieved in spite of optimal clinical factors (including age at implantation, duration of rehabilitation and post-implant hearing level), which may be attributed to disorders at higher levels of the auditory pathway. We used cortical auditory evoked potentials to investigate the ability to perceive…

  17. Auditory signal design for automatic number plate recognition system

    NARCIS (Netherlands)

    Heydra, C.G.; Jansen, R.J.; Van Egmond, R.

    2014-01-01

    This paper focuses on the design of an auditory signal for the Automatic Number Plate Recognition system of Dutch national police. The auditory signal is designed to alert police officers of suspicious cars in their proximity, communicating priority level and location of the suspicious car and takin

  18. Modeling auditory evoked brainstem responses to transient stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch; Dau, Torsten; Harte, James;

    2012-01-01

    A quantitative model is presented that describes the formation of auditory brainstem responses (ABR) to tone pulses, clicks and rising chirps as a function of stimulation level. The model computes the convolution of the instantaneous discharge rates using the “humanized” nonlinear auditory-nerve ...
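
    In models of this family, the scalp-recorded ABR is commonly approximated by convolving the summed instantaneous discharge rate at the output of an auditory-nerve model with an empirically derived unitary response. The sketch below illustrates only that convolution step; the Gaussian rate burst and the damped-sinusoid unitary response are stand-ins chosen for illustration, not the "humanized" auditory-nerve model or the unitary response used in the paper.

```python
import numpy as np

def predict_abr(discharge_rate, unitary_response, fs):
    """Convolve a summed instantaneous discharge rate (spikes/s) with a unitary
    response (scalp potential per discharge) to predict the evoked potential."""
    return np.convolve(discharge_rate, unitary_response)[: len(discharge_rate)] / fs

fs = 30000                      # sampling rate in Hz
t = np.arange(0, 0.02, 1 / fs)  # 20 ms time axis

# Stand-in for the auditory-nerve model output to a click: a brief rate burst.
rate = 50.0 + 800.0 * np.exp(-((t - 0.003) / 0.0005) ** 2)

# Stand-in unitary response: a damped oscillation (the real one is data-derived).
unitary = 1e-3 * np.exp(-t / 0.001) * np.sin(2 * np.pi * 1000 * t)

abr = predict_abr(rate, unitary, fs)
print(f"peak-to-peak predicted response: {abr.max() - abr.min():.4f} (arbitrary units)")
```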

  19. Tinnitus intensity dependent gamma oscillations of the contralateral auditory cortex.

    Directory of Open Access Journals (Sweden)

    Elsa van der Loo

    Full Text Available BACKGROUND: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. METHODS AND FINDINGS: In unilateral tinnitus patients (N = 15; 10 right, 5 left), source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p < 0.05). CONCLUSION: Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models, these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception.

  20. Functional outcome of auditory implants in hearing loss.

    Science.gov (United States)

    Di Girolamo, S; Saccoccio, A; Giacomini, P G; Ottaviani, F

    2007-01-01

    The auditory implant provides a new mechanism for hearing when a hearing aid is not enough. It is the only medical technology able to functionally restore a human sense, i.e. hearing. The auditory implant is very different from a hearing aid. Hearing aids amplify sound. Auditory implants compensate for damaged or non-working parts of the inner ear because they can directly stimulate the acoustic nerve. There are two principal types of auditory implant: the cochlear implant and the auditory brainstem implant. They have common basic characteristics, but different applications. A cochlear implant attempts to replace a function lost by the cochlea, usually due to an absence of functioning hair cells; the auditory brainstem implant (ABI) is a modification of the cochlear implant, in which the electrode array is placed directly on the brainstem when the acoustic nerve is no longer able to carry the auditory signal. Different types of deaf or severely hearing-impaired patients choose auditory implants. Both children and adults can be candidates for implants. The best age for implantation is still being debated, but most children who receive implants are between 2 and 6 years old. Earlier implantation seems to yield better outcomes thanks to neural plasticity. The decision to receive an implant should involve a discussion with many medical specialists and an experienced surgeon.

  1. Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith

    Science.gov (United States)

    Bailey, Frank S.; Yocum, Russell G.

    2015-01-01

    The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…

  2. Functional sex differences in human primary auditory cortex

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W. J.; Willemsen, Antoon T. M.

    2007-01-01

    Background We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a bas

  3. Auditory Dysfunction and Its Communicative Impact in the Classroom.

    Science.gov (United States)

    Friedrich, Brad W.

    1982-01-01

    The origins and nature of auditory dysfunction in school age children and the role of the audiologist in the evaluation of the learning disabled child are reviewed. Specific structures and mechanisms responsible for the reception and perception of auditory signals are specified. (Author/SEW)

  4. Auditory perceptual simulation: Simulating speech rates or accents?

    Science.gov (United States)

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

    When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) further demonstrates that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects.

  5. Use of auditory learning to manage listening problems in children.

    Science.gov (United States)

    Moore, David R; Halliday, Lorna F; Amitay, Sygal

    2009-02-12

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers have debated what aspect of training contributed to the improvement and even whether the claimed improvements reflect primarily a retest effect on the skill measures. Key to understanding this research have been more circumscribed studies of the transfer of learning and the use of multiple control groups to examine auditory and non-auditory contributions to the learning. Significant auditory learning can occur during relatively brief periods of training. As children mature, their ability to train improves, but the relation between the duration of training, amount of learning and benefit remains unclear. Individual differences in initial performance and amount of subsequent learning advocate tailoring training to individual learners. The mechanisms of learning remain obscure, especially in children, but it appears that the development of cognitive skills is of at least equal importance to the refinement of sensory processing. Promotion of retention and transfer of learning are major goals for further research.

  6. Auditory Backward Masking Deficits in Children with Reading Disabilities

    Science.gov (United States)

    Montgomery, Christine R.; Morris, Robin D.; Sevcik, Rose A.; Clarkson, Marsha G.

    2005-01-01

    Studies evaluating temporal auditory processing among individuals with reading and other language deficits have yielded inconsistent findings due to methodological problems (Studdert-Kennedy & Mody, 1995) and sample differences. In the current study, seven auditory masking thresholds were measured in fifty-two 7- to 10-year-old children (26…

  7. A Pilot Study of Auditory Integration Training in Autism.

    Science.gov (United States)

    Rimland, Bernard; Edelson, Stephen M.

    1995-01-01

    The effectiveness of Auditory Integration Training (AIT) in 8 autistic individuals (ages 4-21) was evaluated using repeated multiple criteria assessment over a 3-month period. Compared to matched controls, subjects' scores improved on the Aberrant Behavior Checklist and Fisher's Auditory Problems Checklist. AIT did not decrease sound sensitivity.…

  8. Quantification of the auditory startle reflex in children

    NARCIS (Netherlands)

    Bakker, Mirte J.; Boer, Frits; van der Meer, Johan N.; Koelman, Johannes H. T. M.; Boeree, Thijs; Bour, Lo; Tijssen, Marina A. J.

    2009-01-01

    Objective: To find an adequate tool to assess the auditory startle reflex (ASR) in children. Methods: We investigated the effect of stimulus repetition, gender and age on several quantifications of the ASR. ASR's were elicited by eight consecutive auditory stimuli in 27 healthy children. Electromyog

  9. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression-thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
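
    The auditory stimuli described above are produced by convolving a dry (anechoic) recording with the binaural impulse response measured at each seat, one channel per ear. A minimal sketch of that auralization step is shown below; the file names, the SoundFile/NumPy tooling and the normalisation are assumptions made for illustration, not details taken from the study.

```python
import numpy as np
import soundfile as sf  # assumed audio I/O library; any WAV reader would do

dry, fs = sf.read("anechoic_orchestra.wav")      # hypothetical mono anechoic excerpt
brir, fs_ir = sf.read("hall_seat_12_brir.wav")   # hypothetical 2-channel binaural IR
assert fs == fs_ir and dry.ndim == 1 and brir.shape[1] == 2

# Convolve the dry signal with the left- and right-ear impulse responses.
left = np.convolve(dry, brir[:, 0])
right = np.convolve(dry, brir[:, 1])
binaural = np.stack([left, right], axis=1)

# Normalise to avoid clipping before writing the headphone stimulus.
binaural /= np.max(np.abs(binaural))
sf.write("auralised_seat_12.wav", binaural, fs)
```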

  10. Linking topography to tonotopy in the mouse auditory thalamocortical circuit

    DEFF Research Database (Denmark)

    Hackett, Troy A; Rinaldi Barkat, Tania; O'Brien, Barbara M J;

    2011-01-01

    The mouse sensory neocortex is reported to lack several hallmark features of topographic organization such as ocular dominance and orientation columns in primary visual cortex or fine-scale tonotopy in primary auditory cortex (AI). Here, we re-examined the question of auditory functional topography...

  11. Perceptual Load Influences Auditory Space Perception in the Ventriloquist Aftereffect

    Science.gov (United States)

    Eramudugolla, Ranmalee; Kamke, Marc. R.; Soto-Faraco, Salvador; Mattingley, Jason B.

    2011-01-01

    A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the "ventriloquist aftereffect", reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their…

  12. Selective attention to phonology dynamically modulates initial encoding of auditory words within the left hemisphere.

    Science.gov (United States)

    Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason D; McCandliss, Bruce D

    2014-08-15

    Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings suggest a key role for selective attention in on-line phonological computations. Furthermore, these findings motivate future research on the role that neural mechanisms of attention may

  13. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration...... detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA......, 1982, 416-425]. When a “classical” gammatone filterbank was applied within this spectrum-based model, the model largely underestimated human performance at high signal frequencies. However, this limitation could be resolved by employing an auditory filterbank with narrower filters. This novel...
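
    The spectral modulation referred to above is a comb-filter ripple: adding a reflection with gain a and delay τ scales the power spectrum by |1 + a·e^(−i2πfτ)|² = 1 + a² + 2a·cos(2πfτ), i.e. one ripple period per 1/τ Hz. The toy sketch below is not the peripheral-weighting model used in the study; the gain, delays and ERB smoothing are illustrative assumptions. It only shows why across-frequency filter bandwidth matters: once the ripple period becomes small compared with the auditory filter bandwidth, averaging over one ERB largely washes the ripple out.

```python
import numpy as np

def comb_power(f, a, tau):
    """Power-spectrum scaling produced by adding one reflection (gain a, delay tau)."""
    return 1.0 + a**2 + 2.0 * a * np.cos(2.0 * np.pi * f * tau)

def erb_hz(fc):
    """Equivalent rectangular bandwidth of the auditory filter (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def ripple_after_erb_smoothing_db(a, tau, fc_lo=500.0, fc_hi=4000.0):
    """Max/min ratio (dB) of the comb spectrum after averaging over one ERB around
    each centre frequency -- a crude stand-in for across-frequency integration."""
    centres = np.linspace(fc_lo, fc_hi, 400)
    excitation = []
    for fc in centres:
        band = np.linspace(fc - erb_hz(fc) / 2.0, fc + erb_hz(fc) / 2.0, 200)
        excitation.append(comb_power(band, a, tau).mean())
    excitation = np.asarray(excitation)
    return 10.0 * np.log10(excitation.max() / excitation.min())

for delay_ms in (1.0, 5.0, 20.0):
    ripple = ripple_after_erb_smoothing_db(a=0.8, tau=delay_ms / 1000.0)
    print(f"reflection delay {delay_ms:4.1f} ms: residual ripple = {ripple:.1f} dB")
```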

  14. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  15. Diamond heteroepitaxial lateral overgrowth

    Science.gov (United States)

    Tang, Yung-Hsiu

    This dissertation describes improvements in the growth of single crystal diamond by microwave plasma-assisted chemical vapor deposition (CVD). Heteroepitaxial (001) diamond was grown on 1 cm² a-plane sapphire substrates using an epitaxial (001) Ir thin film as a buffer layer. Low-energy ion bombardment of the Ir layer, a process known as bias-enhanced nucleation, is a key step in achieving a high density of diamond nuclei. Bias conditions were optimized to form uniformly high nucleation densities across the substrates, which led to well-coalesced diamond thin films after short growth times. Epitaxial lateral overgrowth (ELO) was used as a means of decreasing diamond internal stress by impeding the propagation of threading dislocations into the growing material. Its use in diamond growth requires adaptation to the aggressive chemical and thermal environment of the hydrogen plasma in a CVD reactor. Three ELO variants were developed. The most successful utilized a gold (Au) mask prepared by vacuum evaporation onto the surface of a thin heteroepitaxial diamond layer. The Au mask pattern, a series of parallel stripes on the micrometer scale, was produced by standard lift-off photolithography. When diamond overgrows the mask, dislocations are largely confined to the substrate. Differing degrees of confinement were studied by varying the stripe geometry and orientation. Significant improvement in diamond quality was found in the overgrown regions, as evidenced by reduction of the Raman scattering linewidth. The Au layer was found to remain intact during diamond overgrowth and did not chemically bond with the diamond surface. Besides impeding the propagation of threading dislocations, it was discovered that the thermally-induced stress in the CVD diamond was significantly reduced as a result of the ductile Au layer. Cracking and delamination of the diamond from the substrate was mostly eliminated. When diamond was grown to thicknesses above 0.1 mm it was found that

  16. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    Full Text Available INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included measuring electrophysiological and behavioral auditory processing and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance in some of the behavioral auditory processing tests and higher hearing aid benefit in noisy situations (p-value < 0.05). No changes were noted for the control group. CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency, improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, figure-to-ground for verbal sounds and greater benefits in reverberant and noisy environments.

  17. Neurophysiological Correlates of Visual Dominance: A Lateralized Readiness Potential Investigation

    Science.gov (United States)

    Li, You; Liu, Mingxin; Zhang, Wei; Huang, Sai; Zhang, Bao; Liu, Xingzhou; Chen, Qi

    2017-01-01

    When multisensory information concurrently arrives at our receptors, visual information often receives preferential processing and eventually dominates awareness and behavior. Previous research suggested that the visual dominance effect implicated the prioritizing of visual information into the motor system. In order to further reveal the underpinning neurophysiological mechanism of how visual information is prioritized into the motor system when vision dominates audition, the present study examined the time course of a particular motor activation ERP component, the lateralized readiness potential (LRP), during multisensory competition. The onsets of both stimulus-locked LRP (S-LRP) and response-locked LRP (R-LRP) were measured. Results showed that the R-LRP onset to the auditory target was delayed about 91 ms when it was paired with a simultaneously presented visual target, compared to when it was presented by itself. For the visual target, however, the R-LRP onset was comparable irrespective of whether it was paired with an auditory target or not. No significant difference was obtained for the onset of S-LRP. Taken together, the time courses of LRPs indicated that visual information was preferentially processed within the motor system, which coincides with the previous finding that the dorsal visual stream prioritizes the flow of visual information into the motor system.

  18. Neurophysiological Correlates of Visual Dominance: A Lateralized Readiness Potential Investigation.

    Science.gov (United States)

    Li, You; Liu, Mingxin; Zhang, Wei; Huang, Sai; Zhang, Bao; Liu, Xingzhou; Chen, Qi

    2017-01-01

    When multisensory information concurrently arrives at our receptors, visual information often receives preferential processing and eventually dominates awareness and behavior. Previous research suggested that the visual dominance effect implicated the prioritizing of visual information into the motor system. In order to further reveal the underpinning neurophysiological mechanism of how visual information is prioritized into the motor system when vision dominates audition, the present study examined the time course of a particular motor activation ERP component, the lateralized readiness potential (LRP), during multisensory competition. The onsets of both stimulus-locked LRP (S-LRP) and response-locked LRP (R-LRP) were measured. Results showed that the R-LRP onset to the auditory target was delayed about 91 ms when it was paired with a simultaneously presented visual target, compared to when it was presented by itself. For the visual target, however, the R-LRP onset was comparable irrespective of whether it was paired with an auditory target or not. No significant difference was obtained for the onset of S-LRP. Taken together, the time courses of LRPs indicated that visual information was preferentially processed within the motor system, which coincides with the previous finding that the dorsal visual stream prioritizes the flow of visual information into the motor system.

  19. The Effect of Gender on the N1-P2 Auditory Complex while Listening and Speaking with Altered Auditory Feedback

    Science.gov (United States)

    Swink, Shannon; Stuart, Andrew

    2012-01-01

    The effect of gender on the N1-P2 auditory complex was examined while listening and speaking with altered auditory feedback. Fifteen normal hearing adult males and 15 females participated. N1-P2 components were evoked while listening to self-produced nonaltered and frequency shifted /a/ tokens and during production of /a/ tokens during nonaltered…

  20. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...

  1. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  2. Processing of species-specific auditory patterns in the cricket brain by ascending, local, and descending neurons during standing and walking.

    Science.gov (United States)

    Zorović, M; Hedwig, B

    2011-05-01

    The recognition of the male calling song is essential for phonotaxis in female crickets. We investigated the responses toward different models of song patterns by ascending, local, and descending neurons in the brain of standing and walking crickets. We describe results for two ascending, three local, and two descending interneurons. Characteristic dendritic and axonal arborizations of the local and descending neurons indicate a flow of auditory information from the ascending interneurons toward the lateral accessory lobes and point toward the relevance of this brain region for cricket phonotaxis. Two aspects of auditory processing were studied: the tuning of interneuron activity to pulse repetition rate and the precision of pattern copying. Whereas ascending neurons exhibited weak, low-pass properties, local neurons showed both low- and band-pass properties, and descending neurons represented clear band-pass filters. Accurate copying of single pulses was found at all three levels of the auditory pathway. Animals were walking on a trackball, which allowed an assessment of the effect that walking has on auditory processing. During walking, all neurons were additionally activated, and in most neurons, the spike rate was correlated to walking velocity. The number of spikes elicited by a chirp increased with walking only in ascending neurons, whereas the peak instantaneous spike rate of the auditory responses increased on all levels of the processing pathway. Extra spiking activity resulted in a somewhat degraded copying of the pulse pattern in most neurons.

  3. LATERAL SURVIVAL: AN OT ACCOUNT

    Directory of Open Access Journals (Sweden)

    Moira Yip

    2004-12-01

    Full Text Available When laterals are the targets of phonological processes, laterality may or may not survive. In a fixed feature geometry, [lateral] should be lost if its superordinate node is eliminated either by the spreading of a neighbouring node or by coda neutralization. So if [lateral] is under Coronal (Blevins 1994), it should be lost under Place assimilation, and if [lateral] is under Sonorant Voicing (Rice & Avery 1991), it should be lost by rules that spread voicing. Yet in some languages laterality survives such spreading intact. Facts like these argue against a universal attachment of [lateral] under either Coronal or Sonorant Voicing, and in favour of an account in terms of markedness constraints on feature co-occurrence (Padgett 2000). The core of an OT account is that if IDENTLAT is ranked above whatever causes neutralization, such as SHARE-F or *CODAF, laterality will survive. If these rankings are reversed, we derive languages in which laterality is lost. The other significant factor is markedness: high-ranked feature co-occurrence constraints like *LATDORSAL can block spreading from affecting laterals at all.
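
    The ranking argument can be made concrete with a toy Optimality Theory evaluation: the winner is the candidate whose violation profile is best on the highest-ranked constraint on which the candidates differ. The sketch below is purely schematic; the input, the candidates and their violation counts are invented for illustration, and only the constraint labels follow the abstract.

```python
def eval_ot(candidates, ranking):
    """candidates: dict of candidate -> {constraint: violation count}.
    ranking: constraints ordered highest-ranked first. Returns the winner
    (lexicographic comparison of violation vectors, as in a standard OT tableau)."""
    return min(candidates, key=lambda c: [candidates[c].get(k, 0) for k in ranking])

# Invented input: a coda lateral followed by an obstruent, e.g. /al.pa/.
# One candidate keeps the lateral (violating the constraint driving neutralization),
# the other loses laterality (violating faithfulness to [lateral]).
candidates = {
    "keep lateral (faithful)": {"SHARE-F": 1, "IDENTLAT": 0},
    "lose laterality (neutralized)": {"SHARE-F": 0, "IDENTLAT": 1},
}

print(eval_ot(candidates, ["IDENTLAT", "SHARE-F"]))  # -> laterality survives
print(eval_ot(candidates, ["SHARE-F", "IDENTLAT"]))  # -> laterality is lost
```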

  4. Auditory discrimination of force of impact.

    Science.gov (United States)

    Lutfi, Robert A; Liu, Ching-Ju; Stoelinga, Christophe N J

    2011-04-01

    The auditory discrimination of force of impact was measured for three groups of listeners using sounds synthesized according to first-order equations of motion for the homogenous, isotropic bar [Morse and Ingard (1968). Theoretical Acoustics pp. 175-191]. The three groups were professional percussionists, nonmusicians, and individuals recruited from the general population without regard to musical background. In the two-interval, forced-choice procedure, listeners chose the sound corresponding to the greater force of impact as the length of the bar varied from one presentation to the next. From the equations of motion, a maximum-likelihood test for the task was determined to be of the form Δlog A + αΔ log f > 0, where A and f are the amplitude and frequency of any one partial and α = 0.5. Relative decision weights on Δ log f were obtained from the trial-by-trial responses of listeners and compared to α. Percussionists generally outperformed the other groups; however, the obtained decision weights of all listeners deviated significantly from α and showed variability within groups far in excess of the variability associated with replication. Providing correct feedback after each trial had little effect on the decision weights. The variability in these measures was comparable to that seen in studies involving the auditory discrimination of other source attributes.
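
    The maximum-likelihood rule quoted above compares the two intervals on a weighted sum of the amplitude and frequency differences of a partial: choose the first interval as the greater force of impact when Δlog A + α·Δlog f > 0, with α = 0.5. A minimal sketch of that decision rule follows; the amplitude and frequency values are invented for illustration.

```python
import math

ALPHA = 0.5  # weight on the frequency term in the maximum-likelihood rule above

def pick_greater_force(a1, f1, a2, f2, alpha=ALPHA):
    """Return 1 or 2: the interval to which the rule delta(log A) + alpha*delta(log f) > 0
    attributes the greater force of impact (A, f: amplitude and frequency of one partial)."""
    decision = (math.log(a1) - math.log(a2)) + alpha * (math.log(f1) - math.log(f2))
    return 1 if decision > 0 else 2

# Hypothetical trial: a shorter bar (higher partial frequency) struck slightly less hard.
print(pick_greater_force(a1=0.9, f1=440.0, a2=1.0, f2=392.0))  # -> 2
```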

  5. Happiness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R

    2016-08-01

    Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such effect may not be limited to negative emotions but reflect a general depletion of attentional resources by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of the exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction.

  6. Intrinsic modulators of auditory thalamocortical transmission.

    Science.gov (United States)

    Lee, Charles C; Sherman, S Murray

    2012-05-01

    Neurons in layer 4 of the primary auditory cortex receive convergent glutamatergic inputs from thalamic and cortical projections that activate different groups of postsynaptic glutamate receptors. Of particular interest in layer 4 neurons are the Group II metabotropic glutamate receptors (mGluRs), which hyperpolarize neurons postsynaptically via the downstream opening of GIRK channels. This pronounced effect on membrane conductance could influence the neuronal processing of synaptic inputs, such as those from the thalamus, essentially modulating information flow through the thalamocortical pathway. To examine how Group II mGluRs affect thalamocortical transmission, we used an in vitro slice preparation of the auditory thalamocortical pathways in the mouse to examine synaptic transmission under conditions where Group II mGluRs were activated. We found that both pre- and post-synaptic Group II mGluRs are involved in the attenuation of thalamocortical EPSP/Cs. Thus, thalamocortical synaptic transmission is suppressed via the presynaptic reduction of thalamocortical neurotransmitter release and the postsynaptic inhibition of the layer 4 thalamorecipient neurons. This could enable the thalamocortical pathway to autoregulate transmission, via either a gating or gain control mechanism, or both.

  7. Auditory evoked potentials in postconcussive syndrome.

    Science.gov (United States)

    Drake, M E; Weate, S J; Newell, S A

    1996-12-01

    The neuropsychiatric sequelae of minor head trauma have been a source of controversy. Most clinical and imaging studies have shown no alteration after concussion, but neuropsychological and neuropathological abnormalities have been reported. Some changes in neurophysiologic diagnostic tests have been described in postconcussive syndrome. We recorded middle latency auditory evoked potentials (MLR) and slow vertex responses (SVR) in 20 individuals with prolonged cognitive difficulties, behavior changes, dizziness, and headache after concussion. MLRs were elicited with alternating-polarity clicks presented monaurally at 70 dB SL at a rate of 4 per second, with 40 dB contralateral masking. Five hundred responses were recorded and replicated from Cz-A1 and Cz-A2, with a 50 ms analysis time and a 20-1000 Hz filter band pass. SVRs were recorded with the same montage, but used rarefaction clicks, a 0.5 Hz stimulus rate, a 500 ms analysis time, and a 1-50 Hz filter band pass. Na and Pa MLR components were reduced in amplitude in postconcussion patients. Pa latency was significantly longer in patients than in controls. SVR amplitudes were larger in concussed individuals, but the differences in latency and amplitude were not significant. These changes may reflect posttraumatic disturbance in presumed subcortical MLR generators, or in frontal or temporal cortical structures that modulate them. Middle and long-latency auditory evoked potentials may be helpful in the evaluation of postconcussive neuropsychiatric symptoms.

  8. Auditory verbal hallucinations: neuroimaging and treatment.

    Science.gov (United States)

    Bohlken, M M; Hugdahl, K; Sommer, I E C

    2017-01-01

    Auditory verbal hallucinations (AVH) are a frequently occurring phenomenon in the general population and are considered a psychotic symptom when presented in the context of a psychiatric disorder. Neuroimaging literature has shown that AVH are subserved by a variety of alterations in brain structure and function, which primarily concentrate around brain regions associated with the processing of auditory verbal stimuli and with executive control functions. However, the direction of association between AVH and brain function remains equivocal in certain research areas and needs to be carefully reviewed and interpreted. When AVH have significant impact on daily functioning, several efficacious treatments can be attempted such as antipsychotic medication, brain stimulation and cognitive-behavioural therapy. Interestingly, the neural correlates of these treatments largely overlap with brain regions involved in AVH. This suggests that the efficacy of treatment corresponds to a normalization of AVH-related brain activity. In this selected review, we give a compact yet comprehensive overview of the structural and functional neuroimaging literature on AVH, with a special focus on the neural correlates of efficacious treatment.

  9. Selective attention in an insect auditory neuron.

    Science.gov (United States)

    Pollack, G S

    1988-07-01

    Previous work (Pollack, 1986) showed that an identified auditory neuron of crickets, the omega neuron, selectively encodes the temporal structure of an ipsilateral sound stimulus when a contralateral stimulus is presented simultaneously, even though the contralateral stimulus is clearly encoded when it is presented alone. The present paper investigates the physiological basis for this selective response. The selectivity for the ipsilateral stimulus is a result of the apparent intensity difference of ipsi- and contralateral stimuli, which is imposed by auditory directionality; when simultaneous presentation of stimuli from the 2 sides is mimicked by presenting low- and high-intensity stimuli simultaneously from the ipsilateral side, the neuron responds selectively to the high-intensity stimulus, even though the low-intensity stimulus is effective when it is presented alone. The selective encoding of the more intense (= ipsilateral) stimulus is due to intensity-dependent inhibition, which is superimposed on the cell's excitatory response to sound. Because of the inhibition, the stimulus with lower intensity (i.e., the contralateral stimulus) is rendered subthreshold, while the stimulus with higher intensity (the ipsilateral stimulus) remains above threshold. Consequently, the temporal structure of the low-intensity stimulus is filtered out of the neuron's spike train. The source of the inhibition is not known. It is not a consequence of activation of the omega neuron. Its characteristics are not consistent with those of known inhibitory inputs to the omega neuron.

  10. Talker-specific auditory imagery during reading

    Science.gov (United States)

    Nygaard, Lynne C.; Duke, Jessica; Kawar, Kathleen; Queen, Jennifer S.

    2004-05-01

    The present experiment was designed to determine if auditory imagery during reading includes talker-specific characteristics such as speaking rate. Following Kosslyn and Matt (1977), participants were familiarized with two talkers during a brief prerecorded conversation. One talker spoke at a fast speaking rate and one spoke at a slow speaking rate. During familiarization, participants were taught to identify each talker by name. At test, participants were asked to read two passages and told that either the slow or fast talker wrote each passage. In one condition, participants were asked to read each passage aloud, and in a second condition, they were asked to read each passage silently. Participants pressed a key when they had completed reading the passage, and reading times were collected. Reading times were significantly slower when participants thought they were reading a passage written by the slow talker than when reading a passage written by the fast talker. However, the effects of speaking rate were only present in the reading-aloud condition. Additional experiments were conducted to investigate the role of attention to talker's voice during familiarization. These results suggest that readers may engage in auditory imagery while reading that preserves perceptual details of an author's voice.

  11. Amyotrophic lateral sclerosis

    Directory of Open Access Journals (Sweden)

    Leigh P Nigel

    2009-02-01

    Full Text Available Abstract Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease characterised by progressive muscular paralysis reflecting degeneration of motor neurones in the primary motor cortex, corticospinal tracts, brainstem and spinal cord. Incidence (average 1.89 per 100,000/year) and prevalence (average 5.2 per 100,000) are relatively uniform in Western countries, although foci of higher frequency occur in the Western Pacific. The mean age of onset for sporadic ALS is about 60 years. Overall, there is a slight male prevalence (M:F ratio ~1.5:1). Approximately two thirds of patients with typical ALS have a spinal form of the disease (limb onset) and present with symptoms related to focal muscle weakness and wasting, where the symptoms may start either distally or proximally in the upper and lower limbs. Gradually, spasticity may develop in the weakened atrophic limbs, affecting manual dexterity and gait. Patients with bulbar onset ALS usually present with dysarthria and dysphagia for solids or liquids, and limb symptoms can develop almost simultaneously with bulbar symptoms, and in the vast majority of cases will occur within 1–2 years. Paralysis is progressive and leads to death due to respiratory failure within 2–3 years for bulbar onset cases and 3–5 years for limb onset ALS cases. Most ALS cases are sporadic but 5–10% of cases are familial, and of these 20% have a mutation of the SOD1 gene and about 2–5% have mutations of the TARDBP (TDP-43) gene. Two percent of apparently sporadic patients have SOD1 mutations, and TARDBP mutations also occur in sporadic cases. The diagnosis is based on clinical history, examination, electromyography, and exclusion of 'ALS-mimics' (e.g. cervical spondylotic myelopathies, multifocal motor neuropathy, Kennedy's disease) by appropriate investigations. The pathological hallmarks comprise loss of motor neurones with intraneuronal ubiquitin-immunoreactive inclusions in upper motor neurones and TDP-43

  12. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia

    Directory of Open Access Journals (Sweden)

    Hironori Kuga, M.D.

    2016-10-01

    We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.

  13. Fast-spiking GABA circuit dynamics in the auditory cortex predict recovery of sensory processing following peripheral nerve damage.

    Science.gov (United States)

    Resnik, Jennifer; Polley, Daniel B

    2017-03-21

    Cortical neurons remap their receptive fields and rescale sensitivity to spared peripheral inputs following sensory nerve damage. To address how these plasticity processes are coordinated over the course of functional recovery, we tracked receptive field reorganization, spontaneous activity, and response gain from individual principal neurons in the adult mouse auditory cortex over a 50-day period surrounding either moderate or massive auditory nerve damage. We related the day-by-day recovery of sound processing to dynamic changes in the strength of intracortical inhibition from parvalbumin-expressing (PV) inhibitory neurons. Whereas the status of brainstem-evoked potentials did not predict the recovery of sensory responses to surviving nerve fibers, homeostatic adjustments in PV-mediated inhibition during the first days following injury could predict the eventual recovery of cortical sound processing weeks later. These findings underscore the potential importance of self-regulated inhibitory dynamics for the restoration of sensory processing in excitatory neurons following peripheral nerve injuries.

  14. Missing and delayed auditory responses in young and older children with autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    J. Christopher eEdgar

    2014-06-01

    Full Text Available Background: The development of left and right superior temporal gyrus (STG) 50 ms (M50) and 100 ms (M100) auditory responses in typically developing (TD) children and in children with autism spectrum disorder (ASD) was examined. It was hypothesized that (1) M50 responses would be observed equally often in younger and older children, (2) M100 responses would be observed more often in older than younger children, indicating later development of secondary auditory areas, and (3) M100 but not M50 would be observed less often in ASD than TD in both age groups, reflecting slower maturation of later developing auditory areas in ASD. Methods: 35 typically developing controls, 63 ASD without language impairment (ASD-LI), and 38 ASD with language impairment (ASD+LI) were recruited. The presence or absence of a STG M50 and M100 was scored. Subjects were grouped into younger (6- to 10-years-old) and older (11- to 15-years-old) groups. Results: Although M50 responses were observed equally often in older and younger subjects and equally often in TD and ASD, left and right M50 responses were delayed in ASD-LI and ASD+LI. Group comparisons showed that in younger subjects M100 responses were observed more often in TD than ASD+LI (90% vs 66%, p=0.04), with no differences between TD and ASD-LI (90% vs 76%, p=0.14) or between ASD-LI and ASD+LI (76% vs 66%, p=0.53). In older subjects, whereas no differences were observed between TD and ASD+LI, responses were observed more often in ASD-LI than ASD+LI. Conclusions: Although present in all groups, M50 responses were delayed in ASD, suggesting delayed development of earlier developing auditory areas. Examining the TD data, findings indicated that by 11 years a right M100 should be observed in 100% of subjects and a left M100 in 80% of subjects. Thus, by 11 years, lack of a left and especially right M100 offers neurobiological insight into sensory processing that may underlie language or cognitive impairment.

  15. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.

  16. The Study of Frequency Self Care Strategies against Auditory Hallucinations

    Directory of Open Access Journals (Sweden)

    Mahin Nadem

    2012-03-01

    Full Text Available Background: In schizophrenic clients, self-care strategies against auditory hallucinations can decrease the disturbances resulting from hallucinations. This study aimed to assess the frequency of self-care strategies against auditory hallucinations in paranoid schizophrenic patients hospitalized in Shafa Hospital. Materials and Method: This was a descriptive study of 201 patients with paranoid schizophrenia hospitalized in a psychiatry unit in Rasht, recruited by convenience sampling. The gathered data consist of two parts: the first covers demographic characteristics and the second is a self-report questionnaire of 38 items about self-care strategies. Results: There were statistically significant relationships between demographic variables, knowledge effect and self-care strategies against auditory hallucinations: sex with the physical domain (p 0.07), marital status with the cognitive domain (p>0.07) and living status with the behavioural domain (p>0.01). 53.2% of the reported auditory hallucinations were command hallucinations; furthermore, the most effective self-care strategies against auditory hallucinations were from the physical domain, and substance abuse (82.1%) was the most effective strategy in this domain. Conclusion: Clients with paranoid schizophrenia used strategies from the physical domain against auditory hallucinations more than other domains, and this result highlights their need for appropriate nursing intervention. Instruction and guidance about selecting effective self-care strategies against auditory ha

  17. Translation and adaptation of functional auditory performance indicators (FAPI)

    Directory of Open Access Journals (Sweden)

    Karina Ferreira

    2011-12-01

    Full Text Available Work with deaf children has gained new attention since the expectation and goal of therapy has expanded to language development and subsequent language learning. Many clinical tests were developed for evaluation of speech sound perception in young children in response to the need for accurate assessment of hearing skills that developed from the use of individual hearing aids or cochlear implants. These tests also allow the evaluation of the rehabilitation program. However, few of these tests are available in Portuguese. Evaluation with the Functional Auditory Performance Indicators (FAPI) generates a child's functional auditory skills profile, which lists auditory skills in an integrated and hierarchical order. It has seven hierarchical categories, including sound awareness, meaningful sound, auditory feedback, sound source localizing, auditory discrimination, short-term auditory memory, and linguistic auditory processing. FAPI evaluation allows the therapist to map the child's hearing profile performance, determine the target for increasing the hearing abilities, and develop an effective therapeutic plan. Objective: Since the FAPI is an American test, the inventory was adapted for application in the Brazilian population. Material and Methods: The translation was done following the steps of translation and back translation, and reproducibility was evaluated. Four translated versions (two originals and two back-translated) were compared, and revisions were done to ensure language adaptation and grammatical and idiomatic equivalence. Results: The inventory was duly translated and adapted. Conclusion: Further studies about the application of the translated FAPI are necessary to make the test practicable in Brazilian clinical use.

  18. Auditory-perceptual learning improves speech motor adaptation in children.

    Science.gov (United States)

    Shiller, Douglas M; Rochon, Marie-Lyne

    2014-08-01

    Auditory feedback plays an important role in children's speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5- to 7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children's ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation.

  19. Missing a trick: Auditory load modulates conscious awareness in audition.

    Science.gov (United States)

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing.

  20. A corollary discharge maintains auditory sensitivity during sound production.

    Science.gov (United States)

    Poulet, James F A; Hedwig, Berthold

    2002-08-22

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  1. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  2. Cochlear Responses and Auditory Brainstem Response Functions in Adults with Auditory Neuropathy/ Dys-Synchrony and Individuals with Normal Hearing

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2007-06-01

    Full Text Available Background and Aim: Physiologic measures of cochlear and auditory nerve function may be of assistance in distinguishing between hearing disorders due primarily to auditory nerve impairment and those due primarily to cochlear hair cell dysfunction. The goal of the present study was to measure cochlear responses (otoacoustic emissions and cochlear microphonics) and the auditory brainstem response in adults with auditory neuropathy/dys-synchrony and in subjects with normal hearing. Materials and Methods: Patients were 16 adults (32 ears) aged 14-30 years with auditory neuropathy/dys-synchrony and 16 individuals aged 16-30 years, of both sexes. The results of transient otoacoustic emission, cochlear microphonic and auditory brainstem response measures were compared in both groups, and the effects of age, sex, ear and degree of hearing loss were studied. Results: The pure-tone average was 48.1 dB HL in the auditory neuropathy/dys-synchrony group, and low-tone-loss and flat audiograms were more frequent than other audiogram shapes. Transient otoacoustic emissions were present in all auditory neuropathy/dys-synchrony subjects except two cases, and their average was similar in both studied groups. The latency and amplitude of the largest reversed cochlear microphonic response were significantly higher in auditory neuropathy/dys-synchrony patients than in controls. The correlation between cochlear microphonic amplitude and degree of hearing loss was not significant, and age had a significant effect on some cochlear microphonic measures. The auditory brainstem response was absent in auditory neuropathy/dys-synchrony patients even at low stimulus rates. Conclusion: In adults whose speech understanding is worse than predicted from the degree of hearing loss and who are suspected of auditory neuropathy/dys-synchrony, low-tone-loss and flat audiograms are more frequent. Usually auditory brainstem response is absent in

  3. On the planum temporale lateralization in suprasegmental speech perception: evidence from a study investigating behavior, structure, and function.

    Science.gov (United States)

    Liem, Franziskus; Hurschler, Martina A; Jäncke, Lutz; Meyer, Martin

    2014-04-01

    This study combines functional and structural magnetic resonance imaging to test the "asymmetric sampling in time" (AST) hypothesis, which makes assertions about the symmetrical and asymmetrical representation of speech in the primary and nonprimary auditory cortex. Twenty-three volunteers participated in this parametric clustered-sparse fMRI study. The availability of slowly changing acoustic cues in spoken sentences was systematically reduced over continuous segments with varying lengths (100, 150, 200, 250 ms) by utilizing local time-reversion. As predicted by the hypothesis, functional lateralization in Heschl's gyrus could not be observed. Lateralization in the planum temporale and posterior superior temporal gyrus shifted towards the right hemisphere with decreasing suprasegmental temporal integrity. Cortical thickness of the planum temporale was automatically measured. Participants with an L > R cortical thickness performed better on the in-scanner auditory pattern-matching task. Taken together, these findings support the AST hypothesis and provide substantial novel insight into the division of labor between left and right nonprimary auditory cortex functions during comprehension of spoken utterances. In addition, the present data yield support for a structural-behavioral relationship in the nonprimary auditory cortex.

  4. Medio-lateral postural instability in subjects with tinnitus

    Directory of Open Access Journals (Sweden)

    Zoi eKapoula

    2011-05-01

    Full Text Available Background: Many patients show modulation of tinnitus by gaze, jaw or neck movements, reflecting abnormal sensorimotor integration and interaction between various inputs. Postural control is based on multi-sensory integration (visual, vestibular, somatosensory, and oculomotor), and indeed there is now evidence that posture can also be influenced by sound. Perhaps tinnitus influences posture similarly to external sound. This study examines the quality of postural performance in quiet stance in patients with modulated tinnitus. Methods: Twenty-three patients with highly modulated tinnitus were selected in the ENT service. Twelve reported exclusively or predominately left tinnitus, eight right and three bilateral. Eighteen control subjects were also tested. Subjects were asked to fixate a target at 40 cm for 51 s; posturography was performed with the platform (Technoconcept, 40 Hz) for both the eyes open and eyes closed conditions. Results: For both conditions, tinnitus subjects showed abnormally high lateral body sway (SDx). This was corroborated by fast Fourier transformation (FFTx) and wavelet analysis. For patients with left tinnitus only, medio-lateral sway increased significantly when looking away from the center. Conclusions: Similarly to external sound stimulation, tinnitus could influence lateral sway by activating attention shifts, and perhaps vestibular responses. Poor integration of sensorimotor signals is another possibility. Such abnormalities would be accentuated in left tinnitus because of the importance of the right cerebral cortex in processing both auditory-tinnitus and attention.

  5. Air pollution is associated with brainstem auditory nuclei pathology and delayed brainstem auditory evoked potentials

    OpenAIRE

    Calderón-Garcidueñas, Lilian; D’Angiulli, Amedeo; Kulesza, Randy J.; Torres-Jardón, Ricardo; Osnaya, Norma; Romero, Lina; Keefe, Sheyla; Herritt, Lou; Brooks, Diane M.; Avila-Ramirez, Jose; Delgado-Chávez, Ricardo; Medina-Cortina, Humberto; González-González, Luis Oscar

    2011-01-01

    We assessed brainstem inflammation in children exposed to air pollutants by comparing brainstem auditory evoked potentials (BAEPs) and blood inflammatory markers in children aged 96.3 ± 8.5 months from a highly polluted city (n=34) versus a low-pollution city (n=17). The brainstems of nine children with accidental deaths were also examined. Children from the highly polluted environment had significant delays in wave III (t(50)=17.038; p

  6. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  7. Continuity of visual and auditory rhythms influences sensorimotor coordination.

    Directory of Open Access Journals (Sweden)

    Manuel Varlet

    Full Text Available People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performances when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences in performance between visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate, irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.

  8. Tiapride for the treatment of auditory hallucinations in schizophrenia

    Directory of Open Access Journals (Sweden)

    Sagar Karia

    2013-01-01

    Full Text Available Hallucinations are considered core symptoms of psychosis by both the International Classification of Diseases-10 (ICD-10) and the Diagnostic and Statistical Manual for the Classification of Psychiatric Disorders - 4th edition, text revised (DSM-IV TR). The most common type of hallucination in patients with schizophrenia is auditory, followed by visual hallucinations. A few patients with schizophrenia have persisting auditory hallucinations despite all other features of schizophrenia having improved. Here, we report two cases where tiapride was useful as an add-on drug for treating persistent auditory hallucinations.

  9. Human Auditory Processing: Insights from Cortical Event-related Potentials

    Directory of Open Access Journals (Sweden)

    Alexandra P. Key

    2016-04-01

    Full Text Available Human communication and language skills rely heavily on the ability to detect and process auditory inputs. This paper reviews possible applications of the event-related potential (ERP) technique to the study of cortical mechanisms supporting human auditory processing, including speech stimuli. Following a brief introduction to the ERP methodology, the remaining sections focus on demonstrating how ERPs can be used in humans to address research questions related to cortical organization, maturation and plasticity, as well as the effects of sensory deprivation and multisensory interactions. The review is intended to serve as a primer for researchers interested in using ERPs for the study of the human auditory system.

  10. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    DEFF Research Database (Denmark)

    Gil Carvajal, Juan Camilo; Cubick, Jens; Santurette, Sébastien;

    2016-01-01

    whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings...... decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between...

  11. A loudspeaker-based room auralization system for auditory research

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel

    to systematically study the signal processing of realistic sounds by normal-hearing and hearing-impaired listeners, a flexible, reproducible and fully controllable auditory environment is needed. A loudspeaker-based room auralization (LoRA) system was developed in this thesis to provide virtual auditory...... environments (VAEs) with an array of loudspeakers. The LoRA system combines state-of-the-art acoustic room models with sound-field reproduction techniques. Limitations of these two techniques were taken into consideration together with the limitations of the human auditory system to localize sounds...

  12. DNA methyltransferase activity is required for memory-related neural plasticity in the lateral amygdala.

    Science.gov (United States)

    Maddox, Stephanie A; Watts, Casey S; Schafe, Glenn E

    2014-01-01

    We have previously shown that auditory Pavlovian fear conditioning is associated with an increase in DNA methyltransferase (DNMT) expression in the lateral amygdala (LA) and that intra-LA infusion or bath application of an inhibitor of DNMT activity impairs the consolidation of an auditory fear memory and long-term potentiation (LTP) at thalamic and cortical inputs to the LA, in vitro. In the present study, we use awake behaving neurophysiological techniques to examine the role of DNMT activity in memory-related neurophysiological changes accompanying fear memory consolidation and reconsolidation in the LA, in vivo. We show that auditory fear conditioning results in a training-related enhancement in the amplitude of short-latency auditory-evoked field potentials (AEFPs) in the LA. Intra-LA infusion of a DNMT inhibitor impairs both fear memory consolidation and, in parallel, the consolidation of training-related neural plasticity in the LA; that is, short-term memory (STM) and short-term training-related increases in AEFP amplitude in the LA are intact, while long-term memory (LTM) and long-term retention of training-related increases in AEFP amplitudes are impaired. In separate experiments, we show that intra-LA infusion of a DNMT inhibitor following retrieval of an auditory fear memory has no effect on post-retrieval STM or short-term retention of training-related changes in AEFP amplitude in the LA, but significantly impairs both post-retrieval LTM and long-term retention of AEFP amplitude changes in the LA. These findings are the first to demonstrate the necessity of DNMT activity in the consolidation and reconsolidation of memory-associated neural plasticity, in vivo.

  13. Predictive uncertainty in auditory sequence processing

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine...... the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using...... in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis...

  14. A computer model of auditory stream segregation.

    Science.gov (United States)

    Beauvois, M W; Meddis, R

    1991-08-01

    A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
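
    A toy illustration of the two ingredients named above, leaky integration of channel activity and an attentional rule that follows the most active channel, is sketched below; the channel count, time constant and random input are illustrative assumptions, not the published model's parameters.

        # Sketch: leaky integration per channel plus winner-take-all attention.
        import numpy as np

        rng = np.random.default_rng(0)
        n_channels, n_steps, dt = 4, 1000, 0.001   # assumed values
        tau = 0.05                                 # assumed leak time constant (s)

        # noisy drive, with channel 0 receiving the strongest input on average
        drive = rng.random((n_steps, n_channels)) * np.array([1.0, 0.6, 0.4, 0.2])

        excitation = np.zeros(n_channels)
        attended = np.empty(n_steps, dtype=int)
        for t in range(n_steps):
            # excitation accumulates with input and dissipates ("leaks") over time
            excitation += dt * (drive[t] - excitation / tau)
            attended[t] = int(np.argmax(excitation))   # attend to the most active channel

        print("fraction of time each channel is attended:",
              np.bincount(attended, minlength=n_channels) / n_steps)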

  15. Hearing Restoration with Auditory Brainstem Implant

    Science.gov (United States)

    NAKATOMI, Hirofumi; MIYAWAKI, Satoru; KIN, Taichi; SAITO, Nobuhito

    2016-01-01

    Auditory brainstem implant (ABI) technology attempts to restore hearing in patients deafened by bilateral cochlear nerve injury through direct stimulation of the brainstem, but many aspects of the related mechanisms remain unknown. The unresolved issues can be grouped into three topics: which patients are the best candidates; which type of electrode should be used; and how to improve restored hearing. We evaluated our experience with 11 cases of ABI placement. We found that if at least seven of eleven electrodes of the MED-EL ABI are effectively placed in a patient with no deformation of the fourth ventricle, open set sentence recognition of approximately 20% and closed set word recognition of approximately 65% can be achieved with the ABI alone. Appropriate selection of patients for ABI placement can lead to good outcomes. Further investigation is required regarding patient selection criteria and methods of surgery for effective ABI placement. PMID:27464470

  16. Changes of brainstem auditory and somatosensory evoked

    Institute of Scientific and Technical Information of China (English)

    Yang Jian

    2000-01-01

    Objective: To investigate the characteristics and clinical value of evoked potentials in the late infantile form of metachromatic leukodystrophy. Methods: Brainstem auditory and somatosensory evoked potentials were recorded in 6 patients and compared with the results of CT scans. Results: All 6 patients had abnormal BAEP and MNSEP results. The main abnormal parameters in BAEP were latency prolongation of wave Ⅰ and inter-peak latency prolongation of Ⅰ-Ⅲ and Ⅰ-Ⅴ. The abnormal features of MNSEP were low amplitude and absence of wave N9 and inter-peak latency prolongation of N9-N13 and N13-N20, but no significant change of N20 amplitude. The results also revealed that abnormal changes in BAEP and MNSEP appeared earlier than those in CT. Conclusion: The detection of BAEP and MNSEP in the late infantile form of metachromatic leukodystrophy might reveal abnormalities of conductive function in the nervous system early and might be a useful method in diagnosis.

  17. Discrimination of auditory stimuli during isoflurane anesthesia.

    Science.gov (United States)

    Rojas, Manuel J; Navas, Jinna A; Greene, Stephen A; Rector, David M

    2008-10-01

    Deep isoflurane anesthesia initiates a burst suppression pattern in which high-amplitude bursts are preceded by periods of nearly silent electroencephalogram. The burst suppression ratio (BSR) is the percentage of suppression (silent electroencephalogram) during the burst suppression pattern and is one parameter used to assess anesthesia depth. We investigated cortical burst activity in rats in response to different auditory stimuli presented during the burst suppression state. We noted a rapid appearance of bursts and a significant decrease in the BSR during stimulation. The BSR changes were distinctive for the different stimuli applied, and the BSR decreased significantly more when stimulated with a voice familiar to the rat as compared with an unfamiliar voice. These results show that the cortex can show differential sensory responses during deep isoflurane anesthesia.
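
    Since the burst suppression ratio (BSR) is defined above as the percentage of suppression within the burst suppression pattern, a simple way to estimate it is to flag near-silent samples and keep only sufficiently long runs. The threshold, minimum run length and synthetic signal below are illustrative assumptions, not the authors' scoring criteria.

        # Sketch: estimate a burst suppression ratio from an EEG trace.
        import numpy as np

        def burst_suppression_ratio(eeg, fs, thresh_uv=5.0, min_supp_s=0.5):
            """Percentage of the record classified as suppression."""
            suppressed = np.abs(eeg) < thresh_uv * 1e-6      # near-silent samples
            min_len = int(min_supp_s * fs)                   # require persistent suppression
            mask = np.zeros_like(suppressed)
            run_start = None
            for i, s in enumerate(np.append(suppressed, False)):
                if s and run_start is None:
                    run_start = i
                elif not s and run_start is not None:
                    if i - run_start >= min_len:
                        mask[run_start:i] = True
                    run_start = None
            return 100.0 * mask.mean()

        fs = 250
        eeg = np.random.randn(fs * 60) * 2e-6                # placeholder 60 s recording
        print(f"BSR: {burst_suppression_ratio(eeg, fs):.1f}%")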

  18. Low power adder based auditory filter architecture.

    Science.gov (United States)

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlear implant devices are battery powered and should have a long working life to avoid replacing devices at regular intervals of years. Hence, devices with low power consumption are required. Cochlear devices contain numerous filters, each responsible for a different frequency band, which helps in identifying speech signals across the audible range. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to the TSMC 65 nm technology node. The standard ASIC design methodology has been adopted to carry out the power analysis. The proposed FIR filter architecture reduced leakage power by 15% and increased performance by 2.76%.
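
    One common way to realise a multiplierless, LUT-based FIR filter is distributed arithmetic, where partial sums of the tap coefficients are precomputed in a lookup table and each output is formed from table reads, shifts and adds only. The sketch below illustrates that general idea in Python; the tap values, word length and test input are illustrative assumptions, not the architecture reported in the paper.

        # Sketch: distributed-arithmetic (multiplier-free) FIR filtering.
        import numpy as np

        coeffs = [3, 5, 7, 2]                  # assumed integer FIR taps
        n_taps, n_bits = len(coeffs), 8        # 8-bit unsigned input samples

        # LUT indexed by one bit taken from each of the n_taps delayed samples
        lut = [sum(c for c, bit in zip(coeffs, format(addr, f"0{n_taps}b")[::-1]) if bit == "1")
               for addr in range(2 ** n_taps)]

        def fir_da(samples):
            """Per output sample: n_bits LUT reads combined by shift-and-add."""
            delay, out = [0] * n_taps, []
            for s in samples:
                delay = [s] + delay[:-1]
                acc = 0
                for b in range(n_bits):
                    addr = sum(((delay[k] >> b) & 1) << k for k in range(n_taps))
                    acc += lut[addr] << b      # shift-add replaces multiplication
                out.append(acc)
            return out

        x = [10, 0, 0, 0, 255, 1]
        print(fir_da(x))
        print(np.convolve(x, coeffs)[:len(x)].tolist())   # reference result for comparison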

  19. Resting Heart Rate and Auditory Evoked Potential

    Directory of Open Access Journals (Sweden)

    Simone Fiuza Regaçone

    2015-01-01

    Full Text Available The objective of this study was to evaluate the association between resting heart rate (HR) and the components of auditory event-related potentials (ERPs) at rest in women. We investigated 21 healthy female university students between 18 and 24 years old. We performed a complete audiological evaluation, measured heart rate for 10 minutes at rest (Polar RS800CX heart rate monitor) and performed ERP analysis (discrepancy in frequency and duration). There was a moderate negative correlation of the N1 and P3a components with resting HR and a strong positive correlation of the P2 and N2 components with resting HR. Larger ERP components are associated with higher resting HR.

  20. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models explore the noise propagation mechanisms associated with noise attenuation and transmission paths created when using hearing protectors such as earplugs and headsets in high-noise environments. Biomedical finite element (FE) models are developed based on volumetric computed tomography scan data, which provide explicit external ear, ear canal, middle ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as prioritizing noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  1. Tonotopic organization of human auditory association cortex.

    Science.gov (United States)

    Cansino, S; Williamson, S J; Karron, D

    1994-11-07

    Neuromagnetic studies of responses in human auditory association cortex for tone burst stimuli provide evidence for a tonotopic organization. The magnetic source image for the 100 ms component evoked by the onset of a tone is qualitatively similar to that of primary cortex, with responses lying deeper beneath the scalp for progressively higher tone frequencies. However, the tonotopic sequence of association cortex in three subjects is found largely within the superior temporal sulcus, although in the right hemisphere of one subject some sources may be closer to the inferior temporal sulcus. The locus of responses for individual subjects suggests a progression across the cortical surface that is approximately proportional to the logarithm of the tone frequency, as observed previously for primary cortex, with the span of 10 mm for each decade in frequency being comparable for the two areas.
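
    The logarithmic progression noted above, roughly 10 mm of cortical distance per decade of tone frequency, can be written as a one-line mapping; the reference frequency in this sketch is an illustrative assumption.

        # Sketch: approximate tonotopic distance as a function of tone frequency.
        import math

        def tonotopic_distance_mm(freq_hz, ref_hz=100.0, mm_per_decade=10.0):
            """Distance (mm) of the response locus from the ref_hz locus."""
            return mm_per_decade * math.log10(freq_hz / ref_hz)

        for f in (100, 1000, 3000):
            print(f, f"{tonotopic_distance_mm(f):.1f} mm")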

  2. Genetics of auditory mechano-electrical transduction.

    Science.gov (United States)

    Michalski, Nicolas; Petit, Christine

    2015-01-01

    The hair bundles of cochlear hair cells play a central role in the auditory mechano-electrical transduction (MET) process. The identification of MET components and of associated molecular complexes by biochemical approaches is impeded by the very small number of hair cells within the cochlea. In contrast, human and mouse genetics have proven to be particularly powerful. The study of inherited forms of deafness led to the discovery of several essential proteins of the MET machinery, which are currently used as entry points to decipher the associated molecular networks. Notably, MET relies not only on the MET machinery but also on several elements ensuring the proper sound-induced oscillation of the hair bundle or the ionic environment necessary to drive the MET current. Here, we review the most significant advances in the molecular bases of the MET process that emerged from the genetics of hearing.

  3. A Pascalian lateral drift sensor

    Energy Technology Data Exchange (ETDEWEB)

    Jansen, H., E-mail: hendrik.jansen@desy.de

    2016-09-21

    A novel concept of a layer-wise produced semiconductor sensor for precise particle tracking is proposed herein. In contrast to common semiconductor sensors, local regions with increased doping concentration deep in the bulk termed charge guides increase the lateral drift of free charges on their way to the read-out electrode. This lateral drift enables charge sharing independent of the incident position of the traversing particle. With a regular grid of charge guides the lateral charge distribution resembles a normalised Pascal's triangle for particles that are stopped in depths lower than the depth of the first layer of the charge guides. For minimum ionising particles a sum of binomial distributions describes the lateral charge distribution. This concept decouples the achievable sensor resolution from the pitch size as the characteristic length is replaced by the lateral distance of the charge guides.
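
    The "normalised Pascal's triangle" picture described above corresponds to a binomial spread of charge: after n layers of charge guides, each splitting the charge laterally, the fraction arriving at each position is a binomial coefficient divided by 2^n. The layer count and the symmetric 50/50 split in this sketch are illustrative assumptions, not device parameters.

        # Sketch: lateral charge-sharing fractions after n layers of charge guides.
        from math import comb

        def lateral_charge_fractions(n_layers):
            """Fraction of the charge arriving at each lateral position."""
            total = 2 ** n_layers
            return [comb(n_layers, k) / total for k in range(n_layers + 1)]

        print(lateral_charge_fractions(4))   # [0.0625, 0.25, 0.375, 0.25, 0.0625]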

  4. A Pascalian lateral drift sensor

    Science.gov (United States)

    Jansen, H.

    2016-09-01

    A novel concept of a layer-wise produced semiconductor sensor for precise particle tracking is proposed herein. In contrast to common semiconductor sensors, local regions with increased doping concentration deep in the bulk termed charge guides increase the lateral drift of free charges on their way to the read-out electrode. This lateral drift enables charge sharing independent of the incident position of the traversing particle. With a regular grid of charge guides the lateral charge distribution resembles a normalised Pascal's triangle for particles that are stopped in depths lower than the depth of the first layer of the charge guides. For minimum ionising particles a sum of binomial distributions describes the lateral charge distribution. This concept decouples the achievable sensor resolution from the pitch size as the characteristic length is replaced by the lateral distance of the charge guides.

  5. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
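
    The uncertainty measure used above, Shannon entropy of the model's predictive distribution over the next note, can be computed directly from the continuation probabilities. The example distributions below are made up for illustration; in the study they came from a variable-order Markov model.

        # Sketch: Shannon entropy (bits) of a predictive distribution.
        import numpy as np

        def shannon_entropy(p):
            p = np.asarray(p, dtype=float)
            p = p[p > 0]                      # ignore zero-probability continuations
            return float(-(p * np.log2(p)).sum())

        low_entropy_context  = [0.85, 0.05, 0.05, 0.05]   # one continuation strongly expected
        high_entropy_context = [0.25, 0.25, 0.25, 0.25]   # all continuations equally likely

        print(shannon_entropy(low_entropy_context))    # ~0.85 bits
        print(shannon_entropy(high_entropy_context))   # 2.0 bits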

  6. Effects of pitch on auditory number comparisons.

    Science.gov (United States)

    Campbell, Jamie I D; Scheepers, Florence

    2015-05-01

    Three experiments investigated interactions between auditory pitch and the numerical quantities represented by spoken English number words. In Experiment 1, participants heard a pair of sequential auditory numbers in the range zero to ten. They pressed a left-side or right-side key to indicate if the second number was lower or higher in numerical value. The vocal pitches of the two numbers either ascended or descended so that pitch change was congruent or incongruent with number change. The error rate was higher when pitch and number were incongruent relative to congruent trials. The distance effect on RT (i.e., slower responses for numerically near than far number pairs) occurred with pitch ascending but not descending. In Experiment 2, to determine if these effects depended on the left/right spatial mapping of responses, participants responded "yes" if the second number was higher and "no" if it was lower. Again, participants made more number comparison errors when number and pitch were incongruent, but there was no distance × pitch order effect. To pursue the latter, in Experiment 3, participants were tested with response buttons assigned left-smaller and right-larger ("normal" spatial mapping) or the reverse mapping. Participants who received normal mapping first presented a distance effect with pitch ascending but not descending as in Experiment 1, whereas participants who received reverse mapping first presented a distance effect with pitch descending but not ascending. We propose that the number and pitch dimensions of stimuli both activated spatial representations and that strategy shifts from quantity comparison to order processing were induced by spatial incongruities.

  7. Noise Trauma Induced Plastic Changes in Brain Regions outside the Classical Auditory Pathway

    Science.gov (United States)

    Chen, Guang-Di; Sheppard, Adam; Salvi, Richard

    2017-01-01

    The effects of intense noise exposure on the classical auditory pathway have been extensively investigated; however, little is known about the effects of noise-induced hearing loss on non-classical auditory areas in the brain such as the lateral amygdala (LA) and striatum (Str). To address this issue, we compared the noise-induced changes in spontaneous and tone-evoked responses from multiunit clusters (MUC) in the LA and Str with those seen in auditory cortex (AC). High-frequency octave band noise (10–20 kHz) and narrow band noise (16–20 kHz) induced permanent threshold shifts (PTS) at high frequencies within and above the noise band but not at low frequencies. While the noise trauma significantly elevated spontaneous discharge rate (SR) in the AC, SRs in the LA and Str were only slightly increased across all frequencies. The high-frequency noise trauma affected tone-evoked firing rates in a frequency- and time-dependent manner and the changes appeared to be related to the severity of the noise trauma. In the LA, tone-evoked firing rates were reduced at the high frequencies (trauma area) whereas firing rates were enhanced at the low frequencies or at the edge frequency, depending on the severity of hearing loss at the high frequencies. The firing rate temporal profile changed from a broad plateau to one sharp, delayed peak. In the AC, tone-evoked firing rates were depressed at high frequencies and enhanced at the low frequencies while the firing rate temporal profiles became substantially broader. In contrast, firing rates in the Str were generally decreased and firing rate temporal profiles became more phasic and less prolonged. The altered firing rate and pattern at low frequencies induced by high-frequency hearing loss could have perceptual consequences. The tone-evoked hyperactivity in low-frequency MUC could manifest as hyperacusis, whereas the discharge pattern changes could affect temporal resolution and integration. PMID:26701290

  8. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearingimpaired listener, but can only partially reveal the mechanisms...... that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing impaired perception for sound textures – temporally...... homogenous sounds such as rain, birds, or fire. It has been suggested that sound texture perception is mediated by time-averaged statistics measured from early auditory representations (McDermott et al., 2013). Changes to early auditory processing, such as broader “peripheral” filters or reduced compression...
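
    In the spirit of the time-averaged statistics mentioned above, a minimal texture summary can be obtained by band-pass filtering the sound, extracting a subband envelope, and averaging a few envelope moments over time. The filter band, the chosen moments and the synthetic input are illustrative assumptions, not the statistics used in the cited work.

        # Sketch: simple time-averaged envelope statistics for a sound "texture".
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 16000
        texture = np.random.randn(fs * 2)                 # placeholder 2 s noise "texture"

        b, a = butter(2, [2000, 4000], btype="band", fs=fs)
        subband = filtfilt(b, a, texture)
        envelope = np.abs(hilbert(subband))               # subband envelope

        stats = {
            "mean": envelope.mean(),
            "coeff_of_variation": envelope.std() / envelope.mean(),
            "skewness": ((envelope - envelope.mean()) ** 3).mean() / envelope.std() ** 3,
        }
        print(stats)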

  9. Oscillatory Cortical Network Involved in Auditory Verbal Hallucinations in Schizophrenia

    NARCIS (Netherlands)

    van Lutterveld, Remko; Hillebrand, Arjan; Diederen, Kelly M. J.; Daalman, Kirstin; Kahn, Rene S.; Stam, Cornelis J.; Sommer, Iris E. C.

    2012-01-01

    Background: Auditory verbal hallucinations (AVH), a prominent symptom of schizophrenia, are often highly distressing for patients. Better understanding of the pathogenesis of hallucinations could increase therapeutic options. Magnetoencephalography (MEG) provides direct measures of neuronal activity

  10. Ion channel noise can explain firing correlation in auditory nerves.

    Science.gov (United States)

    Moezzi, Bahar; Iannella, Nicolangelo; McDonnell, Mark D

    2016-10-01

    Neural spike trains are commonly characterized as a Poisson point process. However, the Poisson assumption is a poor model for spiking in auditory nerve fibres because it is known that interspike intervals display positive correlation over long time scales and negative correlation over shorter time scales. We have therefore developed a biophysical model based on the well-known Meddis model of the peripheral auditory system, to produce simulated auditory nerve fibre spiking statistics that more closely match the firing correlations observed in empirical data. We achieve this by introducing biophysically realistic ion channel noise to an inner hair cell membrane potential model that includes fractal fast potassium channels and deterministic slow potassium channels. We succeed in producing simulated spike train statistics that match empirically observed firing correlations. Our model thus replicates macro-scale stochastic spiking statistics in the auditory nerve fibres due to modeling stochasticity at the micro-scale of potassium channels.
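
    As a rough illustration of the firing-correlation statistic discussed above, the sketch below (a minimal stand-in, not the authors' Meddis-based hair-cell model) computes serial interspike-interval correlation coefficients for a renewal (Poisson-like) train and for a toy train whose mean rate drifts slowly, mimicking the long-range positive correlations that channel noise can introduce; all parameter values are illustrative assumptions.

```python
import numpy as np

def serial_correlation(isis, max_lag=5):
    """Serial correlation coefficients rho_k = corr(ISI_i, ISI_{i+k}).
    A renewal (Poisson-like) train gives rho_k near 0; auditory-nerve data
    show short-lag negative and long-lag positive correlations."""
    isis = np.asarray(isis, dtype=float)
    return np.array([np.corrcoef(isis[:-k], isis[k:])[0, 1]
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)

# Renewal train: independent exponential interspike intervals (ISIs).
renewal_isis = rng.exponential(scale=0.01, size=5000)

# Toy non-renewal train: a slowly drifting excitability (a crude stand-in for
# slow channel noise) modulates the mean ISI, inducing long-range correlation.
drift = np.cumsum(rng.normal(0.0, 0.02, size=5000))
drift = 1.0 + 0.3 * (drift - drift.mean()) / drift.std()
correlated_isis = rng.exponential(scale=0.01 * np.clip(drift, 0.2, None))

print("renewal    rho_1..5:", np.round(serial_correlation(renewal_isis), 3))
print("correlated rho_1..5:", np.round(serial_correlation(correlated_isis), 3))
```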

  11. Auditory hallucinations in childhood : associations with adversity and delusional ideation

    NARCIS (Netherlands)

    Bartels-Velthuis, A. A.; van de Willige, G.; Jenner, J. A.; Wiersma, D.; van Os, J.

    2012-01-01

    Background. Previous work suggests that exposure to childhood adversity is associated with the combination of delusions and hallucinations. In the present study, associations between (severity of) auditory vocal hallucinations (AVH) and (i) social adversity [traumatic experiences (TE) and stressful

  12. Modality specific neural correlates of auditory and somatic hallucinations

    Science.gov (United States)

    Shergill, S; Cameron, L; Brammer, M; Williams, S; Murray, R; McGuire, P

    2001-01-01

    Somatic hallucinations occur in schizophrenia and other psychotic disorders, although auditory hallucinations are more common. Although the neural correlates of auditory hallucinations have been described in several neuroimaging studies, little is known of the pathophysiology of somatic hallucinations. Functional magnetic resonance imaging (fMRI) was used to compare the distribution of brain activity during somatic and auditory verbal hallucinations, occurring at different times in a 36-year-old man with schizophrenia. Somatic hallucinations were associated with activation in the primary somatosensory and posterior parietal cortex, areas that normally mediate tactile perception. Auditory hallucinations were associated with activation in the middle and superior temporal cortex, areas involved in processing external speech. Hallucinations in a given modality seem to involve areas that normally process sensory information in that modality. PMID:11606687

  13. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel Nelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand the way in which some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system and music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  14. Auditory short-term memory activation during score reading.

    Directory of Open Access Journals (Sweden)

    Veerle L Simoens

    Full Text Available Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  15. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
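
    To make the penalty-score idea concrete, here is a minimal sketch of a greedy scheduler that delays each incoming message within a latency budget so as to minimize a simple pairwise overlap penalty. The Message fields, the register-based weighting, and the greedy search are illustrative assumptions, not the dissertation's actual heuristics.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Message:
    name: str
    start: float     # requested onset (s)
    duration: float  # length (s)
    register: int    # coarse pitch/timbre band; same band = worst masking (assumed)

def penalty(schedule):
    """Sum a conflict score over all message pairs: temporal overlap,
    weighted more heavily when two messages share the same register."""
    total = 0.0
    for a, b in combinations(schedule, 2):
        overlap = max(0.0, min(a.start + a.duration, b.start + b.duration)
                      - max(a.start, b.start))
        if overlap > 0.0:
            total += (2.0 if a.register == b.register else 1.0) * overlap
    return total

def greedy_schedule(requests, max_delay=1.0, step=0.1):
    """Place messages one at a time, delaying each (within its latency budget)
    to the onset that minimizes the running penalty."""
    placed = []
    for msg in requests:
        candidates = (Message(msg.name, msg.start + i * step, msg.duration, msg.register)
                      for i in range(int(max_delay / step) + 1))
        placed.append(min(candidates, key=lambda m: penalty(placed + [m])))
    return placed

requests = [Message("alarm", 0.0, 1.0, 3),
            Message("status", 0.2, 1.0, 3),
            Message("mail", 0.4, 0.5, 1)]
scheduled = greedy_schedule(requests)
print("total penalty:", round(penalty(scheduled), 2))
for m in scheduled:
    print(f"{m.name:7s} starts at {m.start:.1f} s (register {m.register})")
```

    A real presentation system would score richer perceptual dimensions (onset asynchrony, timbre, spatial position), but the overall structure of enumerating candidate schedules, scoring them, and keeping the cheapest one is the same.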

  16. Visual change detection recruits auditory cortices in early deafness.

    Science.gov (United States)

    Bottari, Davide; Heimler, Benedetta; Caclin, Anne; Dalmolin, Anna; Giard, Marie-Hélène; Pavani, Francesco

    2014-07-01

    Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent the auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in early deaf participants was paired with a reduction of the response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing the visual information and in comparing on-line the upcoming visual events, thus indicating that cross-modally recruited auditory cortices can reach this level of computation.

  17. Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2

    OpenAIRE

    2016-01-01

    Introduction “Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action” (American Diabetes Association). Previous literature has reported a connection between diabetes mellitus and hearing impairment. There is a dearth of literature on auditory temporal resolution ability in individuals with diabetes mellitus type 2. Objective The main objective of the present study was to assess auditory temporal resolution a...

  18. Auditory stream formation affects comodulation masking release retroactively

    DEFF Research Database (Denmark)

    Dau, Torsten; Ewert, Stephan; Oxenham, A. J.

    2009-01-01

    …in terms of the sequence of "postcursor" flanking bands forming a perceptual stream with the original flanking bands, resulting in perceptual segregation of the flanking bands from the masker. The results are consistent with the idea that modulation analysis occurs within, not across, auditory objects, and that across-frequency CMR only occurs if the on-frequency and flanking bands fall within the same auditory object or stream.

  19. Auditory neuropathy spectrum disorder in a child with albinism

    Directory of Open Access Journals (Sweden)

    Mayur Bhat

    2016-01-01

    Full Text Available Albinism is a congenital disorder characterized by complete or partial absence of pigment in the skin, eyes, and hair due to absent or defective melanin production. As a result, disruption can also be seen in the auditory pathways, along with other areas. Therefore, the aim of the present study is to highlight the underlying auditory neural deficits seen in albinism and to discuss the role of the audiologist in these cases.

  20. The plastic ear and perceptual relearning in auditory spatial perception.

    Science.gov (United States)

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, which results in significant degradation in localization performance. Following chronic exposure (10-60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state on auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  1. Auditory hypersensitivity in children and teenagers with autistic spectrum disorder

    OpenAIRE

    2004-01-01

    OBJECTIVE: To verify whether the clinical behavior of auditory hypersensitivity, reported in interviews with parents/caregivers and therapists/teachers of 46 children and teenagers suffering from autistic spectrum disorder, corresponds to audiological findings. METHOD: The clinical diagnosis of auditory hypersensitivity was investigated by means of an interview. Subsequently, a test of the acoustic stapedial reflex was conducted, and responses to intense acoustic stimulus in open field were observ...

  2. Auditory stream segregation in children with Asperger syndrome

    OpenAIRE

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E. (Eira); Nieminen-von Wendt, T.; Kujala, T. (Tiia)

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically deve...

  3. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration corresponding to training) with all feedback removed. Mean percentage error was 11.5% (SD ± 7.0%) for visual cues and 12.9% (SD ± 11.8%) for auditory cues. Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.

  4. The plastic ear and perceptual relearning in auditory spatial perception.

    Directory of Open Access Journals (Sweden)

    Simon Carlile

    2014-08-01

    Full Text Available The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, which results in significant degradation in localization performance. Following chronic exposure (10-60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state on auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.

  5. A unique cellular scaling rule in the avian auditory system.

    Science.gov (United States)

    Corfield, Jeremy R; Long, Brendan; Krilow, Justin M; Wylie, Douglas R; Iwaniuk, Andrew N

    2016-06-01

    Although it is clear that neural structures scale with body size, the mechanisms of this relationship are not well understood. Several recent studies have shown that the relationship between neuron numbers and brain (or brain region) size is not only different across mammalian orders, but also across auditory and visual regions within the same brains. Among birds, similar cellular scaling rules have not been examined in any detail. Here, we examine the scaling of auditory structures in birds and show that the scaling rules that have been established in the mammalian auditory pathway do not necessarily apply to birds. In galliforms, neuronal densities decrease with increasing brain size, suggesting that auditory brainstem structures increase in size faster than neurons are added; smaller brains have relatively more neurons than larger brains. The cellular scaling rules that apply to auditory brainstem structures in galliforms are, therefore, different from those found in the primate auditory pathway. It is likely that the factors driving this difference are associated with the anatomical specializations required for sound perception in birds, although there is a decoupling of neuron numbers in brain structures and hair cell numbers in the basilar papilla. This study provides significant insight into the allometric scaling of neural structures in birds and improves our understanding of the rules that govern neural scaling across vertebrates.

  6. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity changes due to stimulus timing were seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  7. [Analysis of auditory information in the brain of the cetacean].

    Science.gov (United States)

    Popov, V V; Supin, A Ia

    2006-01-01

    The specifics of the cetacean brain include an exceptional development of the auditory neural centres. The location of the projection sensory areas, including the auditory area, in the cetacean cerebral cortex is essentially different from that in other mammals. The EP characteristics indicated the presence of several functional divisions in the auditory cortex. Physiological studies of the cetacean auditory centres were mainly performed using the EP technique. Of the several types of EPs, the short-latency auditory EP was most thoroughly studied. In cetaceans, it is characterised by exceptionally high temporal resolution, with an integration time of about 0.3 ms, which corresponds to a cut-off frequency of 1700 Hz. This much exceeds the temporal resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans was measured using a number of variants of the masking technique. The acuity of frequency selectivity in cetaceans exceeds that of most terrestrial mammals (except bats). This acute frequency selectivity enables differentiation among the finest spectral patterns of auditory signals.
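
    The quoted correspondence between a 0.3 ms integration time and a 1700 Hz cut-off is consistent with reading the integration window as half the period of the highest resolvable modulation frequency; the following worked relation is our own reading of that figure, not a formula given in the record:

```latex
f_c \;\approx\; \frac{1}{2\,\tau_{\mathrm{int}}}
    \;=\; \frac{1}{2 \times 0.3\ \mathrm{ms}}
    \;\approx\; 1.7\ \mathrm{kHz}
```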

  8. Auditory function in individuals within Leber's hereditary optic neuropathy pedigrees.

    Science.gov (United States)

    Rance, Gary; Kearns, Lisa S; Tan, Johanna; Gravina, Anthony; Rosenfeld, Lisa; Henley, Lauren; Carew, Peter; Graydon, Kelley; O'Hare, Fleur; Mackey, David A

    2012-03-01

    The aims of this study are to investigate whether auditory dysfunction is part of the spectrum of neurological abnormalities associated with Leber's hereditary optic neuropathy (LHON) and to determine the perceptual consequences of auditory neuropathy (AN) in affected listeners. Forty-eight subjects confirmed by genetic testing as having one of four mitochondrial mutations associated with LHON (mt11778, mtDNA14484, mtDNA14482 and mtDNA3460) participated. Thirty-two of these had lost vision, and 16 were asymptomatic at the point of data collection. While the majority of individuals showed normal sound detection, >25% (of both symptomatic and asymptomatic participants) showed electrophysiological evidence of AN with either absent or severely delayed auditory brainstem potentials. Abnormalities were observed for each of the mutations, but subjects with the mtDNA11778 type were the most affected. Auditory perception was also abnormal in both symptomatic and asymptomatic subjects, with >20% of cases showing impaired detection of auditory temporal (timing) cues and >30% showing abnormal speech perception both in quiet and in the presence of background noise. The findings of this study indicate that a relatively high proportion of individuals with the LHON genetic profile may suffer functional hearing difficulties due to neural abnormality in the central auditory pathways.

  9. An anatomical and functional topography of human auditory cortical areas.

    Science.gov (United States)

    Moerel, Michelle; De Martino, Federico; Formisano, Elia

    2014-01-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights into the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that - whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis - the examination of tonotopic maps at the single-subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  10. An anatomical and functional topography of human auditory cortical areas

    Directory of Open Access Journals (Sweden)

    Michelle Moerel

    2014-07-01

    Full Text Available While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights into the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that - whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis - the examination of tonotopic maps at the single-subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e. myelination) as well as of functional properties (e.g. broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  11. Influence of Auditory and Haptic Stimulation in Visual Perception

    Directory of Open Access Journals (Sweden)

    Shunichi Kawabata

    2011-10-01

    Full Text Available While many studies have shown that visual information affects perception in the other modalities, little is known about how auditory and haptic information affect visual perception. In this study, we investigated how auditory, haptic, or combined auditory and haptic stimulation affects visual perception. We used a behavioral task based on the subjects observing the phenomenon of two identical visual objects moving toward each other, overlapping and then continuing their original motion. Subjects may perceive the objects as either streaming past each other or bouncing and reversing their direction of motion. With only the visual motion stimulus, subjects usually report the objects as streaming, whereas if a sound or flash is played when the objects touch each other, subjects report the objects as bouncing (Bounce-Inducing Effect). In this study, “auditory stimulation”, “haptic stimulation” or “haptic and auditory stimulation” were presented at various times relative to the visual overlap of the objects. Our results show that the bouncing rate was highest when haptic and auditory stimulation were presented together. This result suggests that the Bounce-Inducing Effect is enhanced by simultaneous multimodal presentation alongside visual motion. In the future, a neuroscience approach (e.g., TMS, fMRI) may be required to elucidate the brain mechanisms underlying this effect.

  12. Verrucous Carcinoma in External Auditory Canal – A Rare Case

    Directory of Open Access Journals (Sweden)

    Md Zillur Rahman

    2013-05-01

    Full Text Available Verrucous carcinoma is a variant of squamous cell carcinoma. It is of low-grade malignancy and rarely presents with distant metastasis. The oral cavity is the commonest site of this tumour; other sites are the larynx, oesophagus and genitalia. Verrucous carcinoma in the external auditory canal is extremely rare. This is the presentation of a 45-year-old woman who came to the ENT & Head Neck Surgery department of Delta Medical College, Dhaka, Bangladesh with a discharging left ear and impairment of hearing on the same side for 7 years. Otoscopic examination showed a mass occupying almost the whole of the external auditory canal, and the overlying skin was thickened, papillary and blackish. Cytology from an external auditory canal scraping showed hyperkeratosis and parakeratosis. The external auditory canal bone was found to be eroded in some parts. Excision of the mass was done under the microscope. Split-thickness skin grafting was done in the external auditory canal. The mass was diagnosed as verrucous carcinoma on histopathological examination. Afterwards she was given radiotherapy. Six months of follow-up showed no recurrence and healthy epithelialization of the external auditory canal.

  13. Never too late? An advantage on tests of auditory attention extends to late bilinguals.

    Directory of Open Access Journals (Sweden)

    Thomas Hieronymus Bak

    2014-05-01

    Full Text Available Recent studies, using predominantly visual tasks, indicate that early bilinguals outperform monolinguals on attention tests. It remains less clear whether such advantages extend to those bilinguals who have acquired their second language later in life. We examined this question in 38 monolingual and 60 bilingual university students. The bilingual group was further subdivided into early childhood, late childhood and early adulthood bilinguals. The assessment consisted of five subtests from the clinically validated Test of Everyday Attention (TEA). Overall, bilinguals outperformed monolinguals on auditory attention tests, but not on visual search tasks. The latter observation suggests that the differences between bilinguals and monolinguals are specific and not due to a generally higher cognitive performance in bilinguals. Within the bilingual group, early childhood bilinguals showed a larger advantage on attention switching, late childhood/early adulthood bilinguals on selective attention. We conclude that the bilingual advantage extends into the auditory domain and is not confined to childhood bilinguals, although its scope might be slightly different in early and late bilinguals.

  14. Correlation of auditory brain stem response and the MRI measurements in neuro-degenerative disorders

    Energy Technology Data Exchange (ETDEWEB)

    Kamei, Hidekazu (Tokyo Women's Medical Coll. (Japan))

    1989-06-01

    The purpose of this study was to elucidate correlations between several MRI measurements of the cranium and brain, functioning as a volume conductor, and the auditory brain stem response (ABR) in neuro-degenerative disorders. The subjects included forty-seven patients with spinocerebellar degeneration (SCD) and sixteen with amyotrophic lateral sclerosis (ALS). Statistically significant positive correlations were found between the I-V and III-V interpeak latencies (IPLs) and the area of the cranium and brain in the longitudinal section of SCD patients, and between the I-III and III-V IPLs and the area in the longitudinal section of those with ALS. There were also statistically significant correlations between the amplitude of the V wave and the area of the brain stem as well as that of the cranium in the longitudinal section of SCD patients, and between the amplitude of the V wave and the area of the cerebrum in the longitudinal section of ALS patients. In conclusion, in the ABR, the IPLs were prolonged and the amplitude of the V wave was decreased as the MRI size of the cranium and brain increased. When the ABR is applied to neuro-degenerative disorders, it might be important to consider not only the conduction of the auditory tracts in the brain stem, but also the size of the cranium and brain, which act as a volume conductor. (author).

  15. Segregation of vowels and consonants in human auditory cortex: Evidence for distributed hierarchical organization

    Directory of Open Access Journals (Sweden)

    Jonas Obleser

    2010-12-01

    Full Text Available The speech signal consists of a continuous stream of consonants and vowels, which must be de- and encoded in human auditory cortex to ensure the robust recognition and categorization of speech sounds. We used small-voxel functional magnetic resonance imaging (fMRI) to study information encoded in local brain activation patterns elicited by consonant-vowel syllables, and by a control set of noise bursts. First, activation of anterior-lateral superior temporal cortex was seen when controlling for unspecific acoustic processing (syllables versus band-passed noises), in a classic subtraction-based design. Second, a classifier algorithm, which was trained and tested iteratively on data from all subjects to discriminate local brain activation patterns, yielded separations of cortical patches discriminative of vowel category versus patches discriminative of stop-consonant category across the entire superior temporal cortex, yet with regional differences in average classification accuracy. Overlap (voxels correctly classifying both speech sound categories) was surprisingly sparse. Third, lending further plausibility to the results, classification of speech-noise differences was generally superior to speech-speech classifications, with the notable exception of a left anterior region, where speech-speech classification accuracies were significantly better. These data demonstrate that acoustic-phonetic features are encoded in complex yet sparsely overlapping local patterns of neural activity distributed hierarchically across different regions of the auditory cortex. The redundancy apparent in these multiple patterns may partly explain the robustness of phonemic representations.
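
    The classifier analysis described above is a form of multivoxel pattern analysis; the sketch below shows the generic shape of such a decoding step on synthetic data (a hypothetical stand-in: simple 5-fold cross-validation with a logistic-regression classifier rather than the study's iterative across-subject training scheme).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic "activation patterns": 80 trials x 200 voxels, two speech-sound
# categories, with a weak category-dependent signal in a few voxels.
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)        # e.g. vowel vs stop consonant
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5                # informative voxels

# Cross-validated decoding accuracy of the local activation patterns.
accuracy = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=5)
print("mean decoding accuracy:", accuracy.mean().round(2))
```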

  16. Amygdalar auditory neurons contribute to self-other distinction during ultrasonic social vocalization in rats

    Directory of Open Access Journals (Sweden)

    Jumpei Matsumoto

    2016-09-01

    Full Text Available Although clinical studies have reported hyperactivation of the auditory system and amygdala in patients with auditory hallucinations (hearing others' but not one's own voice, independent of any external stimulus), the neural mechanisms of self/other attribution are not well understood. We recorded neuronal responses in the dorsal amygdala, including the lateral amygdaloid nucleus, to ultrasonic vocalizations (USVs) emitted by subjects and conspecifics during free social interaction in 16 adult male rats. The animals emitting the USVs were identified by EMG recordings. One-quarter of the amygdalar neurons (15/60) responded to 50 kHz calls by the subject and/or conspecifics. Among the responsive neurons, most (Type-Other neurons; 73%, 11/15) responded only to calls by conspecifics but not by the subject. Two Type-Self neurons (13%, 2/15) responded to calls by the subject but not to those by conspecifics, although their response selectivity to subject vs. conspecific calls was lower than that of Type-Other neurons. The remaining two neurons (13%) responded to calls by both the subject and conspecifics. Furthermore, population coding of the amygdalar neurons represented the distinction between subject and conspecific calls. The present results provide the first neurophysiological evidence that the amygdala discriminately represents affective social calls by the subject and conspecifics. These findings suggest that the amygdala is an important brain region for self/other attribution. Furthermore, pathological activation of the amygdala, where Type-Other neurons predominate, could induce external misattribution of percepts of vocalization.

  17. Basic auditory processing is related to familial risk, not to reading fluency: an ERP study.

    Science.gov (United States)

    Hakvoort, Britt; van der Leij, Aryan; Maurits, Natasha; Maassen, Ben; van Zuijen, Titia L

    2015-02-01

    Less proficient basic auditory processing has previously been connected to dyslexia. However, it is unclear whether a low proficiency level is a correlate of having a familial risk for reading problems, or whether it causes dyslexia. In this study, children's processing of amplitude rise time (ART), intensity and frequency differences was measured with event-related potentials (ERPs). The ERP components of interest are components reflective of auditory change detection: the mismatch negativity (MMN) and late discriminative negativity (LDN). All groups had an MMN to changes in ART and frequency, but not to intensity. Our results indicate that fluent readers at risk for dyslexia, poor readers at risk for dyslexia and fluent-reading controls have an LDN to changes in ART and frequency, though the scalp activation of frequency processing was different for familial-risk children. On intensity, only controls showed an LDN. Contrary to previous findings, our results suggest that neither ART nor frequency processing is related to reading fluency. Furthermore, our results imply that diminished sensitivity to changes in intensity and differential lateralization of frequency processing should be regarded as correlates of being at familial risk for dyslexia that do not directly relate to reading fluency.

  18. Synaptic proteome changes in mouse brain regions upon auditory discrimination learning.

    Science.gov (United States)

    Kähne, Thilo; Kolodziej, Angela; Smalla, Karl-Heinz; Eisenschmidt, Elke; Haus, Utz-Uwe; Weismantel, Robert; Kropf, Siegfried; Wetzel, Wolfram; Ohl, Frank W; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D

    2012-08-01

    Changes in synaptic efficacy underlying learning and memory processes are assumed to be associated with alterations of the protein composition of synapses. Here, we performed a quantitative proteomic screen to monitor changes in the synaptic proteome of four brain areas (auditory cortex, frontal cortex, hippocampus, striatum) during auditory learning. Mice were trained in a shuttle-box GO/NO-GO paradigm to discriminate between rising and falling frequency-modulated tones to avoid mild electric foot shock. Control-treated mice received corresponding numbers of either the tones or the foot shocks. Six and 24 h later, the composition of a fraction enriched in synaptic cytomatrix-associated proteins was compared to that obtained from naïve mice by quantitative mass spectrometry. In the synaptic protein fraction obtained from trained mice, the average percentage (±SEM) of downregulated proteins (59.9 ± 0.5%) exceeded that of upregulated proteins (23.5 ± 0.8%) in the brain regions studied. This effect was significantly smaller in foot shock (42.7 ± 0.6% down, 40.7 ± 1.0% up) and tone controls (43.9 ± 1.0% down, 39.7 ± 0.9% up). These data suggest that learning processes initially induce removal and/or degradation of proteins from presynaptic and postsynaptic cytoskeletal matrices before these structures can acquire a new, post-learning organisation. In silico analysis points to a general role of insulin-like signalling in this process.

  19. Effects of Electrode Position on Spatiotemporal Auditory Nerve Fiber Responses: A 3D Computational Model Study

    Directory of Open Access Journals (Sweden)

    Soojin Kang

    2015-01-01

    Full Text Available A cochlear implant (CI) is an auditory prosthesis that enables hearing by providing electrical stimuli through an electrode array. It has been previously established that the electrode position can influence CI performance. Thus, electrode position should be considered in order to achieve better CI results. This paper describes how the electrode position influences the auditory nerve fiber (ANF) response to either a single pulse or low-rate (250 pulses/s) and high-rate (5,000 pulses/s) pulse trains using a computational model. The field potential in the cochlea was calculated using a three-dimensional finite-element model, and the ANF response was simulated using a biophysical ANF model. The effects were evaluated in terms of the dynamic range, stochasticity, and spike excitation pattern. The relative spread, threshold, jitter, and initiated node were analyzed for the single-pulse response; and the dynamic range, threshold, initiated node, and interspike interval were analyzed for the pulse-train responses. Electrode position was found to significantly affect the spatiotemporal pattern of the ANF response, and this effect was significantly dependent on the stimulus rate. We believe that these modeling results can provide guidance regarding perimodiolar and lateral insertion of CIs in clinical settings and help understand CI performance.
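
    Two of the single-pulse measures mentioned above, threshold and relative spread, can be illustrated with a toy stochastic-threshold fiber: on each trial a Gaussian perturbation is added to a deterministic threshold, a firing-efficiency curve is built over stimulus current, and the relative spread is the noise standard deviation expressed as a fraction of the 50% threshold. The parameter values below are arbitrary assumptions, not outputs of the paper's 3D model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stochastic-threshold fiber (hypothetical values, arbitrary current units).
det_threshold = 1.0   # deterministic threshold
noise_sd = 0.06       # trial-to-trial threshold fluctuation

currents = np.linspace(0.8, 1.2, 41)
n_trials = 500

# Firing efficiency: fraction of trials on which the stimulus exceeds the
# (noisy) threshold at each current level.
firing_efficiency = np.array([
    np.mean(current > det_threshold + noise_sd * rng.standard_normal(n_trials))
    for current in currents
])

# Threshold = current closest to 50% firing efficiency;
# relative spread (RS) = threshold noise as a fraction of that threshold.
threshold = currents[np.argmin(np.abs(firing_efficiency - 0.5))]
relative_spread = noise_sd / threshold
print(f"threshold ~ {threshold:.3f}, relative spread ~ {relative_spread:.3f}")
```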

  20. Broadband onset inhibition can suppress spectral splatter in the auditory brainstem.

    Directory of Open Access Journals (Sweden)

    Martin J Spencer

    Full Text Available In vivo intracellular responses to auditory stimuli revealed that, in a particular population of cells of the ventral nucleus of the lateral lemniscus (VNLL) of rats, fast inhibition occurred before the first action potential. These experimental data were used to constrain a leaky integrate-and-fire (LIF) model of the neurons in this circuit. The post-synaptic potentials of the VNLL cell population were characterized using a method of triggered averaging. Analysis suggested that these inhibited VNLL cells produce action potentials in response to a particular magnitude of the rate of change of their membrane potential. The LIF model was modified to incorporate the VNLL cells' distinctive action potential production mechanism. The model was used to explore the response of the population of VNLL cells to simple speech-like sounds. These sounds consisted of a simple tone modulated by a sawtooth with exponential decays, similar to glottal pulses, the repeated impulses seen in vocalizations. It was found that the harmonic component of the sound was enhanced in the VNLL cell population when compared to a population of auditory nerve fibers. This was because the broadband onset noise, also termed spectral splatter, was suppressed by the fast onset inhibition. This mechanism has the potential to greatly improve the clarity of the representation of the harmonic content of certain kinds of natural sounds.
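
    The suppression mechanism can be caricatured with a plain leaky integrate-and-fire neuron: an onset transient ("splatter") rides on a sustained harmonic drive, and a fast inhibitory input arriving just after stimulus onset cancels the transient so that the first spike reflects the sustained component instead. The time constants and amplitudes below are illustrative guesses, not the fitted VNLL model (which also used a dV/dt spike criterion).

```python
import numpy as np

def lif_spike_times(excitation, inhibition, dt=1e-4, tau=5e-3, v_thresh=1.0):
    """Leaky integrate-and-fire membrane driven by excitatory minus inhibitory
    input; returns spike times in seconds (reset to 0 after each spike)."""
    v, spikes = 0.0, []
    for i, (exc, inh) in enumerate(zip(excitation, inhibition)):
        v += (-v + exc - inh) * dt / tau
        if v >= v_thresh:
            spikes.append(round(i * dt, 4))
            v = 0.0
    return spikes

dt = 1e-4
t = np.arange(0.0, 0.05, dt)
onset = 0.005  # stimulus onset (s)

# Drive: sustained "harmonic" component plus a brief broadband onset transient.
harmonic = 1.2 * (t > onset)
splatter = 3.0 * np.exp(-(t - onset) / 0.001) * (t > onset)
excitation = harmonic + splatter

# Fast inhibition arriving ~0.5 ms after onset, decaying over ~2 ms.
onset_inhibition = 3.0 * np.exp(-(t - onset - 0.0005) / 0.002) * (t > onset + 0.0005)

print("first spikes, no inhibition  :", lif_spike_times(excitation, np.zeros_like(t))[:3])
print("first spikes, with inhibition:", lif_spike_times(excitation, onset_inhibition)[:3])
```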

  1. Automaticity and primacy of auditory streaming: Concurrent subjective and objective measures.

    Science.gov (United States)

    Billig, Alexander J; Carlyon, Robert P

    2016-03-01

    Two experiments used subjective and objective measures to study the automaticity and primacy of auditory streaming. Listeners heard sequences of "ABA-" triplets, where "A" and "B" were tones of different frequencies and "-" was a silent gap. Segregation was more frequently reported, and rhythmically deviant triplets less well detected, for a greater between-tone frequency separation and later in the sequence. In Experiment 1, performing a competing auditory task for the first part of the sequence led to a reduction in subsequent streaming compared to when the tones were attended throughout. This is consistent with focused attention promoting streaming, and/or with attention switches resetting it. However, the proportion of segregated reports increased more rapidly following a switch than at the start of a sequence, indicating that some streaming occurred automatically. Modeling ruled out a simple "covert attention" account of this finding. Experiment 2 required listeners to perform subjective and objective tasks concurrently. It revealed superior performance during integrated compared to segregated reports, beyond that explained by the codependence of the two measures on stimulus parameters. We argue that listeners have limited access to low-level stimulus representations once perceptual organization has occurred, and that subjective and objective streaming measures partly index the same processes.
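
    For readers unfamiliar with the stimulus, an "ABA-" triplet sequence is easy to generate: two pure tones separated by some number of semitones, arranged as A, B, A, silence, and repeated. The sketch below builds such a sequence (tone frequencies, durations, and sample rate are arbitrary choices for illustration, not the study's exact parameters).

```python
import numpy as np

def tone(freq_hz, dur_s, sample_rate=44100, amp=0.3):
    """A pure tone with a short linear on/off ramp to avoid clicks."""
    t = np.arange(int(dur_s * sample_rate)) / sample_rate
    y = amp * np.sin(2 * np.pi * freq_hz * t)
    ramp = min(len(y) // 10, 441)
    y[:ramp] *= np.linspace(0, 1, ramp)
    y[-ramp:] *= np.linspace(1, 0, ramp)
    return y

def aba_sequence(f_a=500.0, semitone_sep=6, tone_dur=0.1, n_triplets=20):
    """Concatenate 'ABA-' triplets; a larger semitone_sep favours segregation."""
    f_b = f_a * 2 ** (semitone_sep / 12)
    silence = np.zeros(int(tone_dur * 44100))
    triplet = np.concatenate([tone(f_a, tone_dur), tone(f_b, tone_dur),
                              tone(f_a, tone_dur), silence])
    return np.tile(triplet, n_triplets)

sequence = aba_sequence()
print(f"{sequence.size / 44100:.1f} s of ABA- stimulus generated")
```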

  2. Using EEG and stimulus context to probe the modelling of auditory-visual speech.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2016-02-01

    We investigated whether internal models of the relationship between lip movements and corresponding speech sounds [Auditory-Visual (AV) speech] could be updated via experience. AV associations were indexed by early and late event-related potentials (ERPs) and by oscillatory power and phase locking. Different AV experience was produced via a context manipulation. Participants were presented with valid (the conventional pairing) and invalid AV speech items in either a 'reliable' context (80% AVvalid items) or an 'unreliable' context (80% AVinvalid items). The results showed that for the reliable context, there was N1 facilitation for AV compared to auditory-only speech. This N1 facilitation was not affected by AV validity. Later ERPs showed a difference in amplitude between valid and invalid AV speech, and there was significant enhancement of power for valid versus invalid AV speech. These response patterns did not change over the context manipulation, suggesting that the internal models of AV speech were not updated by experience. The results also showed that the facilitation of N1 responses did not vary as a function of the salience of visual speech (as previously reported); in post-hoc analyses, it appeared instead that N1 facilitation varied according to the relative time of the acoustic onset, suggesting that for AV events the N1 may be more sensitive to the relative timing of the AV components than to their form.

  3. Lateral inhibition during nociceptive processing

    DEFF Research Database (Denmark)

    Quevedo, Alexandre S.; Mørch, Carsten Dahl; Andersen, Ole Kæseler

    2017-01-01

    Spatial summation of pain is the increase of perceived intensity that occurs as the stimulated area increases. Spatial summation of pain is sub-additive in that increasing the stimulus area produces a disproportionately small increase in the perceived intensity of pain. A possible explanation for sub-additive summation may be that convergent excitatory information is modulated by lateral inhibition. To test the hypothesis that lateral inhibition may limit spatial summation of pain, we delivered different patterns of noxious thermal stimuli to the abdomens of 15 subjects using a computer… …of skin. Thus, the stimulation of the skin region between the endpoints of the lines appears to produce inhibition. These findings indicate that lateral inhibition limits spatial summation of pain and is an intrinsic component of nociceptive information processing. Disruption of such lateral inhibition…

  4. Lateral gene transfer, rearrangement, reconciliation

    NARCIS (Netherlands)

    Patterson, M.D.; Szollosi, G.; Daubin, V.; Tannier, E.

    2013-01-01

    Background. Models of ancestral gene order reconstruction have progressively integrated different evolutionary patterns and processes such as unequal gene content, gene duplications, and implicitly sequence evolution via reconciled gene trees. These models have so far ignored lateral gene transfer,

  5. Diagnosing and treating lateral epicondylitis.

    OpenAIRE

    1994-01-01

    Lateral epicondylitis is often encountered in primary care. Although its diagnosis can be fairly straightforward, its treatment is often difficult. This review examines the epidemiology, pathophysiology, and clinical presentation of tennis elbow. Management options are discussed.

  6. Cerebral Laterality and Verbal Processes

    Science.gov (United States)

    Sherman, Jay L.; And Others

    1976-01-01

    Research suggests that we process information by way of two distinct and functionally separate coding systems. Their location, somewhat dependent on cerebral laterality, varies in right- and left-handed persons. Tests this dual coding model. (Editor/RK)

  7. Selective increase of auditory cortico-striatal coherence during auditory-cued Go/NoGo discrimination learning.

    Directory of Open Access Journals (Sweden)

    Andreas L. Schulz

    2016-01-01

    Full Text Available Goal-directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task-relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still weakly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttle box to discriminate between falling and rising frequency-modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.
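
    Field-field coherence of the kind used here is typically the magnitude-squared coherence between two local field potentials; the sketch below computes it for two synthetic signals that share an 8 Hz component (the sampling rate, frequency, and coupling strength are made-up values for illustration only).

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)

# Synthetic "LFPs": a shared oscillation buried in independent noise.
shared = np.sin(2 * np.pi * 8.0 * t)
lfp_cortex = shared + rng.standard_normal(t.size)
lfp_striatum = 0.7 * shared + rng.standard_normal(t.size)

# Magnitude-squared coherence via Welch's method.
freqs, coh = coherence(lfp_cortex, lfp_striatum, fs=fs, nperseg=1024)
print(f"peak coherence {coh.max():.2f} at ~{freqs[np.argmax(coh)]:.1f} Hz")
```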

  8. Behavioral modulation of neural encoding of click-trains in the primary and nonprimary auditory cortex of cats.

    Science.gov (United States)

    Dong, Chao; Qin, Ling; Zhao, Zhenling; Zhong, Renjia; Sato, Yu

    2013-08-07

    Neural representation of acoustic stimuli in the mammalian auditory cortex (AC) has been extensively studied using anesthetized or awake non-behaving animals. Recently, several studies have shown that active engagement in an auditory behavioral task can substantially change neuronal response properties compared with when animals were passively listening to the same sounds; however, these studies mainly investigated the effect of behavioral state on the primary auditory cortex, and the reported effects were inconsistent. Here, we examined the single-unit spike activities in both the primary and nonprimary areas along the dorsal-to-ventral direction of the cat's AC, when the cat was actively discriminating click-trains at different repetition rates and when it was passively listening to the same stimuli. We found that the changes due to task engagement were heterogeneous in the primary AC; some neurons showed significant increases in driven firing rate, others showed decreases. But in the nonprimary AC, task engagement predominantly enhanced the neural responses, resulting in a substantial improvement of the neural discriminability of click-trains. Additionally, our results revealed that neural responses synchronizing to click-trains gradually decreased along the dorsal-to-ventral direction of the cat AC, while nonsynchronizing responses remained less changed. The present study provides new insights into the hierarchical organization of the AC along the dorsal-to-ventral direction and highlights the importance of using behaving animals to investigate the later stages of cortical processing.

  9. A rare cause of conductive hearing loss: High lateralized jugular bulb with bony dehiscence.

    Science.gov (United States)

    Barr, James G; Singh, Pranay K

    2016-06-01

    We present a rare case of pediatric conductive hearing loss due to a high lateralized jugular bulb. An 8-year-old boy with a right-sided conductive hearing loss of 40 dB was found to have a pink bulge toward the inferior part of the right eardrum. Computed tomography showed a high, lateralized right jugular bulb that had a superolaterally pointing diverticulum that bulged into the lower mesotympanum and posterior external auditory meatus. It was explained to the child's parents that it is important never to put any sharp objects into the ears because of the risk of injury to the jugular vein. A high, lateralized jugular bulb with a diverticulum is a rare anatomic abnormality. Correct diagnosis of this abnormality is important so that inappropriate intervention does not occur.

  10. Electrical brain imaging evidences left auditory cortex involvement in speech and non-speech discrimination based on temporal features

    Directory of Open Access Journals (Sweden)

    Jancke Lutz

    2007-12-01

    Full Text Available Abstract Background Speech perception is based on a variety of spectral and temporal acoustic features available in the acoustic signal. Voice-onset time (VOT) is considered an important cue that is cardinal for phonetic perception. Methods In the present study, we recorded and compared scalp auditory evoked potentials (AEP) in response to consonant-vowel syllables (CV) with varying voice-onset times (VOT) and non-speech analogues with varying noise-onset times (NOT). In particular, we aimed to investigate the spatio-temporal pattern of acoustic feature processing underlying elemental speech perception and relate this temporal processing mechanism to specific activations of the auditory cortex. Results Results show that the characteristic AEP waveform in response to consonant-vowel syllables is on a par with those of non-speech sounds with analogous temporal characteristics. The amplitude of the N1a and N1b components of the auditory evoked potentials significantly correlated with the duration of the VOT in CV and, likewise, with the duration of the NOT in non-speech sounds. Furthermore, current density maps indicate overlapping supratemporal networks involved in the perception of both speech and non-speech sounds, with a bilateral activation pattern during the N1a time window and leftward asymmetry during the N1b time window. Elaborate regional statistical analysis of the activation over the middle and posterior portions of the supratemporal plane (STP) revealed strong left-lateralized responses over the middle STP for both the N1a and N1b components, and a functional leftward asymmetry over the posterior STP for the N1b component. Conclusion The present data demonstrate overlapping spatio-temporal brain responses during the perception of temporal acoustic cues in both speech and non-speech sounds. Source estimation evidences a preponderant role of the left middle and posterior auditory cortex in speech and non-speech discrimination based on temporal

  11. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia.

    Science.gov (United States)

    Kuga, Hironori; Onitsuka, Toshiaki; Hirano, Yoji; Nakamura, Itta; Oribe, Naoya; Mizuhara, Hiroaki; Kanai, Ryota; Kanba, Shigenobu; Ueno, Takefumi

    2016-10-01

    Recent MRI studies have shown that schizophrenia is characterized by reductions in brain gray matter, which progress in the acute state of the disease. Cortical circuitry abnormalities in gamma oscillations, such as deficits in the auditory steady state response (ASSR) to gamma frequency (>30-Hz) stimulation, have also been reported in schizophrenia patients. In the current study, we investigated neural responses to click stimulation using BOLD signals. We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.

  12. Using Facebook to Reach People Who Experience Auditory Hallucinations

    Science.gov (United States)

    Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience auditory hallucinations.

  13. Surgical Procedures for External Auditory Canal Carcinoma and the Preservation of Postoperative Hearing

    Directory of Open Access Journals (Sweden)

    Hiroshi Hoshikawa

    2012-01-01

    Full Text Available Carcinoma of the external auditory canal (EAC) is an unusual head and neck malignancy. The pathophysiology of these tumors is different from other skin lesions because of their anatomical and functional characteristics. Early-stage carcinoma of the EAC can generally be cured by surgical treatment, and reconstruction of the EAC with a tympanoplasty can help to retain hearing, thus improving the patients' quality of life. In this study, we present two cases of early-stage carcinoma of the EAC treated by canal reconstruction using skin grafts after lateral temporal bone resection. A rolled-up skin graft with a temporal muscle flap was useful for maintaining the canal's form and preserving postoperative hearing. An adequate size of the skin graft and blood supply to the graft bed are important for achieving a successful operation.

  14. Auditory place theory and frequency difference limen

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jialu

    2006-01-01

    Since the place theory of hearing was proposed in the 19th century, a persistent objection has been that the place code is far too coarse a mechanism to account for the finest frequency difference limens. A place correlation model, which takes full account of the energy distribution of a pure tone across neighboring auditory filter bands, is presented in this paper. The model, based on the place theory and on experimental psychophysical tuning curves of hearing, can readily explain the finest frequency difference limen (about 0.02 or 0.3% at 1000 Hz). Using a standard 1/3 octave filter bank, the relationship between Δf, the offset of an input pure tone from the centre frequency of the K-th filter band, and ΔE, the output intensity difference between the K-th and (K + 1)-th filters, was established in order to demonstrate the fine frequency detection ability of the filter bank. The model can also be used to extract the fundamental frequency of speech and to measure the frequency of pure tones precisely.
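
    A minimal numerical sketch of this idea follows, assuming a hypothetical Gaussian-shaped filter magnitude response on a log-frequency axis rather than the standardized 1/3-octave characteristics used in the paper: shifting a pure tone by Δf away from the centre of filter K changes the level difference ΔE between the outputs of filters K and K+1 by a readily measurable amount.

```python
import numpy as np

def filter_gain_db(f_tone, f_center, sigma_oct=1.0 / 6.0):
    """Hypothetical Gaussian-shaped filter magnitude response (dB) on a log2-frequency axis."""
    d_oct = np.log2(f_tone / f_center)           # distance from the centre, in octaves
    return -10.0 * (d_oct / sigma_oct) ** 2      # toy roll-off, not a standardized filter

f_k = 1000.0                                     # centre of the K-th 1/3-octave band
f_k1 = f_k * 2.0 ** (1.0 / 3.0)                  # centre of the (K+1)-th band

for df in [0.0, 1.0, 3.0, 10.0, 30.0]:           # tone offset from f_k, in Hz
    f_tone = f_k + df
    dE = filter_gain_db(f_tone, f_k) - filter_gain_db(f_tone, f_k1)
    print(f"df = {df:5.1f} Hz  ->  Delta E = {dE:6.2f} dB")
```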

  15. Theory of Auditory Thresholds in Primates

    Science.gov (United States)

    Harrison, Michael J.

    2001-03-01

    The influence of thermal pressure fluctuations at the tympanic membrane has been previously investigated as a possible determinant of the threshold of hearing in humans (L.J. Sivian and S.D. White, J. Acoust. Soc. Am. IV, 4;288 (1933)). More recent work has focussed more precisely on the relation between statistical mechanics and sensory signal processing by biological means in creatures' brains (W. Bialek, in ``Physics of Biological Systems: from molecules to species'', H. Flyvberg et al, (Eds), p. 252; Springer 1997). Clinical data on the frequency dependence of hearing thresholds in humans and other primates (W.C. Stebbins, ``The Acoustic Sense of Animals'', Harvard 1983) have long been available. I have derived an expression for the frequency dependence of hearing thresholds in primates, including humans, by first calculating the frequency dependence of thermal pressure fluctuations at eardrums from damped normal modes excited in model ear canals of given simple geometry. I then show that most of the features of the clinical data are directly related to the frequency dependence of the ratio of thermal noise pressure arising from without to that arising from within the masking bandwidth which signals must dominate in order to be sensed. The higher intensity of threshold signals in primates smaller than humans, which is clinically observed over much but not all of the human auditory spectrum, is shown to arise from their smaller meatus dimensions.

  16. Elastic modulus of cetacean auditory ossicles.

    Science.gov (United States)

    Tubelli, Andrew A; Zosuls, Aleks; Ketten, Darlene R; Mountain, David C

    2014-05-01

    In order to model the hearing capabilities of marine mammals (cetaceans), it is necessary to understand the mechanical properties, such as elastic modulus, of the middle ear bones in these species. Biologically realistic models can be used to investigate the biomechanics of hearing in cetaceans, much of which is currently unknown. In the present study, the elastic moduli of the auditory ossicles (malleus, incus, and stapes) of eight species of cetacean, two baleen whales (mysticete) and six toothed whales (odontocete), were measured using nanoindentation. The two groups of mysticete ossicles overall had lower average elastic moduli (35.2 ± 13.3 GPa and 31.6 ± 6.5 GPa) than the groups of odontocete ossicles (53.3 ± 7.2 GPa to 62.3 ± 4.7 GPa). Interior bone generally had a higher modulus than cortical bone by up to 36%. The effects of freezing and formalin-fixation on elastic modulus were also investigated, although samples were few and no clear trend could be discerned. The high elastic modulus of the ossicles and the differences in the elastic moduli between mysticetes and odontocetes are likely specializations in the bone for underwater hearing.

  17. Structured Counseling for Auditory Dynamic Range Expansion.

    Science.gov (United States)

    Gold, Susan L; Formby, Craig

    2017-02-01

    A structured counseling protocol is described that, when combined with low-level broadband sound therapy from bilateral sound generators, offers audiologists a new tool for facilitating the expansion of the auditory dynamic range (DR) for loudness. The protocol and its content are specifically designed to address and treat problems that impact hearing-impaired persons who, due to their reduced DRs, may be limited in the use and benefit of amplified sound from hearing aids. The reduced DRs may result from elevated audiometric thresholds and/or reduced sound tolerance as documented by lower-than-normal loudness discomfort levels (LDLs). Accordingly, the counseling protocol is appropriate for challenging and difficult-to-fit persons with sensorineural hearing losses who experience loudness recruitment or hyperacusis. Positive treatment outcomes for individuals with the former and latter conditions are highlighted in this issue by incremental shifts (improvements) in LDL and/or categorical loudness judgments, associated reduced complaints of sound intolerance, and functional improvements in daily communication, speech understanding, and quality of life leading to improved hearing aid benefit, satisfaction, and aided sound quality, posttreatment.

  18. Auditory free classification of nonnative speech

    Science.gov (United States)

    Atagi, Eriko; Bent, Tessa

    2013-01-01

    Through experience with speech variability, listeners build categories of indexical speech characteristics including categories for talker, gender, and dialect. The auditory free classification task—a task in which listeners freely group talkers based on audio samples—has been a useful tool for examining listeners’ representations of some of these characteristics including regional dialects and different languages. The free classification task was employed in the current study to examine the perceptual representation of nonnative speech. The category structure and salient perceptual dimensions of nonnative speech were investigated from two perspectives: general similarity and perceived native language background. Talker intelligibility and whether native talkers were included were manipulated to test stimulus set effects. Results showed that degree of accent was a highly salient feature of nonnative speech for classification based on general similarity and on perceived native language background. This salience, however, was attenuated when listeners were listening to highly intelligible stimuli and attending to the talkers’ native language backgrounds. These results suggest that the context in which nonnative speech stimuli are presented—such as the listeners’ attention to the talkers’ native language and the variability of stimulus intelligibility—can influence listeners’ perceptual organization of nonnative speech. PMID:24363470

  19. Cholesteatoma invasion into the internal auditory canal.

    Science.gov (United States)

    Migirov, Lela; Bendet, Erez; Kronenberg, Jona

    2009-05-01

    Cholesteatoma invasion into the internal auditory canal (IAC) is rare and usually results in irreversible, complete hearing loss and facial paralysis on the affected side. This retrospective study examines the clinical characteristics of seven patients with cholesteatoma invading the IAC, analyzes possible routes of the cholesteatoma's extension and describes the surgical approaches used and patient outcome. Extension to the IAC was via the supralabyrinthine route in most patients. A subtotal petrosectomy, a translabyrinthine approach or a middle cranial fossa approach combined with radical mastoidectomy were required for the complete removal of the cholesteatoma. All seven patients presented with some preoperative facial nerve palsy. The facial nerve was decompressed in four patients and facial nerve repair was performed in three others, two by hypoglossal-facial anastomosis and one by a greater auricular nerve interposition grafting. All patients ended up with total deafness in the operated ear. At 1 year following surgery, the facial nerve function was House-Brackmann grade III in six cases and grade II in one. In conclusion, cholesteatoma invading the IAC is a separate entity with characteristic clinical presentations; it requires a unique surgical approach and results in significant morbidity, such as total deafness in the operated ear and impaired facial movement.

  20. Theta oscillations accompanying concurrent auditory stream segregation.

    Science.gov (United States)

    Tóth, Brigitta; Kocsis, Zsuzsanna; Urbán, Gábor; Winkler, István

    2016-08-01

    The ability to isolate a single sound source among concurrent sources is crucial for veridical auditory perception. The present study investigated the event-related oscillations evoked by complex tones that could be perceived as a single sound, and by tonal complexes with cues promoting the perception of two concurrent sounds through inharmonicity, onset asynchrony, and/or a perceived source location difference of the component tones. In separate task conditions, participants performed a visual change detection task (visual control), watched a silent movie (passive listening) or reported for each tone whether they perceived one or two concurrent sounds (active listening). In two time windows, the amplitude of theta oscillation was modulated by the presence vs. absence of the cues: 60-350 ms/6-8 Hz (early) and 350-450 ms/4-8 Hz (late). The early response appeared both in the passive and the active listening conditions; it did not closely match the task performance; and it had a fronto-central scalp distribution. The late response was only elicited in the active listening condition; it closely matched the task performance; and it had a centro-parietal scalp distribution. The neural processes reflected by these responses are probably involved in the processing of concurrent sound segregation cues, in sound categorization, and in response preparation and monitoring. The current results are compatible with the notion that theta oscillations mediate some of the processes involved in concurrent sound segregation.

  1. Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Koji Inui

    Full Text Available Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflected inhibition via interneurons using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak (5 dB increase for 1 ms) prepulse. The time course of the inhibition evaluated by prepulses presented at 10-800 ms before the test stimulus showed at least two temporally distinct inhibitions peaking at approximately 20-60 and 600 ms that presumably reflected IPSPs by fast spiking, parvalbumin-positive cells and somatostatin-positive, Martinotti cells, respectively. In another experiment, we confirmed that the degree of the inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and prepulse-evoked inhibition reflected activation in two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.

  2. Functional lateralization of speech processing in adults and children who stutter

    Directory of Open Access Journals (Sweden)

    Yutaka eSato

    2011-04-01

    Full Text Available Developmental stuttering is a speech disorder in fluency characterized by repetitions, prolongations and silent blocks, especially in the initial parts of utterances. Although their symptoms are motor related, people who stutter show abnormal patterns of cerebral hemispheric dominance in both anterior and posterior language areas. It is unknown whether the abnormal functional lateralization in the posterior language area starts during childhood or emerges as a consequence of many years of stuttering. In order to address this issue, we measured the lateralization of hemodynamic responses in the auditory cortex during auditory speech processing in adults and children who stutter, including preschoolers, with near-infrared spectroscopy (NIRS). We used the analysis-resynthesis technique to prepare two types of stimuli: (i) a phonemic contrast embedded in Japanese spoken words (/itta/ vs. /itte/) and (ii) a prosodic contrast (/itta/ vs. /itta?/). In the baseline blocks, only /itta/ tokens were presented. In phonemic contrast blocks, /itta/ and /itte/ tokens were presented pseudo-randomly, and /itta/ and /itta?/ tokens in prosodic contrast blocks. In adults and children who do not stutter, there was a clear left-hemispheric advantage for the phonemic contrast compared to the prosodic contrast. Adults and children who stutter, however, showed no significant difference between the two stimulus conditions. A subject-by-subject analysis revealed that not a single subject who stutters showed a left advantage in the phonemic contrast over the prosodic contrast condition. These results indicate that the functional lateralization for auditory speech processing is in disarray among those who stutter, even at preschool age. These results shed light on the neural pathophysiology of developmental stuttering.

  3. Hierarchical effects of task engagement on amplitude modulation encoding in auditory cortex.

    Science.gov (United States)

    Niwa, Mamiko; O'Connor, Kevin N; Engall, Elizabeth; Johnson, Jeffrey S; Sutter, M L

    2015-01-01

    We recorded from middle lateral belt (ML) and primary (A1) auditory cortical neurons while animals discriminated amplitude-modulated (AM) sounds and also while they sat passively. Engagement in AM discrimination improved ML and A1 neurons' ability to discriminate AM with both firing rate and phase-locking; however, task engagement affected neural AM discrimination differently in the two fields. The results suggest that these two areas utilize different AM coding schemes: a "single mode" in A1 that relies on increased activity for AM relative to unmodulated sounds and a "dual-polar mode" in ML that uses both increases and decreases in neural activity to encode modulation. In the dual-polar ML code, nonsynchronized responses might play a special role. The results are consistent with findings in the primary and secondary somatosensory cortices during discrimination of vibrotactile modulation frequency, implicating a common scheme in the hierarchical processing of temporal information among different modalities. The time course of activity differences between behaving and passive conditions was also distinct in A1 and ML and may have implications for auditory attention. At modulation depths ≥ 16% (approximately behavioral threshold), A1 neurons' improvement in distinguishing AM from unmodulated noise is relatively constant or improves slightly with increasing modulation depth. In ML, improvement during engagement is most pronounced near threshold and disappears at highly suprathreshold depths. This ML effect is evident later in the stimulus, and mainly in nonsynchronized responses. This suggests that attention-related increases in activity are stronger or longer-lasting for more difficult stimuli in ML.

  4. Compression of auditory space during forward self-motion.

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    Full Text Available BACKGROUND: Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of the linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point. In Experiments 1 and 2, the participants indicated which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing the acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial

  5. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
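
    As an illustration of the peak-picking idea described in this record, the following simplified sketch keeps only a fixed number of the largest time-frequency bins per second. It uses an ordinary STFT spectrogram from SciPy and a toy two-tone signal rather than the auditory-spectrogram front end and recorded sounds used in the study:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000
duration = 1.0
t = np.arange(0, duration, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1250 * t)   # toy signal

f, frames, S = spectrogram(x, fs=fs, nperseg=512, noverlap=256)

features_per_second = 10                    # one of the sparsity levels mentioned in the abstract
n_keep = int(features_per_second * duration)

# keep only the n_keep largest time-frequency bins, zero out everything else
threshold = np.sort(S.ravel())[-n_keep]
sketch = np.where(S >= threshold, S, 0.0)

print(f"kept {np.count_nonzero(sketch)} of {S.size} time-frequency bins")
```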

  6. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Science.gov (United States)

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.

  7. Auditory perception of self-similarity in water sounds.

    Directory of Open Access Journals (Sweden)

    Maria Neimark Geffen

    2011-05-01

    Full Text Available Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale invariance across the overall spectro-temporal structure of a sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale invariance in perception by using an artificial sound, which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation. This computation may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measurements of response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.

  8. The Effect of Neonatal Hyperbilirubinemia on the Auditory System

    Directory of Open Access Journals (Sweden)

    Dr. Zahra Jafari

    2008-12-01

    Full Text Available Background and Aim: Hyperbilirubinemia during the neonatal period is known to be an important risk factor for auditory impairment and may result in permanent brain damage if no proper therapeutic intervention is provided. In the present study, electroacoustic and electrophysiologic tests were used to evaluate auditory system function in a group of children with severe neonatal jaundice. Materials and Methods: Forty-five children with a mean age of 16.1 ± 14.81 months and bilirubin levels of 17 mg/dl or higher were studied with transient evoked otoacoustic emission, acoustic reflex, auditory brainstem response and auditory steady-state response tests. Results: The mean bilirubin level was 29.37 ± 8.95 mg/dl; it was lower than 20 mg/dl in 22.2%, between 20-30 mg/dl in 24.4% and more than 30 mg/dl in 48.0% of the children. No therapeutic intervention was reported in 26.7%, phototherapy in 44.4%, and exchange transfusion in 28.9% of the children; a history of hypoxia was present in 48.9% and of preterm birth in 26.6%. TEOAEs were recordable in 71.1% of cases, whereas normal results on the acoustic reflex, ABR and ASSR tests were found in only 11.1% of cases. Clinical symptoms of auditory neuropathy were revealed in 57.7% of the children. Conclusion: Auditory tests sensitive to the sites injured by hyperbilirubinemia are needed to determine the functional effects and severity of the disorder. Because auditory neuropathy/dys-synchrony is common in neonates with hyperbilirubinemia, the OAEs and ABR are the minimum essential tests to identify this disorder.

  9. Optineurin and amyotrophic lateral sclerosis.

    Science.gov (United States)

    Maruyama, Hirofumi; Kawakami, Hideshi

    2013-07-01

    Amyotrophic lateral sclerosis is a devastating disease, and thus it is important to identify the causative gene and resolve the mechanism of the disease. We identified optineurin as a causative gene for amyotrophic lateral sclerosis. We found three types of mutations: a homozygous deletion of exon 5, a homozygous Q398X nonsense mutation and a heterozygous E478G missense mutation within its ubiquitin-binding domain. Optineurin negatively regulates the tumor necrosis factor-α-induced activation of nuclear factor kappa B. Nonsense and missense mutations abolished this function. Mutations related to amyotrophic lateral sclerosis also negated the inhibition of interferon regulatory factor-3. The missense mutation showed a cytoplasmic distribution different from that of the wild type. There are no specific clinical symptoms related to optineurin. However, severe brain atrophy was detected in patients with homozygous deletion. Neuropathologically, an E478G patient showed transactive response DNA-binding protein of 43 kDa-positive neuronal intracytoplasmic inclusions in the spinal and medullary motor neurons. Furthermore, Golgi fragmentation was identified in 73% of this patient's anterior horn cells. In addition, optineurin is colocalized with fused in sarcoma in the basophilic inclusions of amyotrophic lateral sclerosis with fused in sarcoma mutations, and in basophilic inclusion body disease. These findings strongly suggest that optineurin is involved in the pathogenesis of amyotrophic lateral sclerosis.

  10. Simultaneously-evoked auditory potentials (SEAP): A new method for concurrent measurement of cortical and subcortical auditory-evoked activity.

    Science.gov (United States)

    Slugocki, Christopher; Bosnyak, Daniel; Trainor, Laurel J

    2017-03-01

    Recent electrophysiological work has evinced a capacity for plasticity in subcortical auditory nuclei in human listeners. Similar plastic effects have been measured in cortically-generated auditory potentials but it is unclear how the two interact. Here we present Simultaneously-Evoked Auditory Potentials (SEAP), a method designed to concurrently elicit electrophysiological brain potentials from inferior colliculus, thalamus, and primary and secondary auditory cortices. Twenty-six normal-hearing adult subjects (mean 19.26 years, 9 male) were exposed to 2400 monaural (right-ear) presentations of a specially-designed stimulus which consisted of a pure-tone carrier (500 or 600 Hz) that had been amplitude-modulated at the sum of 37 and 81 Hz (depth 100%). Presentation followed an oddball paradigm wherein the pure-tone carrier was set to 500 Hz for 85% of presentations and pseudo-randomly changed to 600 Hz for the remaining 15% of presentations. Single-channel electroencephalographic data were recorded from each subject using a vertical montage referenced to the right earlobe. We show that SEAP elicits a 500 Hz frequency-following response (FFR; generated in inferior colliculus), 80 (subcortical) and 40 (primary auditory cortex) Hz auditory steady-state responses (ASSRs), mismatch negativity (MMN) and P3a (when there is an occasional change in carrier frequency; secondary auditory cortex) in addition to the obligatory N1-P2 complex (secondary auditory cortex). Analyses showed that subcortical and cortical processes are linked as (i) the latency of the FFR predicts the phase delay of the 40 Hz steady-state response, (ii) the phase delays of the 40 and 80 Hz steady-state responses are correlated, and (iii) the fidelity of the FFR predicts the latency of the N1 component. The SEAP method offers a new approach for measuring the dynamic encoding of acoustic features at multiple levels of the auditory pathway. As such, SEAP is a promising tool with which to study how subcortical and cortical auditory processes interact.
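
    A sketch of the stimulus construction described above follows. The 500/600 Hz carriers, the 37 + 81 Hz modulator, the 100% depth and the 85/15 oddball split come from the abstract; the duration, sampling rate, envelope scaling and the simple (rather than pseudo-random) deviant assignment are assumptions for illustration only:

```python
import numpy as np

fs = 44100                       # assumed sampling rate (Hz)
dur = 1.0                        # assumed presentation duration (s)
t = np.arange(0, dur, 1 / fs)

def seap_stimulus(carrier_hz):
    """Pure-tone carrier amplitude-modulated at the sum of 37 and 81 Hz."""
    modulator = 0.5 * (np.sin(2 * np.pi * 37 * t) + np.sin(2 * np.pi * 81 * t))
    envelope = 0.5 * (1.0 + modulator)          # kept within [0, 1]; one reading of "depth 100%"
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

rng = np.random.default_rng(0)
n_trials = 2400
carriers = np.where(rng.random(n_trials) < 0.15, 600.0, 500.0)   # 15% deviant carriers
first_trials = [seap_stimulus(c) for c in carriers[:5]]          # e.g. waveforms of the first 5 trials
```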

  11. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking

    Directory of Open Access Journals (Sweden)

    An-Chieh Chang

    2016-09-01

    Full Text Available Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.

  12. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking.

    Science.gov (United States)

    Chang, An-Chieh; Lutfi, Robert; Lee, Jungmee; Heo, Inseok

    2016-09-18

    Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.
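
    As an illustration of the DKL measure referred to above, the sketch below computes the Kullback-Leibler divergence between two univariate Gaussians, assuming purely for the example that the relevant sequence parameter (e.g., tone frequency) is Gaussian-distributed in each of the two sequences; the parameter values are hypothetical:

```python
import numpy as np

def kl_gaussian(mu_a, sd_a, mu_b, sd_b):
    """D_KL( N(mu_a, sd_a^2) || N(mu_b, sd_b^2) ), in nats."""
    return (np.log(sd_b / sd_a)
            + (sd_a ** 2 + (mu_a - mu_b) ** 2) / (2.0 * sd_b ** 2)
            - 0.5)

# target sequence vs. a masker/background sequence (hypothetical frequency statistics, Hz)
print(kl_gaussian(mu_a=1000.0, sd_a=50.0, mu_b=1100.0, sd_b=50.0))   # well separated
print(kl_gaussian(mu_a=1000.0, sd_a=50.0, mu_b=1010.0, sd_b=50.0))   # nearly overlapping
```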

  13. Lateral epicondylitis of the elbow.

    Science.gov (United States)

    Tosti, Rick; Jennings, John; Sewards, J Milo

    2013-04-01

    Lateral epicondylitis, or "tennis elbow," is a common musculotendinous degenerative disorder of the extensor origin at the lateral humeral epicondyle. Repetitive occupational or athletic activities involving wrist extension and supination are thought to be causative. The typical symptoms include lateral elbow pain, pain with wrist extension, and weakened grip strength. The diagnosis is made clinically through history and physical examination; however, a thorough understanding of the differential diagnosis is imperative to prevent unnecessary testing and therapies. Most patients improve with nonoperative measures, such as activity modification, physical therapy, and injections. A small percentage of patients will require surgical release of the extensor carpi radialis brevis tendon. Common methods of release may be performed via percutaneous, arthroscopic, or open approaches.

  14. Modeling mechanisms that contribute to the precedence effect: From auditory periphery to midbrain

    Science.gov (United States)

    Xia, Jing

    The precedence effect (PE) describes a perceptual phenomenon whereby a pair of temporally close clicks from different directions is perceived as coming from a location near that of the first-arriving sound. The objective of this thesis is to build a physiologically plausible model that predicts perceptual aspects of the PE. The project explores different mechanisms that may contribute to the PE at different levels of the auditory system. The roles of peripheral processing and frequency dominance on the PE were explored by modeling the auditory nerve fiber and using a binaural, cross-correlation model whose outputs were weighted across frequency to predict perceived location. New behavioral results confirmed model predictions that (1) lateralization of narrowband clicks is strongly influenced by the stimulus center frequency and the inter-stimulus delay (ISD) between leading and lagging clicks, and (2) decrements in the leading click level influence lateralization of wideband clicks differently at different ISDs. The role of adaptation was explored by modeling neurons in the cochlear nucleus and the medial superior olive (MSO), both of which are important in computing the localization cues of the auditory stimuli. Simulation results indicated that low-threshold potassium currents (a form of fast adaptation) can prevent jittery, subthreshold inputs from accumulating, thus enhancing synchronization. Synaptic depression (a form of slow adaptation) can produce a sustained decline of the responses after accurately encoding the stimulus onset. The role of long-lasting inhibition was explored by modeling inferior colliculus neurons with inhibitory inputs from both ipsilateral and contralateral MSOs. Psychophysical predictions were generated from a population of model neurons. The model simulated how the physiological suppression of the lagging response depends on the ISD and relative lead and lag locations, as well as behavioral results showing that the perceived location
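
    For orientation, the sketch below shows only the binaural cross-correlation stage of such a model, applied to a toy click with a hypothetical 300-microsecond interaural time difference (ITD). It omits the auditory-nerve front end, frequency weighting, and adaptation stages described in the thesis; in the full model, adding the lagging click introduces a second correlation peak whose influence those later stages are meant to suppress.

```python
import numpy as np

fs = 100_000                                    # assumed sampling rate (Hz)

def binaural_click(itd_s, t0=1e-3, n=1000):
    """One broadband click presented with a given ITD (right ear leads if itd_s > 0)."""
    left, right = np.zeros(n), np.zeros(n)
    right[int(t0 * fs)] = 1.0
    left[int((t0 + itd_s) * fs)] = 1.0
    return left, right

left, right = binaural_click(itd_s=300e-6)

max_lag = int(1e-3 * fs)                        # search lags of +/- 1 ms
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.sum(left * np.roll(right, lag)) for lag in lags]
est_itd = lags[int(np.argmax(xcorr))] / fs
print(f"cross-correlation peak at {est_itd * 1e6:.0f} us")
```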

  15. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals.

    Science.gov (United States)

    Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W

    2014-02-01

    The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development.

  16. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    Science.gov (United States)

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  17. Subcortical neural coding mechanisms for auditory temporal processing.

    Science.gov (United States)

    Frisina, R D

    2001-08-01

    Biologically relevant sounds such as speech, animal vocalizations and music have distinguishing temporal features that are utilized for effective auditory perception. Common temporal features include sound envelope fluctuations, often modeled in the laboratory by amplitude modulation (AM), and starts and stops in ongoing sounds, which are frequently approximated by hearing researchers as gaps between two sounds or are investigated in forward masking experiments. The auditory system has evolved many neural processing mechanisms for encoding important temporal features of sound. Due to rapid progress made in the field of auditory neuroscience in the past three decades, it is not possible to review all progress in this field in a single article. The goal of the present report is to focus on single-unit mechanisms in the mammalian brainstem auditory system for encoding AM and gaps as illustrative examples of how the system encodes key temporal features of sound. This report, following a systems analysis approach, starts with findings in the auditory nerve and proceeds centrally through the cochlear nucleus, superior olivary complex and inferior colliculus. Some general principles can be seen when reviewing this entire field. For example, as one ascends the central auditory system, a neural encoding shift occurs. An emphasis on synchronous responses for temporal coding exists in the auditory periphery, and more reliance on rate coding occurs as one moves centrally. In addition, for AM, modulation transfer functions become more bandpass as the sound level of the signal is raised, but become more lowpass in shape as background noise is added. In many cases, AM coding can actually increase in the presence of background noise. For gap processing or forward masking, coding for gaps changes from a decrease in spike firing rate for neurons of the peripheral auditory system that have sustained response patterns, to an increase in firing rate for more central neurons with
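
    Synchrony of spiking to an AM envelope, as discussed in this record, is typically quantified with vector strength. A short sketch with hypothetical spike times (not data from the review):

```python
import numpy as np

def vector_strength(spike_times_s, mod_freq_hz):
    """Vector strength of spike times relative to a modulation frequency (0 to 1)."""
    phases = 2 * np.pi * mod_freq_hz * np.asarray(spike_times_s)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(0)
fm = 10.0                                                          # AM rate (Hz)
locked = (np.arange(50) + 0.05 * rng.standard_normal(50)) / fm     # spikes clustered at one phase
random = rng.uniform(0, 5.0, 50)                                   # unsynchronized spikes

print("synchronized   VS =", round(vector_strength(locked, fm), 2))
print("unsynchronized VS =", round(vector_strength(random, fm), 2))
```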

  18. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    Science.gov (United States)

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. NFB using an auditory stimulus as reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups had signs consistent with EEG maturation, but only the Auditory Group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved the cognitive abilities more than the visual reinforcer.

  19. The use of visual stimuli during auditory assessment.

    Science.gov (United States)

    Pearlman, R C; Cunningham, D R; Williamson, D G; Amerman, J D

    1975-01-01

    Two groups of male subjects beyond 50 years of age were given audiometric tasks with and without visual stimulation to determine if visual stimuli changed auditory perception. The first group consisted of 10 subjects with normal auditory acuity; the second, 10 with sensorineural hearing losses greater than 30 decibels. The rate of presentation of the visual stimuli, consisting of photographic slides of various subjects, was determined in experiment I of the study. The subjects, while viewing the slides at their own rate, took an auditory speech discrimination test. They were advised to change the slides at a speed which they felt facilitated attention while performing the auditory task. The mean rate of slide-changing behavior was used as the "optimum" visual stimulation rate in experiment II, which was designed to explore the interaction of the bisensory presentation of stimuli. Bekesy tracings and Rush Hughes recordings were administered without and with visual stimuli, the latter presented at the mean rate of slide changes found in experiment I. Analysis of data indicated that (1) no statistically significant difference exists between visual and nonvisual conditions during speech discrimination and Bekesy testing; and (2) subjects did not believe that visual stimuli as presented in this study helped them to listen more effectively. The experimenter concluded that the various auditory stimuli encountered in the auditory test situation may actually be a deterrent to boredom because of the variety of tasks required in a testing situation.

  20. Coding of melodic gestalt in human auditory cortex.

    Science.gov (United States)

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them: the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants with different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independently of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.
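
    The key invariance can be made concrete: representing a melody by its interval (relative pitch) profile leaves it unchanged under transposition to another key. A small sketch with hypothetical MIDI note numbers:

```python
import numpy as np

def interval_profile(midi_notes):
    """Relative pitch profile: successive intervals in semitones."""
    return np.diff(np.asarray(midi_notes))

melody_c_major = [60, 64, 67, 64]            # C4 E4 G4 E4
melody_d_major = [62, 66, 69, 66]            # the same melody transposed up a whole tone

print(interval_profile(melody_c_major))      # [ 4  3 -3]
print(interval_profile(melody_d_major))      # identical profile -> same "Gestalt"
```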

  1. Head Tracking of Auditory, Visual, and Audio-Visual Targets.

    Science.gov (United States)

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2015-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses to auditory targets against visual-only and audio-visual "bisensory" stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.
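
    A sketch of how tracking metrics of this kind could be computed from target and head azimuth traces follows. The traces, sampling rate and the specific definitions (e.g., of onset error) are assumptions for illustration and may differ from those used in the paper:

```python
import numpy as np

fs = 100.0                                           # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
target = 50.0 * t - 50.0                             # target sweeping -50 to +50 deg at 50 deg/s
head = np.clip(45.0 * (t - 0.15) - 50.0, -50, 50)    # hypothetical lagging, slightly slower response

rms_error = np.sqrt(np.mean((head - target) ** 2))
gain = np.polyfit(target, head, 1)[0]                # slope of head position vs. target position
onset = t[np.argmax(np.abs(np.gradient(head)) > 1e-6)]   # first sample with head movement

print(f"RMS error = {rms_error:.1f} deg, gain = {gain:.2f}, onset = {onset * 1000:.0f} ms")
```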

  2. Brainstem auditory evoked potentials in children with lead exposure

    Directory of Open Access Journals (Sweden)

    Katia de Freitas Alvarenga

    2015-02-01

    Full Text Available Introduction: Earlier studies have demonstrated an auditory effect of lead exposure in children, but information on the effects of low chronic exposures needs to be further elucidated. Objective: To investigate the effect of low chronic exposures on the auditory system in children with a history of low blood lead levels, using an auditory electrophysiological test. Methods: Contemporary cross-sectional cohort. Study participants underwent tympanometry, pure tone and speech audiometry, transient evoked otoacoustic emissions, and brainstem auditory evoked potentials, with blood lead monitoring over a period of 35.5 months. The study included 130 children, with ages ranging from 18 months to 14 years, 5 months (mean age 6 years, 8 months ± 3 years, 2 months). Results: The mean time-integrated cumulative blood lead index was 12 µg/dL (SD ± 5.7, range 2.4-33). All participants had hearing thresholds equal to or below 20 dBHL and normal amplitudes of transient evoked otoacoustic emissions. No association was found between the absolute latencies of waves I, III, and V, the interpeak latencies I-III, III-V, and I-V, and the cumulative lead values. Conclusion: No evidence of toxic effects from chronic low lead exposures was observed on the auditory function of children living in a lead contaminated area.

  3. Enhanced representation of spectral contrasts in the primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Nicolas eCatz

    2013-06-01

    Full Text Available The role of early auditory processing may be to extract some elementary features from an acoustic mixture in order to organize the auditory scene. To accomplish this task, the central auditory system may rely on the fact that sensory objects are often composed of spectral edges, i.e. regions where the stimulus energy changes abruptly over frequency. The processing of acoustic stimuli may benefit from a mechanism enhancing the internal representation of spectral edges. While the visual system is thought to rely heavily on this mechanism (enhancing spatial edges), it is still unclear whether a related process plays a significant role in audition. We investigated the cortical representation of spectral edges, using acoustic stimuli composed of multi-tone pips whose time-averaged spectral envelope contained suppressed or enhanced regions. Importantly, the stimuli were designed such that neural response properties could be assessed as a function of stimulus frequency during stimulus presentation. Our results suggest that the representation of acoustic spectral edges is enhanced in the auditory cortex, and that this enhancement is sensitive to the characteristics of the spectral contrast profile, such as depth, sharpness and width. Spectral edges are maximally enhanced for sharp contrast and large depth. Cortical activity was also suppressed at frequencies within the suppressed region. Of note, the suppression of firing was larger at frequencies near the lower edge of the suppressed region than at the upper edge. Overall, the present study gives critical insights into the processing of spectral contrasts in the auditory system.

  4. Prevalence of auditory changes in newborns in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Guimarães, Valeriana de Castro

    2012-01-01

    Full Text Available Introduction: Early diagnosis of and intervention in hearing loss are of fundamental importance for child development, and hearing loss is more prevalent than other disorders detected at birth. Objective: To estimate the prevalence of auditory alterations in newborns in a teaching hospital. Method: Prospective cross-sectional study that evaluated 226 newborns delivered in a public hospital between May 2008 and May 2009. Results: Of the 226 infants screened, 46 (20.4%) showed absent emissions and were referred for a second emissions test. Of the 26 (56.5%) children who returned for the retest, 8 (30.8%) still showed absent emissions and were referred to the otolaryngologist. Five (55.5%) attended and were examined by the physician; of these, 3 (75.0%) had normal otoscopy and were referred for brainstem auditory evoked potential (PEATE) evaluation. Of all the children studied, 198 (87.6%) showed emissions in at least one of the tests, and 2 (0.9%) received a diagnosis of deafness. Conclusion: The prevalence of auditory alterations in the studied population was 0.9%. The study offers relevant epidemiological data and presents the first report on the subject, supplying preliminary results for the future implementation and development of a neonatal hearing screening program.

  5. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia Conde

    2012-05-01

    Full Text Available An efficient and fast auditory–motor network is a basic resource for trained musicians, owing to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration between the two modalities plays an essential role over the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians, using a serial reaction time task (SRTT). Our hypothesis was that musicians, owing to their extensive auditory–motor practice routine during musical training, show superior performance and learning when receiving auditory feedback during the SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback is capable of reinforcing SRTT performance but not learning, a finding that might provide further insight into auditory–motor integration in musicians at the behavioral level.

  6. The auditory attention status in Iranian bilingual and monolingual people

    Directory of Open Access Journals (Sweden)

    Nayiere Mansoori

    2013-05-01

    Full Text Available Background and Aim: Bilingualism, one of the much-discussed issues in psychology and linguistics, can influence speech processing. Among the several tests for assessing auditory processing, the dichotic digit test has been designed to study divided auditory attention. Our study was performed to compare divided auditory attention between Iranian bilingual and monolingual young adults. Methods: This cross-sectional study was conducted on 60 students, including 30 Turkish-Persian bilinguals and 30 Persian monolinguals of both genders, aged 18 to 30 years. The dichotic digit test was performed on young individuals with normal peripheral hearing and right-hand preference. Results: No significant difference was found between the dichotic digit test results of monolinguals and bilinguals (p=0.195), nor between the results of the right and left ears in the monolingual (p=0.460) and bilingual (p=0.054) groups. The mean score of women was significantly higher than that of men (p=0.031). Conclusion: There was no significant difference between bilinguals and monolinguals in divided auditory attention, and it seems that acquisition of a second language at an early age has no noticeable effect on this type of auditory attention.

  7. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    Science.gov (United States)

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
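
    The following sketch illustrates the kind of analysis described above: fitting a generalized additive model with a smooth term per stimulus dimension plus an interaction term, and asking whether the interaction contributes. It is not the authors' code; it assumes the pygam package, and the stimulus dimensions and spike counts are synthetic stand-ins.

```python
# Minimal GAM sketch (not the authors' code): relate spike counts to two stimulus
# dimensions, including their interaction. Assumes the `pygam` package; data are synthetic.
import numpy as np
from pygam import PoissonGAM, s, te

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0, 1, size=(n, 2))           # two stimulus dimensions (hypothetical)
rate = np.exp(0.5 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1])
y = rng.poisson(rate)                        # simulated spike counts

# Smooth terms for each dimension plus a tensor-product interaction term.
gam = PoissonGAM(s(0) + s(1) + te(0, 1)).fit(X, y)
gam.summary()                                # a reliable interaction term indicates
                                             # integration across stimulus dimensions
```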

  8. Modulating human auditory processing by transcranial electrical stimulation

    Directory of Open Access Journals (Sweden)

    Kai Heimrath

    2016-03-01

    Full Text Available Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS) has emerged. However, while a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focusing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders.

  9. Spatial organization of tettigoniid auditory receptors: insights from neuronal tracing.

    Science.gov (United States)

    Strauß, Johannes; Lehmann, Gerlind U C; Lehmann, Arne W; Lakes-Harlan, Reinhard

    2012-11-01

    The auditory sense organ of Tettigoniidae (Insecta, Orthoptera) is located in the foreleg tibia and consists of scolopidial sensilla which form a row termed the crista acustica. The crista acustica is associated with the tympana and the auditory trachea. This ear is a highly ordered, tonotopic sensory system. Although the neuroanatomy of the crista acustica has been documented for several species, the most distal somata and dendrites of receptor neurons have occasionally been described as forming an alternating or double row. We investigated the spatial arrangement of receptor cell bodies and dendrites by retrograde tracing with cobalt chloride solution. In the six tettigoniid species studied, distal receptor neurons are consistently arranged in double rows of somata rather than a linear sequence. This arrangement is shown to affect 30-50% of the overall auditory receptors. No strict correlation of somata positions between the anterior-posterior and dorso-ventral axes was evident within the distal crista acustica. Dendrites of distal receptors occasionally also occur in a double row or are even massed without clear order. Thus, a substantial part of the auditory receptors can deviate from a strictly linear organization into a more complex morphology. The linear organization of dendrites is therefore not a morphological criterion that allows hearing organs to be distinguished, in all species, from non-hearing sense organs serially homologous to ears. The crowded arrangement of both receptor somata and dendrites may result from functional constraints relating to frequency discrimination, or from developmental constraints on auditory morphogenesis in postembryonic development.

  10. Modeling of Auditory Neuron Response Thresholds with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Frederic Venail

    2015-01-01

    Full Text Available The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position into the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (nucleus device). The electrical response, measured using the auto-NRT (neural responses telemetry) algorithm, has been analyzed using multivariate regression with cubic splines in order to take into account the variations of insertion depth of electrodes amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = −0.11 ± 0.02, P < 0.01), the scalar placement of the electrodes (β = −8.50 ± 1.97, P < 0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). Distribution of NRT residues according to CNF could provide a proxy of auditory neurons functioning in implanted cochleas.

  11. Modeling of Auditory Neuron Response Thresholds with Cochlear Implants.

    Science.gov (United States)

    Venail, Frederic; Mura, Thibault; Akkari, Mohamed; Mathiolon, Caroline; Menjot de Champfleur, Sophie; Piron, Jean Pierre; Sicard, Marielle; Sterkers-Artieres, Françoise; Mondain, Michel; Uziel, Alain

    2015-01-01

    The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position into the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (nucleus device). The electrical response, measured using auto-NRT (neural responses telemetry) algorithm, has been analyzed using multivariate regression with cubic splines in order to take into account the variations of insertion depth of electrodes amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = -0.11 ± 0.02, P < 0.01), the scalar placement of the electrodes (β = -8.50 ± 1.97, P < 0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). Distribution of NRT residues according to CNF could provide a proxy of auditory neurons functioning in implanted cochleas.
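
    As a rough illustration of the modeling approach described in the two records above (not the authors' implementation), the sketch below fits NRT thresholds with linear terms for the technical covariates and a cubic-spline term for insertion depth expressed as characteristic frequency. It assumes statsmodels/patsy; the data and variable names are invented.

```python
# Minimal sketch: NRT thresholds modeled with linear covariates plus a cubic spline
# for insertion depth (characteristic frequency, CNF). Synthetic data, illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "impedance_sq": rng.uniform(20, 120, n),      # squared electrode impedance (assumed units)
    "scala_tympani": rng.integers(0, 2, n),       # scalar placement indicator
    "cnf_khz": rng.uniform(0.3, 8.0, n),          # characteristic frequency at electrode depth
})
df["nrt_threshold"] = (200 - 0.11 * df.impedance_sq - 8.5 * df.scala_tympani
                       + 5 * np.log(df.cnf_khz) + rng.normal(0, 5, n))

# Cubic regression spline (cr) for depth, linear terms for the technical factors.
model = smf.ols("nrt_threshold ~ impedance_sq + scala_tympani + cr(cnf_khz, df=4)",
                data=df).fit()
print(model.summary())
# Residuals examined as a function of CNF could then serve as a proxy for neural
# survival along the cochlea, as suggested in the abstract.
```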

  12. Role of the auditory system in speech production.

    Science.gov (United States)

    Guenther, Frank H; Hickok, Gregory

    2015-01-01

    This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This in turn is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.

  13. Head Tracking of Auditory, Visual and Audio-Visual Targets

    Directory of Open Access Journals (Sweden)

    Johahn Leung

    2016-01-01

    Full Text Available The ability to actively follow a moving auditory target with our heads remains unexplored, even though it is a common behavioral response. Previous studies of auditory motion perception have focused on conditions where the subjects are passive. The current study examined head tracking of a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20°/s to 110°/s. By integrating high-fidelity virtual auditory space with high-speed visual presentation, we compared tracking responses to auditory targets against visual-only and audio-visual bisensory stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.
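
    A minimal sketch of how the three tracking metrics named above could be computed from sampled target and head trajectories. The abstract does not give the exact definitions used in the study, so the onset criterion and the slope-based gain below are assumptions for illustration only.

```python
# Illustrative tracking metrics (onset latency, RMS error, gain); not the authors' analysis.
import numpy as np

def tracking_metrics(t, target_deg, head_deg, onset_threshold=2.0):
    """Return (onset latency in s, RMS error in deg, gain) for one tracking trial."""
    moved = np.abs(head_deg - head_deg[0]) > onset_threshold   # first clear head movement
    onset = t[np.argmax(moved)] if moved.any() else np.nan
    rms_error = float(np.sqrt(np.mean((head_deg - target_deg) ** 2)))
    gain = float(np.polyfit(target_deg, head_deg, 1)[0])       # slope of head vs target position
    return onset, rms_error, gain

# Example: a target sweeping at 50 deg/s, tracked with a 150 ms lag and gain a bit below 1.
t = np.linspace(0, 2, 201)
target = 50 * t - 50
head = np.where(t < 0.15, -45.0, 0.9 * (50 * (t - 0.15) - 50))
print(tracking_metrics(t, target, head))
```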

  14. Speech identification and cortical potentials in individuals with auditory neuropathy

    Directory of Open Access Journals (Sweden)

    Vanaja CS

    2008-03-01

    Full Text Available Abstract Background The present study investigated the relationship between speech identification scores in quiet and parameters of cortical potentials (latency of P1, N1, and P2; and amplitude of N1/P2) in individuals with auditory neuropathy. Methods Ten individuals with auditory neuropathy (five males and five females) and ten individuals with normal hearing, in the age range of 12 to 39 yr, participated in the study. Speech identification ability was assessed for bi-syllabic words and cortical potentials were recorded for click stimuli. Results Results revealed that in individuals with auditory neuropathy, speech identification scores were significantly poorer than those of individuals with normal hearing. Individuals with auditory neuropathy were further classified into two groups, Good Performers and Poor Performers, based on their speech identification scores. It was observed that the mean amplitude of N1/P2 of Poor Performers was significantly lower than that of Good Performers and of those with normal hearing. There was no significant effect of group on the latency of the peaks. Speech identification scores showed a good correlation with the amplitude of cortical potentials (N1/P2 complex) but did not show a significant correlation with the latency of cortical potentials. Conclusion The results of the present study suggest that measuring cortical potentials may offer a means of predicting perceptual skills in individuals with auditory neuropathy.

  15. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    Science.gov (United States)

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  16. Effect of background music on auditory-verbal memory performance

    Directory of Open Access Journals (Sweden)

    Sona Matloubi

    2014-12-01

    Full Text Available Background and Aim: Music exists in all cultures; many scientists are seeking to understand how music affects cognitive development, such as comprehension, memory, and reading skills. More recently, a considerable number of neuroscience studies on music have been conducted. This study aimed to investigate the effects of null and positive background music, in comparison with silence, on auditory-verbal memory performance. Methods: Forty young adults (male and female) with normal hearing, aged between 18 and 26, participated in this comparative-analysis study. An auditory and speech evaluation was conducted in order to investigate the effects of background music on working memory. Subsequently, the Rey auditory-verbal learning test was performed under three conditions: silence, positive music, and null music. Results: The mean score of the Rey auditory-verbal learning test in the silence condition was higher than in the positive music condition (p=0.003) and the null music condition (p=0.01). The test results did not reveal any gender differences. Conclusion: It seems that the presence of competing music (positive and null music) and the orientation of auditory attention have negative effects on the performance of verbal working memory, possibly owing to the interference of music with verbal information processing in the brain.

  17. Asymmetric transfer of auditory perceptual learning

    Directory of Open Access Journals (Sweden)

    Sygal Amitay

    2012-11-01

    Full Text Available Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioural thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers upon which computational modeling of phase locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioral FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window, which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.
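
    The signal-detection-theory link between internal noise and frequency-discrimination thresholds invoked above can be written down compactly. The sketch below is a generic illustration, not the authors' phase-locking model; the 79% criterion and the noise values are arbitrary assumptions.

```python
# Generic SDT sketch: FD threshold set by Gaussian internal noise on each frequency estimate
# in a two-interval task. Halving the internal noise halves the predicted threshold.
import numpy as np
from scipy.stats import norm

def p_correct(delta_f_hz, internal_noise_hz):
    """P(correct) when choosing the interval with the higher noisy frequency estimate."""
    return norm.cdf(delta_f_hz / (internal_noise_hz * np.sqrt(2)))

def fd_threshold(internal_noise_hz, criterion=0.79):
    """Frequency difference (Hz) yielding the criterion level of performance."""
    return norm.ppf(criterion) * internal_noise_hz * np.sqrt(2)

print(fd_threshold(5.0))   # threshold with 5 Hz internal noise (assumed value)
print(fd_threshold(2.5))   # reduced internal noise, proportionally lower threshold
```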

  18. Auditory hair cell innervational patterns in lizards.

    Science.gov (United States)

    Miller, M R; Beck, J

    1988-05-22

    The pattern of afferent and efferent innervation of two to four unidirectional (UHC) and two to nine bidirectional (BHC) hair cells of five different types of lizard auditory papillae was determined by reconstruction of serial TEM sections. The species studied were Crotaphytus wislizeni (iguanid), Podarcis (Lacerta) sicula and P. muralis (lacertids), Ameiva ameiva (teiid), Coleonyx variegatus (gekkonid), and Mabuya multifasciata (scincid). The main object was to determine in which species and in which hair cell types the nerve fibers were innervating only one (exclusive innervation), or two or more hair cells (nonexclusive innervation); how many nerve fibers were supplying each hair cell; how many synapses were made by the innervating fibers; and the total number of synapses on each hair cell. In the species studied, efferent innervation was limited to the UHC, and except for the iguanid, C. wislizeni, it was nonexclusive, each fiber supplying two or more hair cells. Afferent innervation varied both with the species and the hair cell types. In Crotaphytus, both the UHC and the BHC were exclusively innervated. In Podarcis and Ameiva, the UHC were innervated exclusively by some fibers but nonexclusively by others (mixed pattern). In Coleonyx, the UHC were exclusively innervated but the BHC were nonexclusively innervated. In Mabuya, both the UHC and BHC were nonexclusively innervated. The number of afferent nerve fibers and the number of afferent synapses were always larger in the UHC than in the BHC. In Ameiva, Podarcis, and Mabuya, groups of bidirectionally oriented hair cells occur in regions of cytologically distinct UHC, and in Ameiva, unidirectionally oriented hair cells occur in cytologically distinct BHC regions.

  19. Pollute first, clean up later?

    NARCIS (Netherlands)

    Azadi, Hossein; Verheijke, Gijs; Witlox, Frank

    2011-01-01

    There is a growing concern with regard to sustainability in emerging economies like China. The Chinese growth is characterized by a strategy which is known as "pollute first, clean up later". Here we show that based on this strategy, the pollution alarm can often be postponed by a tremendous economi

  20. Later Zhou Sejong's Cultural Policy

    Institute of Scientific and Technical Information of China (English)

    PAN Qing

    2015-01-01

    Sejong wanted to stabilize his rule, so he paid attention to strengthening cultural enlightenment and implemented cultural policy in several respects: educating the people, selecting capable officials, compiling history, restricting Buddhism, and respecting Confucianism. Literary culture gradually flourished. This is conducive to researching the developmental trajectory of the Later Zhou Dynasty.

  1. [Response characteristics of neurons to tone in dorsal nucleus of the lateral lemniscus of the mouse].

    Science.gov (United States)

    Si, Wen-Juan; Cheng, Yan-Ling; Yang, Dan-Dan; Wang, Xin

    2016-02-25

    The dorsal nucleus of the lateral lemniscus (DNLL) is a nucleus in the ascending auditory pathway that casts inhibitory efferent projections to the inferior colliculus. Studies on the DNLL are fewer than those on the auditory brainstem and inferior colliculus, and to date there is no information about the response characteristics of DNLL neurons in the albino mouse. Under free-field conditions, we used extracellular single-unit recording to study the acoustic response characteristics of DNLL neurons in Kunming mice (Mus musculus). Transient (36%) and ongoing (64%) firing patterns were found in 96 DNLL neurons. Neurons with different firing patterns show significant differences in characteristic frequency and minimum threshold. We recorded frequency tuning curves (FTCs) of 87 DNLL neurons; all FTCs exhibit an open "V" shape. There is no significant difference in FTCs between transient and ongoing neurons, but among the ongoing neurons, the FTCs of sustained neurons are sharper than those of onset-plus-sustained neurons and pauser neurons. Our results showed that the characteristic frequency of DNLL neurons of mice was not correlated with recording depth, supporting the view that the mouse DNLL has no frequency topography along the dorso-ventral axis, which differs from cats and some other animals. Furthermore, by using rate-intensity function (RIF) analysis, the mouse DNLL neurons can be classified as monotonic (60%), saturated (31%) and non-monotonic (8%) types. Each RIF type includes transient and ongoing firing patterns. The dynamic range of the transient firing pattern is smaller than that of the ongoing firing patterns (P < 0.05) […] transient firing pattern. Multiple firing patterns and intensity coding of DNLL neurons may derive from the projections from multiple auditory nuclei, and may play different roles in auditory information processing.

  2. Modulation of visually evoked postural responses by contextual visual, haptic and auditory information: a 'virtual reality check'.

    Directory of Open Access Journals (Sweden)

    Georg F Meyer

    Full Text Available Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high-quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: (1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points, which can be haptic, visual or auditory reference signals; (2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; (3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or by the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses to laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high-fidelity virtual environments should mimic those seen in real situations, we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

  3. Preliminary Studies on Differential Expression of Auditory Functional Genes in the Brain After Repeated Blast Exposures

    Science.gov (United States)

    2012-01-01

    Army Medical Research and Materiel Command, Fort Detrick, MD. Abstract: The mechanisms of central auditory processing involved in auditory/vestibular ... transducers in auditory neurons [22-23,45-48]. The frontal cortex and midbrain of blast-exposed mice showed a significant increase in the expression of ... auditory neurons [26]. Other types of molecules involved in calcium regulation, such as calreticulin and calmodulin-dependent protein kinase expression

  4. Tuning Shifts of the Auditory System By Corticocortical and Corticofugal Projections and Conditioning

    OpenAIRE

    Suga, Nobuo

    2011-01-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and non-lemniscal auditory nuclei are different from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending ...

  5. Auditory Memory deficit in Elderly People with Hearing Loss

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2013-06-01

    Full Text Available Introduction: Hearing loss is one of the most common problems in elderly people, and its functional side effects are various. Because hearing loss is such a common impairment in the elderly, the importance of its possible effects on auditory memory is undeniable. This study focuses on the effects of hearing loss on auditory memory. Materials and Methods: The Dichotic Auditory Memory Test (DVMT) was performed on 47 elderly people aged 60 to 80, divided into two groups: the first consisted of 24 elderly people with normal hearing, and the second of 23 elderly people with bilateral, symmetrical, mild-to-moderate high-frequency sensorineural hearing loss due to aging; both genders were included. Results: A significant difference in DVMT scores was observed between elderly people with normal hearing and those with hearing loss (P

  6. Multisensory Interactions between Auditory and Haptic Object Recognition

    DEFF Research Database (Denmark)

    Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte;

    2013-01-01

    Object manipulation produces characteristic sounds and causes specific haptic sensations that facilitate the recognition of the manipulated object. To identify the neural correlates of audio-haptic binding of object features, healthy volunteers underwent functional magnetic resonance imaging while they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG ...

  7. Temporal resolution in the hearing system and auditory evoked potentials

    DEFF Research Database (Denmark)

    Miller, Lee; Beedholm, Kristian

    2008-01-01

    3pAB5. Temporal resolution in the hearing system and auditory evoked potentials. Kristian Beedholm, Institute of Biology, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark, beedholm@mail.dk; Lee A. Miller, Institute of Biology, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark, lee@biology.sdu.dk. A popular type of investigation with auditory evoked potentials (AEP) consists of mapping the dependency of the envelope following response on the AM frequency. This results in what is called the modulation rate transfer function (MRTF). The physiological interpretation of the MRTF is not straightforward, but it is often used as a measure of the ability of the auditory system to encode temporal changes. It is, however, shown here that the MRTF must depend on the waveform of the click-evoked AEP (ceAEP), which does not relate directly to temporal resolution. The theoretical ...
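
    The point that the MRTF must depend on the click-evoked AEP waveform can be illustrated with a toy linear model: if the envelope following response were simply a superposition of ceAEPs driven by the stimulus envelope, the MRTF would be determined by the ceAEP alone. The sketch below makes that point with an invented ceAEP waveform; it is not the authors' model.

```python
# Toy linear model: simulate an envelope-following response as the stimulus AM envelope
# convolved with a click-evoked AEP, then read off the response amplitude at each AM rate.
import numpy as np

fs = 10_000                                                  # sampling rate (Hz)
t = np.arange(0, 0.02, 1 / fs)
ceaep = np.exp(-t / 0.004) * np.sin(2 * np.pi * 500 * t)     # invented click-evoked AEP

def mrtf_point(am_freq, dur=1.0):
    """Amplitude of the simulated envelope-following response at one AM frequency."""
    n = int(dur * fs)
    tt = np.arange(n) / fs
    envelope = 0.5 * (1 + np.sin(2 * np.pi * am_freq * tt))  # stimulus AM envelope
    efr = np.convolve(envelope, ceaep)[:n]                   # superposition of ceAEPs
    spectrum = np.abs(np.fft.rfft(efr)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - am_freq))]      # EFR amplitude at the AM rate

mrtf = {f: mrtf_point(f) for f in (20, 40, 80, 160, 320)}
print(mrtf)   # the shape of this curve is set by the ceAEP, not by temporal resolution per se
```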

  8. Stimulator with arbitrary waveform for auditory evoked potentials

    Energy Technology Data Exchange (ETDEWEB)

    Martins, H R; Romao, M; Placido, D; Provenzano, F; Tierra-Criollo, C J [Universidade Federal de Minas Gerais (UFMG), Departamento de Engenharia Eletrica (DEE), Nucleo de Estudos e Pesquisa em Engenharia Biomedica NEPEB, Av. Ant. Carlos, 6627, sala 2206, Pampulha, Belo Horizonte, MG, 31.270-901 (Brazil)

    2007-11-15

    Technological improvements help many medical areas. Audiometric exams involving auditory evoked potentials can provide better diagnoses of auditory disorders. This paper proposes the development of a stimulator based on a digital signal processor. This stimulator is the first step of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). The stimulator can generate arbitrary waveforms such as sine waves, amplitude-modulated tones, pulses, bursts and pips. The waveforms are generated through a graphical interface programmed in C++ in which the user can define the parameters of the waveform. Furthermore, the user can set the exam parameters such as the number of stimuli, time with stimulation (Time ON) and time without stimulus (Time OFF). Future work will implement the remaining parts of the system, which include electroencephalogram acquisition and the signal processing to estimate and analyze the evoked potential.
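
    The original system generates its waveforms on a DSP with a C++ interface; purely as an illustration of the stimulus types and exam parameters mentioned (AM tones, pips, and Time ON/OFF trains), here is a small Python sketch. All parameter values are arbitrary assumptions.

```python
# Illustrative stimulus generation (not the authors' DSP/C++ code).
import numpy as np

fs = 48_000  # audio sampling rate (Hz), assumed

def am_tone(fc, fm, depth, dur):
    """Amplitude-modulated tone: carrier fc, modulation rate fm, modulation depth 0..1."""
    t = np.arange(int(dur * fs)) / fs
    return (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def tone_pip(fc, dur, rise_fall=0.002):
    """Short tone gated with linear rise/fall ramps."""
    t = np.arange(int(dur * fs)) / fs
    env = np.ones_like(t)
    n_ramp = int(rise_fall * fs)
    ramp = np.linspace(0, 1, n_ramp)
    env[:n_ramp], env[-n_ramp:] = ramp, ramp[::-1]
    return env * np.sin(2 * np.pi * fc * t)

def stimulus_train(stim, n_stimuli, time_on, time_off):
    """Repeat a stimulus with Time ON / Time OFF periods, as in the exam parameters."""
    silence = np.zeros(int(time_off * fs))
    one_cycle = np.concatenate([stim[: int(time_on * fs)], silence])
    return np.tile(one_cycle, n_stimuli)

am = am_tone(fc=1000, fm=40, depth=1.0, dur=0.5)
train = stimulus_train(tone_pip(1000, 0.05), n_stimuli=20, time_on=0.05, time_off=0.45)
```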

  9. Auditory aura in frontal opercular epilepsy: sounds from afar.

    Science.gov (United States)

    Thompson, Stephen A; Alexopoulos, Andreas; Bingaman, William; Gonzalez-Martinez, Jorge; Bulacio, Juan; Nair, Dileep; So, Norman K

    2015-06-01

    Auditory auras are typically considered to localize to the temporal neocortex. Herein, we present two cases of frontal operculum/perisylvian epilepsy with auditory auras. Following a non-invasive evaluation, including ictal SPECT and magnetoencephalography, implicating the frontal operculum, these cases were evaluated with invasive monitoring, using stereoelectroencephalography and subdural (plus depth) electrodes, respectively. Spontaneous and electrically-induced seizures showed an ictal onset involving the frontal operculum in both cases. A typical auditory aura was triggered by stimulation of the frontal operculum in one. Resection of the frontal operculum and subjacent insula rendered one case seizure- (and aura-) free. From a hodological (network) perspective, we discuss these findings with consideration of the perisylvian and insular network(s) interconnecting the frontal and temporal lobes, and revisit the non-invasive data, specifically that of ictal SPECT.

  10. Spontaneous synchronized tapping to an auditory rhythm in a chimpanzee.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2013-01-01

    Humans actively use behavioral synchrony, such as dancing and singing, when they intend to form affiliative relationships. Such advanced synchronous movement occurs even unconsciously when we hear rhythmically complex music. A foundation for this tendency may be an evolutionary adaptation for group living, but the evolutionary origins of human synchronous activity are unclear. Here we show the first evidence that a member of our closest living relatives, a chimpanzee, spontaneously synchronizes her movement with an auditory rhythm: after training to tap illuminated keys on an electric keyboard, one chimpanzee spontaneously aligned her tapping with the sound when she heard an isochronous distractor sound. This result indicates that sensitivity to, and a tendency toward, synchronous movement with an auditory rhythm exist in chimpanzees, although humans may have expanded it into unique forms of auditory and visual communication during the course of human evolution.

  11. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
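
    A toy model can show how the coincidence-detector idea yields pulse-rate selectivity: multiply a direct response by an intrinsically delayed copy of itself, and only pulse periods matching the delay produce coincident input. The sketch below is a deliberately simplified illustration, not the published five-neuron circuit; the 20 ms delay and pulse parameters are invented.

```python
# Toy coincidence detector: selective for pulse trains whose period matches an intrinsic delay.
import numpy as np

fs = 1000                                      # 1 ms resolution
delay_ms = 20                                  # intrinsic delay, i.e. the "preferred" pulse period

def pulse_train(period_ms, n_pulses=10, width_ms=5):
    """Rectangular sound-pulse envelope with the given pulse period."""
    sig = np.zeros(n_pulses * period_ms * fs // 1000 + fs)
    for k in range(n_pulses):
        start = k * period_ms * fs // 1000
        sig[start:start + width_ms * fs // 1000] = 1.0
    return sig

def coincidence_response(sig):
    """Output only where the direct and the delayed responses coincide."""
    delayed = np.roll(sig, delay_ms * fs // 1000)
    delayed[: delay_ms * fs // 1000] = 0.0
    return float(np.sum(sig * delayed))

for period in (15, 20, 30, 40):
    print(period, coincidence_response(pulse_train(period)))
# Only the 20 ms pulse period (matching the intrinsic delay) drives a strong response.
```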

  12. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica Gori

    2012-09-01

    Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002), while children are unable to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, perhaps reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or non-conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task for both perceived time and precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
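
    The "Bayesian predictions" referred to above are the standard maximum-likelihood integration formulas, in which each cue is weighted by its reliability and the predicted bimodal threshold is lower than either unimodal threshold. A minimal sketch, with invented example numbers, is given below.

```python
# Maximum-likelihood ("Bayesian") predictions for bimodal audio-visual estimates
# from unimodal thresholds (after Ernst & Banks, 2002). Example values are invented.
def mle_prediction(sigma_v, sigma_a, est_v, est_a):
    """Return the predicted bimodal estimate (PSE) and threshold (sigma)."""
    w_v = sigma_a**2 / (sigma_v**2 + sigma_a**2)   # vision's weight grows as audition gets noisier
    w_a = 1.0 - w_v
    est_va = w_v * est_v + w_a * est_a             # predicted bimodal estimate
    sigma_va = (sigma_v**2 * sigma_a**2 / (sigma_v**2 + sigma_a**2)) ** 0.5
    return est_va, sigma_va

# Example: vision precise in space (sigma 2 deg), audition coarse (sigma 8 deg):
# the bimodal estimate is pulled toward vision and its threshold is below both unimodal ones.
print(mle_prediction(2.0, 8.0, est_v=0.0, est_a=4.0))
```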

  13. Validation of the Emotiv EPOC® EEG gaming system for measuring research quality auditory ERPs

    OpenAIRE

    Badcock, Nicholas A.; Petroula Mousikou; Yatin Mahajan; Peter de Lissa; Johnson Thie; Genevieve McArthur

    2013-01-01

    Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants – particularly children. Recently, a commercial gaming electroencephalography (EEG) system ha...

  14. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion.

    Science.gov (United States)

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information.

  15. Relationship between Selected Auditory and Visual Receptive Skills and Academic Achievement.

    Science.gov (United States)

    Bryant, Lynda Carol

    To observe the relationship of auditory and visual receptive skills to achievement in reading, 80 eight-year-old children were given a diagnostic test battery which examined three receptive skills--attention to stimuli, discrimination, and memory--within three sensory modalities--auditory, visual, and auditory-visual. The control group consisted…

  16. Effects of Multimodal Presentation and Stimulus Familiarity on Auditory and Visual Processing

    Science.gov (United States)

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2010-01-01

    Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…

  17. Plasticity in tinnitus patients: a role for the efferent auditory system?

    NARCIS (Netherlands)

    Geven, Leontien I.; Koeppl, Christine; de Kleine, Emile; van Dijk, Pim

    2014-01-01

    Hypothesis: The role of the corticofugal efferent auditory system in the origin or maintenance of tinnitus is currently mostly overlooked. Changes in the balance between excitation and inhibition after an auditory trauma are likely to play a role in the origin of tinnitus. The efferent auditory syst

  18. Bilateral Mandibular Condylar Fractures with Associated External Auditory Canal Fractures and Otorrhagia

    OpenAIRE

    Dang, David

    2016-01-01

    A rare case of bilateral mandibular condylar fractures associated with bilateral external auditory canal fractures and otorrhagia is reported. The more severe external auditory canal fracture was present on the side of high condylar fracture, and the less severe external auditory canal fracture was ipsilateral to the condylar neck fracture. A mechanism of injury is proposed to account for such findings.

  19. Bilateral Mandibular Condylar Fractures with Associated External Auditory Canal Fractures and Otorrhagia.

    Science.gov (United States)

    Dang, David

    2007-01-01

    A rare case of bilateral mandibular condylar fractures associated with bilateral external auditory canal fractures and otorrhagia is reported. The more severe external auditory canal fracture was present on the side of high condylar fracture, and the less severe external auditory canal fracture was ipsilateral to the condylar neck fracture. A mechanism of injury is proposed to account for such findings.

  20. Older adults' recognition of bodily and auditory expressions of emotion.

    Science.gov (United States)

    Ruffman, Ted; Sullivan, Susan; Dittrich, Winand

    2009-09-01

    This study compared young and older adults' ability to recognize bodily and auditory expressions of emotion and to match bodily and facial expressions to vocal expressions. Using emotion discrimination and matching techniques, participants assessed emotion in voices (Experiment 1), point-light displays (Experiment 2), and still photos of bodies with faces digitally erased (Experiment 3). Older adults were worse, at least some of the time, at recognizing anger, sadness, fear, and happiness in bodily expressions, and at recognizing anger in vocal expressions. Compared with young adults, older adults also found it more difficult to match auditory expressions to facial expressions (5 of 6 emotions) and to bodily expressions (3 of 6 emotions).