WorldWideScience

Sample records for auditory organs

  1. Neurophysiological mechanisms involved in auditory perceptual organization

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-09-01

    Full Text Available In our complex acoustic environment, we are confronted with a mixture of sounds produced by several simultaneous sources. However, we rarely perceive these sounds as incomprehensible noise. Our brain uses perceptual organization processes to independently follow the emission of each sound source over time. While the acoustic properties exploited in these processes are well established, the neurophysiological mechanisms involved in auditory scene analysis have raised interest only recently. Here, we review the studies investigating these mechanisms using electrophysiological recordings from the cochlear nucleus to the auditory cortex, in animals and humans. Their findings reveal that basic mechanisms such as frequency selectivity, forward suppression and multi-second habituation shape the automatic brain responses to sounds in a way that can account for several important characteristics of perceptual organization of both simultaneous and successive sounds. One challenging question remains unresolved: how are the resulting activity patterns integrated to yield the corresponding conscious percepts?

  2. Tonotopic organization of human auditory association cortex.

    Science.gov (United States)

    Cansino, S; Williamson, S J; Karron, D

    1994-11-07

    Neuromagnetic studies of responses in human auditory association cortex for tone burst stimuli provide evidence for a tonotopic organization. The magnetic source image for the 100 ms component evoked by the onset of a tone is qualitatively similar to that of primary cortex, with responses lying deeper beneath the scalp for progressively higher tone frequencies. However, the tonotopic sequence of association cortex in three subjects is found largely within the superior temporal sulcus, although in the right hemisphere of one subject some sources may be closer to the inferior temporal sulcus. The locus of responses for individual subjects suggests a progression across the cortical surface that is approximately proportional to the logarithm of the tone frequency, as observed previously for primary cortex, with the span of 10 mm for each decade in frequency being comparable for the two areas.
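
    As a rough numerical illustration of the logarithmic progression reported above, the sketch below (Python) estimates the separation between the cortical loci of two tone frequencies, assuming a simple linear-in-log mapping with the quoted span of about 10 mm per decade; the function name and the linearity assumption are illustrative, not the authors' model.

      import math

      MM_PER_DECADE = 10.0  # approximate span reported for both primary and association cortex

      def tonotopic_shift_mm(f1_hz, f2_hz, mm_per_decade=MM_PER_DECADE):
          """Estimated distance between the loci of two tone frequencies, assuming the
          source position varies linearly with log10(frequency)."""
          return mm_per_decade * abs(math.log10(f2_hz) - math.log10(f1_hz))

      # Example: 500 Hz vs. 5 kHz tones span one decade, i.e. about 10 mm.
      print(tonotopic_shift_mm(500, 5000))  # -> 10.0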

  3. Myosin VIIA, important for human auditory function, is necessary for Drosophila auditory organ development.

    Directory of Open Access Journals (Sweden)

    Sokol V Todi

    Full Text Available BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition [1]-[5]. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization [6]. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal [7] and lead to deafness by disrupting antennal auditory organ (Johnston's Organ, JO) organization [8]. ck/MyoVIIA mutations result in apical detachment of auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with the non-muscle myosin II (through its regulatory light chain) and with the myosin binding subunit of myosin II phosphatase. Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.

  4. Spatial organization of tettigoniid auditory receptors: insights from neuronal tracing.

    Science.gov (United States)

    Strauß, Johannes; Lehmann, Gerlind U C; Lehmann, Arne W; Lakes-Harlan, Reinhard

    2012-11-01

    The auditory sense organ of Tettigoniidae (Insecta, Orthoptera) is located in the foreleg tibia and consists of scolopidial sensilla which form a row termed the crista acustica. The crista acustica is associated with the tympana and the auditory trachea. This ear is a highly ordered, tonotopic sensory system. Although the neuroanatomy of the crista acustica has been documented for several species, the most distal somata and dendrites of receptor neurons have only occasionally been described as forming an alternating or double row. We investigated the spatial arrangement of receptor cell bodies and dendrites by retrograde tracing with cobalt chloride solution. In the six tettigoniid species studied, distal receptor neurons are consistently arranged in double rows of somata rather than in a linear sequence. This arrangement affects 30-50% of the overall auditory receptors. No strict correlation of somata positions between the antero-posterior and dorso-ventral axes was evident within the distal crista acustica. Dendrites of distal receptors occasionally also occur in a double row or are even massed without clear order. Thus, a substantial part of the auditory receptors can deviate from a strictly linear organization into a more complex morphology. The linear organization of dendrites is therefore not a morphological criterion that allows hearing organs to be distinguished, in all species, from non-hearing sense organs serially homologous to ears. The crowded arrangement of both receptor somata and dendrites may result from functional constraints relating to frequency discrimination, or from developmental constraints of auditory morphogenesis in postembryonic development.

  5. The structure and function of auditory chordotonal organs in insects.

    Science.gov (United States)

    Yack, Jayne E

    2004-04-15

    Insects are capable of detecting a broad range of acoustic signals transmitted through air, water, or solids. Auditory sensory organs are morphologically diverse with respect to their body location, accessory structures, and number of sensilla, but remarkably uniform in that most are innervated by chordotonal organs. Chordotonal organs are structurally complex Type I mechanoreceptors that are distributed throughout the insect body and function to detect a wide range of mechanical stimuli, from gross motor movements to air-borne sounds. At present, little is known about how chordotonal organs in general function to convert mechanical stimuli to nerve impulses, and our limited understanding of this process represents one of the major challenges to the study of insect auditory systems today. This report reviews the literature on chordotonal organs innervating insect ears, with the broad intention of uncovering some common structural specializations of peripheral auditory systems, and identifying new avenues for research. A general overview of chordotonal organ ultrastructure is presented, followed by a summary of the current theories on mechanical coupling and transduction in monodynal, mononematic, Type 1 scolopidia, which characteristically innervate insect ears. Auditory organs of different insect taxa are reviewed, focusing primarily on tympanal organs, and with some consideration to Johnston's and subgenual organs. It is widely accepted that insect hearing organs evolved from pre-existing proprioceptive chordotonal organs. In addition to certain non-neural adaptations for hearing, such as tracheal expansion and cuticular thinning, the chordotonal organs themselves may have intrinsic specializations for sound reception and transduction, and these are discussed. In the future, an integrated approach, using traditional anatomical and physiological techniques in combination with new methodologies in immunohistochemistry, genetics, and biophysics, will assist in

  6. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria Medalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  7. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  8. Discovery of a lipid synthesising organ in the auditory system of an insect.

    Science.gov (United States)

    Lomas, Kathryn F; Greenwood, David R; Windmill, James F C; Jackson, Joseph C; Corfield, Jeremy; Parsons, Stuart

    2012-01-01

    Weta possess typical Ensifera ears. Each ear comprises three functional parts: two equally sized tympanal membranes, an underlying system of modified tracheal chambers, and the auditory sensory organ, the crista acustica. This organ sits within an enclosed fluid-filled channel, previously presumed to be hemolymph. The role this channel plays in insect hearing is unknown. We discovered that the fluid within the channel is not actually hemolymph, but a medium composed principally of lipid from a new class. Three-dimensional imaging of this lipid channel revealed a previously undescribed tissue structure within the channel, which we refer to as the olivarius organ. Investigations into the function of the olivarius reveal de novo lipid synthesis, indicating that it produces these lipids in situ from acetate. The auditory role of this lipid channel was investigated using laser Doppler vibrometry of the tympanal membrane, which shows that the displacement of the membrane is significantly increased when the lipid is removed from the auditory system. Neural sensitivity of the system, however, decreased upon removal of the lipid, a surprising result considering that in a typical auditory system the mechanical and auditory sensitivities are positively correlated. These two results, coupled with 3D modelling of the auditory system, lead us to hypothesize a model for weta audition that relies strongly on the presence of the lipid channel. This is the first instance of lipids being associated with an auditory system outside the odontocete cetaceans, demonstrating convergence in the use of lipids for hearing.

  9. Similar structural dimensions in bushcricket auditory organs in spite of different foreleg size: consequences for auditory tuning.

    Science.gov (United States)

    Rössler, W; Kalmring, K

    1994-11-01

    The bushcricket species Decticus albifrons, Decticus verrucivorus and Pholidoptera griseoaptera (Tettigoniidae) belong to the same subfamily (Decticinae) but differ significantly in body size. In spite of the great differences in the dimensions of the forelegs, where the auditory organs are located, the most sensitive range of the hearing threshold lies between 6 and 25 kHz in each case. Significant differences are present only in the frequency range from 2 to 5 kHz and above 25 kHz. The anatomy of the auditory receptor organs was compared quantitatively, using the techniques of semi-thin sectioning and computer-guided morphometry. The overall number of scolopidia and the length of the crista acustica differ among the three species, but the relative distribution of scolopidia along the crista acustica is very similar. Additionally, the scolopidia and their attachment structures (tectorial membrane, dorsal tracheal wall, cap cells) are of equal size at equivalent relative positions along the crista acustica. The results indicate that the constant relations and dimensions of corresponding structures within the cristae acusticae of the three species are responsible for the similarities in the tuning of the auditory thresholds.

  10. The evolutionary origin of auditory receptors in Tettigonioidea: the complex tibial organ of Schizodactylidae.

    Science.gov (United States)

    Strauss, Johannes; Lakes-Harlan, Reinhard

    2009-01-01

    Audition in insects is of polyphyletic origin. Tympanal ears derived from proprioceptive or vibratory receptor organs, but many questions about the evolution of insect auditory systems remain open. Despite the rather typical bauplan of the insect body, e.g., with a fixed number of segments, tympanal ears evolved at very different places; only ensiferans have ears at the foreleg tibia, located in the tibial organ. The homology and monophyly of ensiferan ears are controversial, and no precursor organ has been unambiguously identified for auditory receptors. The latter can only be identified by comparative study of recent atympanate taxa, which are poorly investigated. In this paper, we report the neuroanatomy of the tibial organ of Comicus calcaris (Irish 1986), an atympanate schizodactylid (splay-footed cricket). This representative of a Gondwana relict group has a tripartite sensory organ, homologous to tettigoniid ears. A comparison with a morphology-based cladistic phylogeny indicates that the tripartite neuronal organization present in the majority of Tettigonioidea presumably preceded the evolution of hearing in this group. Furthermore, the absence of a tripartite organ in Grylloidea argues against a monophyletic origin and homology of the cricket and katydid ears. The tracheal attachment of sensory neurons typical for ears of Tettigonioidea is present in C. calcaris and may have facilitated co-option for auditory function. The functional auditory organ was presumably formed in evolution by successive non-neural modifications of trachea and tympana. This first investigation of the neuroanatomy of Schizodactylidae suggests a non-auditory chordotonal organ as the precursor of the auditory receptors of related tympanate taxa and adds evidence for the phylogenetic position of the group.

  11. Evoked response audiometry used in testing auditory organs of miners

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, T.; Klepacki, J.; Wagstyl, R.

    1980-01-01

    The evoked response audiometry method of testing hearing loss is presented, and the results of comparative studies using subjective tonal audiometry and evoked response audiometry in tests of 56 healthy men with good hearing are discussed. The men were divided into three groups according to age and place of work: work place without increased noise; work place with noise and vibrations (at drilling machines); work place with noise and shocks (work at excavators in surface coal mines). The ERA-MKII audiometer produced by the Medelec-Amplaid firm was used. Audiometric threshold curves for the three groups of tested men are given. At frequencies of 500, 1000 and 4000 Hz the mean objective auditory threshold was shifted by 4-9.5 dB in comparison to the subjective auditory threshold. (21 refs.) (In Polish)

  12. Tonotopic organization of auditory receptors of the bushcricket Pholidoptera griseoaptera (Tettigoniidae, Decticinae)

    Science.gov (United States)

    Stölting; Stumpner

    1998-11-01

    The peripheral and central tonotopy of auditory receptors of the bushcricket Pholidoptera griseoaptera is described. Out of 24 auditory receptor cells of the crista acustica, 18 were identified by single-cell recordings in the prothoracic ganglion and complete staining with neurobiotin. Proximal receptor cells of the crista acustica were most sensitive to 6 kHz, with medial cells being sensitive to 20-30 kHz, whereas distal cells were most sensitive to frequencies higher than 50 kHz. Projection areas within the auditory neuropile in the prothoracic ganglion were tonotopically arranged. Proximal cells projected anteriorly, medial cells ventrally and posteriorly, and distal cells to more dorsal regions. Identified receptor cells revealed an interindividual variability of tuning and central projections. Receptor cells from the intermediate organ of a bushcricket were identified for the first time. Receptors of the distal intermediate organ were broadly tuned and less sensitive than those of the crista acustica. Receptor cells of the proximal intermediate organ were most sensitive to frequencies below 10 kHz. They projected in anterior portions of the auditory neuropile, whereas cells of the distal intermediate organ had terminations spread over almost the whole auditory neuropile.

  13. Discovery of a lipid synthesising organ in the auditory system of an insect.

    Directory of Open Access Journals (Sweden)

    Kathryn F Lomas

    Full Text Available Weta possess typical Ensifera ears. Each ear comprises three functional parts: two equally sized tympanal membranes, an underlying system of modified tracheal chambers, and the auditory sensory organ, the crista acustica. This organ sits within an enclosed fluid-filled channel, previously presumed to be hemolymph. The role this channel plays in insect hearing is unknown. We discovered that the fluid within the channel is not actually hemolymph, but a medium composed principally of lipid from a new class. Three-dimensional imaging of this lipid channel revealed a previously undescribed tissue structure within the channel, which we refer to as the olivarius organ. Investigations into the function of the olivarius reveal de novo lipid synthesis, indicating that it produces these lipids in situ from acetate. The auditory role of this lipid channel was investigated using laser Doppler vibrometry of the tympanal membrane, which shows that the displacement of the membrane is significantly increased when the lipid is removed from the auditory system. Neural sensitivity of the system, however, decreased upon removal of the lipid, a surprising result considering that in a typical auditory system the mechanical and auditory sensitivities are positively correlated. These two results, coupled with 3D modelling of the auditory system, lead us to hypothesize a model for weta audition that relies strongly on the presence of the lipid channel. This is the first instance of lipids being associated with an auditory system outside the odontocete cetaceans, demonstrating convergence in the use of lipids for hearing.

  14. Interconnected growing self-organizing maps for auditory and semantic acquisition modeling

    Directory of Open Access Journals (Sweden)

    Mengxue Cao

    2014-03-01

    Full Text Available Based on the incremental nature of knowledge acquisition, in this study we propose a growing self-organizing neural network approach for modeling the acquisition of auditory and semantic categories. We introduce an Interconnected Growing Self-Organizing Maps (I-GSOM) algorithm, which takes associations between auditory information and semantic information into consideration. Direct phonetic-semantic association is simulated in order to model language acquisition in its early phases, such as the babbling and imitation stages, in which no phonological representations exist. Based on the I-GSOM algorithm, we conducted experiments using paired acoustic and semantic training data. We use a cyclical reinforcing and reviewing training procedure to model the teaching and learning process between children and their communication partners; a reinforcing-by-link training procedure and a link-forgetting procedure are introduced to model the acquisition of associative relations between auditory and semantic information. Experimental results indicate that (1) I-GSOM has a good ability to learn the auditory and semantic categories presented in the training data; (2) clear auditory and semantic boundaries can be found in the network representation; (3) cyclical reinforcing and reviewing training leads to detailed categorization and clustering, while keeping the clusters that have already been learned and the network structure that has already been developed stable; and (4) reinforcing-by-link training leads to well-perceived auditory-semantic associations. Our I-GSOM model suggests that it is important to associate auditory information with semantic information during language acquisition. Despite its high level of abstraction, our I-GSOM approach can be interpreted as a biologically inspired neurocomputational model.
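
    The following minimal sketch (Python/NumPy) illustrates the core ideas named above: two growing maps, one per modality, trained on paired inputs, with a Hebbian association matrix that is reinforced for co-activated best-matching units and slowly forgotten elsewhere. All parameter names, the growth rule and the forgetting rule are illustrative assumptions, not the published I-GSOM implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      class MiniGrowingSOM:
          def __init__(self, n_nodes, dim, lr=0.3, grow_threshold=5.0):
              self.w = rng.normal(size=(n_nodes, dim))  # node weight vectors
              self.err = np.zeros(n_nodes)              # accumulated quantization error
              self.lr, self.grow_threshold = lr, grow_threshold

          def bmu(self, x):
              return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

          def update(self, x):
              b = self.bmu(x)
              self.err[b] += np.linalg.norm(self.w[b] - x)
              self.w[b] += self.lr * (x - self.w[b])    # move best-matching unit toward input
              if self.err[b] > self.grow_threshold:     # "growing" step: insert a new node
                  self.w = np.vstack([self.w, (self.w[b] + x) / 2.0])
                  self.err = np.append(self.err, 0.0)
                  self.err[b] = 0.0
              return self.bmu(x)

      aud, sem = MiniGrowingSOM(4, 12), MiniGrowingSOM(4, 8)   # one map per modality
      links = np.zeros((len(aud.w), len(sem.w)))               # auditory-semantic associations

      def train_pair(a_vec, s_vec, reinforce=1.0, forget=0.01):
          global links
          ba, bs = aud.update(a_vec), sem.update(s_vec)
          # enlarge the link matrix if either map has grown
          links = np.pad(links, ((0, len(aud.w) - links.shape[0]),
                                 (0, len(sem.w) - links.shape[1])))
          links *= (1.0 - forget)        # link-forgetting
          links[ba, bs] += reinforce     # reinforcing-by-link (Hebbian co-activation)

      for _ in range(200):               # toy paired "auditory"/"semantic" inputs
          train_pair(rng.normal(size=12), rng.normal(size=8))

      print("auditory nodes:", len(aud.w), "semantic nodes:", len(sem.w))
      print("strongest association:", np.unravel_index(links.argmax(), links.shape))

    A fuller model would replace the random toy vectors with acoustic and semantic feature vectors and add neighborhood-based updates, but the paired-maps-plus-link-matrix structure is the point of the sketch.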

  15. Mapping the Tonotopic Organization in Human Auditory Cortex with Minimally Salient Acoustic Stimulation

    NARCIS (Netherlands)

    Langers, Dave R. M.; van Dijk, Pim

    2012-01-01

    Despite numerous neuroimaging studies, the tonotopic organization in human auditory cortex is not yet unambiguously established. In this functional magnetic resonance imaging study, 20 subjects were presented with low-level task-irrelevant tones to avoid spread of cortical activation. Data-driven an

  16. Organization of the auditory brainstem in a lizard, Gekko gecko. I. Auditory nerve, cochlear nuclei, and superior olivary nuclei

    DEFF Research Database (Denmark)

    Tang, Y. Z.; Christensen-Dalsgaard, J.; Carr, C. E.

    2012-01-01

    We used tract tracing to reveal the connections of the auditory brainstem in the Tokay gecko (Gekko gecko). The auditory nerve has two divisions, a rostroventrally directed projection of mid- to high best-frequency fibers to the nucleus angularis (NA) and a more dorsal and caudal projection of lo...... of auditory connections in lizards and archosaurs but also different processing of low- and high-frequency information in the brainstem. J. Comp. Neurol. 520:1784-1799, 2012. (C) 2011 Wiley Periodicals, Inc...

  17. Attentional modulation and domain-specificity underlying the neural organization of auditory categorical perception.

    Science.gov (United States)

    Bidelman, Gavin M; Walker, Breya S

    2017-03-01

    Categorical perception (CP) is highly evident in audition when listeners' perception of speech sounds abruptly shifts identity despite equidistant changes in stimulus acoustics. While CP is an inherent property of speech perception, how (if at all) it is expressed in other auditory modalities (e.g., music) is less clear. Moreover, prior neuroimaging studies have been equivocal on whether attentional engagement is necessary for the brain to categorically organize sound. To address these questions, we recorded neuroelectric brain responses [event-related potentials (ERPs)] from listeners as they rapidly categorized sounds along a speech and a music continuum (active task) or during passive listening. Behaviorally, listeners achieved sharper psychometric functions and faster identification for speech than for musical stimuli, which were perceived in a continuous mode. Behavioral results coincided with stronger ERP differentiation between prototypical and ambiguous tokens (i.e., categorical processing) for speech but not for music. Neural correlates of CP were only observed when listeners actively attended to the auditory signal. These findings were corroborated by brain-behavior associations; changes in neural activity predicted more successful CP (indexed by psychometric slopes) for actively but not passively evoked ERPs. Our results demonstrate that auditory categorization is influenced by attention (active > passive) and is stronger for more familiar/overlearned stimulus domains (speech > music). In contrast to previous studies examining highly trained listeners (i.e., musicians), we infer that (i) CP skills are largely domain-specific and do not generalize to stimuli for which a listener has no immediate experience and (ii) categorical neural processing requires active engagement with the auditory stimulus.
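
    The "psychometric slope" measure referred to above can be made concrete with a small curve-fitting sketch (Python/SciPy): identification rates along a stimulus continuum are fitted with a logistic function, and the fitted slope indexes how sharply perception switches category. The continuum steps and response proportions below are invented for illustration and are not the study's data.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(x, x0, k):
          """Proportion of 'category B' responses; x0 = category boundary, k = slope."""
          return 1.0 / (1.0 + np.exp(-k * (x - x0)))

      steps = np.arange(1, 8)                                     # 7-step continuum (tokens 1..7)
      p_b = np.array([0.02, 0.05, 0.10, 0.55, 0.92, 0.97, 0.99])  # hypothetical identification rates

      (x0, k), _ = curve_fit(logistic, steps, p_b, p0=[4.0, 1.0])
      print(f"boundary at step {x0:.2f}, slope {k:.2f}")          # steeper slope = sharper categorization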

  18. An organization of visual and auditory fear conditioning in the lateral amygdala.

    Science.gov (United States)

    Bergstrom, Hadley C; Johnson, Luke R

    2014-12-01

    Pavlovian fear conditioning is an evolutionarily conserved and extensively studied form of associative learning and memory. In mammals, the lateral amygdala (LA) is an essential locus for Pavlovian fear learning and memory. Despite significant progress unraveling the cellular mechanisms responsible for fear conditioning, very little is known about the anatomical organization of neurons encoding fear conditioning in the LA. One key question is how fear conditioning to different sensory stimuli is organized in LA neuronal ensembles. Here we show that Pavlovian fear conditioning, formed through either the auditory or visual sensory modality, activates a similar density of LA neurons expressing a learning-induced phosphorylated extracellular signal-regulated kinase (p-ERK1/2). While the size of the neuron population specific to either memory was similar, the anatomical distribution differed. Several discrete sites in the LA contained a small but significant number of p-ERK1/2-expressing neurons specific to either sensory modality. The sites were anatomically localized to different levels of the longitudinal plane and were independent of both memory strength and the relative size of the activated neuronal population, suggesting that some portion of the memory trace for auditory and visually cued fear conditioning is allocated differently in the LA. Presenting the visual stimulus by itself did not activate the same p-ERK1/2 neuron density or pattern, confirming that the novelty of light alone cannot account for the specific pattern of activated neurons after visual fear conditioning. Together, these findings reveal an anatomical distribution of visual and auditory fear conditioning at the level of neuronal ensembles in the LA.

  19. Shaping the mammalian auditory sensory organ by the planar cell polarity pathway.

    Science.gov (United States)

    Kelly, Michael; Chen, Ping

    2007-01-01

    The human ear is capable of processing sound with a remarkable resolution over a wide range of intensity and frequency. This ability depends largely on the extraordinary feats of the hearing organ, the organ of Corti and its sensory hair cells. The organ of Corti consists of precisely patterned rows of sensory hair cells and supporting cells along the length of the snail-shaped cochlear duct. On the apical surface of each hair cell, several rows of actin-containing protrusions, known as stereocilia, form a "V"-shaped staircase. The vertices of all the "V"-shaped stereocilia point away from the center of the cochlea. The uniform orientation of stereocilia in the organ of Corti manifests a distinctive form of polarity known as planar cell polarity (PCP). Functionally, the direction of stereociliary bundle deflection controls the mechanical channels located in the stereocilia for auditory transduction. In addition, hair cells are tonotopically organized along the length of the cochlea. Thus, the uniform orientation of stereociliary bundles along the length of the cochlea is critical for effective mechanotransduction and for frequency selection. Here we summarize the morphological and molecular events that bestow the structural characteristics of the mammalian hearing organ, the growth of the snail-shaped cochlear duct and the establishment of PCP in the organ of Corti. The PCP of the sensory organs in the vestibule of the inner ear will also be described briefly.

  20. Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex.

    Science.gov (United States)

    Bidelman, Gavin M; Grall, Jeremy

    2014-11-01

    Pitch relationships in music are characterized by their degree of consonance, a hierarchical perceptual quality that distinguishes how pleasant musical chords/intervals sound to the ear. The origins of consonance have been debated since the ancient Greeks. To elucidate the neurobiological mechanisms underlying these musical fundamentals, we recorded neuroelectric brain activity while participants listened passively to various chromatic musical intervals (simultaneously sounding pitches) varying in their perceptual pleasantness (i.e., consonance/dissonance). Dichotic presentation eliminated acoustic and peripheral contributions that often confound explanations of consonance. We found that neural representations for pitch in early human auditory cortex code perceptual features of musical consonance and follow a hierarchical organization according to music-theoretic principles. These neural correlates emerge pre-attentively within ~ 150 ms after the onset of pitch, are segregated topographically in superior temporal gyrus with a rightward hemispheric bias, and closely mirror listeners' behavioral valence preferences for the chromatic tone combinations inherent to music. A perceptual-based organization implies that parallel to the phonetic code for speech, elements of music are mapped within early cerebral structures according to higher-order, perceptual principles and the rules of Western harmony rather than simple acoustic attributes.

  1. Bedside Evaluation of the Functional Organization of the Auditory Cortex in Patients with Disorders of Consciousness.

    Science.gov (United States)

    Henriques, Julie; Pazart, Lionel; Grigoryeva, Lyudmila; Muzard, Emelyne; Beaussant, Yvan; Haffen, Emmanuel; Moulin, Thierry; Aubry, Régis; Ortega, Juan-Pablo; Gabriel, Damien

    2016-01-01

    To measure the level of residual cognitive function in patients with disorders of consciousness, the use of electrophysiological and neuroimaging protocols of increasing complexity is recommended. This work presents an EEG-based method capable of assessing, at an individual level, the integrity of the auditory cortex at the patient's bedside; it can be seen as the first cortical stage of this hierarchical approach. The method is based on two features: first, the automatic detection of the presence of an N100 wave, and second, evidence of frequency processing in the auditory cortex obtained with a machine learning based classification of the EEG signals associated with different frequencies and auditory stimulation modalities. In the control group of twelve healthy volunteers, cortical frequency processing was clearly demonstrated. EEG recordings from two patients with disorders of consciousness showed evidence of partially preserved cortical processing in the first patient and none in the second patient. From these results, it appears that the classification method presented here reliably detects signal differences in the encoding of frequencies and is a useful tool in the evaluation of the integrity of the auditory cortex. Even though the classification method presented in this work was designed for patients with disorders of consciousness, it can also be applied to other pathological populations.
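
    As a hedged illustration of the classification idea described above (not the authors' pipeline), the sketch below asks whether single-trial EEG epochs can be decoded by stimulation frequency with a cross-validated linear classifier; reliably above-chance accuracy would be taken as evidence of frequency-specific cortical processing. The epoch dimensions, features and classifier choice are assumptions for the example, and the random data used here would of course decode at chance.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)

      n_trials, n_channels, n_samples = 120, 32, 200           # hypothetical epoched EEG
      X_epochs = rng.normal(size=(n_trials, n_channels, n_samples))
      y_freq = rng.integers(0, 2, size=n_trials)                # two stimulation frequencies

      X = X_epochs.reshape(n_trials, -1)                        # flatten channels x time into features

      clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
      scores = cross_val_score(clf, X, y_freq, cv=5)

      # Accuracy reliably above chance (0.5) would indicate frequency-specific information in the EEG.
      print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))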

  2. Comparative observation of protective effects of earplug and barrel on auditory organs of guinea pigs exposed to experimental blast underpressure

    Institute of Scientific and Technical Information of China (English)

    LI Chao-jun; ZHU Pei-fang; LIU Zhao-hua; WANG Zheng-guo; YANG Cheng; CHEN Hai-bin; NING Xin; ZHOU Ji-hong; CHEN Jian

    2006-01-01

    Objective: To explore the protective effects of earplug and barrel on the auditory organs of guinea pigs exposed to experimental blast underpressure (BUP). Methods: The hearing thresholds of the guinea pigs were assessed with auditory brainstem responses (ABR). The traumatic levels of the tympanic membrane and ossicular chain were observed under a stereo-microscope. The rate of outer hair cell (OHC) loss was analyzed using a light microscope. The changes in guinea pigs protected with barrel and earplug were compared with those of the control group without any protection. Results: A significant ABR threshold shift of the guinea pigs without any protection was detected from 8 h to 14 d after exposure to BUP with a peak ranging from -64.5 kPa to -69.3 kPa (P<0.01). The rate of perforation of the tympanic membrane reached 87.5% and that of total OHC loss was 19.46% ± 5.38% at 14 d after exposure. The guinea pigs protected with barrel and earplug had a lower ABR threshold and total OHC loss rate compared with the animals without any protection (P<0.01). All of the tympanic membranes and ossicular chains of the protected animals maintained their integrity. Meanwhile, the guinea pigs protected with the barrel had a lower ABR threshold and total OHC loss rate than those with the earplug (P<0.01). Conclusions: The earplug and barrel have protective effects against BUP-induced trauma on the auditory organs of guinea pigs, and the protective effects of the barrel are better than those of the earplug.

  3. Stimulus transmission in the auditory receptor organs of the foreleg of bushcrickets (Tettigoniidae) I. The role of the tympana.

    Science.gov (United States)

    Bangert, M; Kalmring, K; Sickmann, T; Stephen, R; Jatho, M; Lakes-Harlan, R

    1998-01-01

    The auditory organs of tettigoniids are located just below the femoro-tibial joint in the forelegs. Structurally, each auditory organ consists of a tonotopically organized crista acustica and intermediate organ and associated sound-conducting structures: an acoustic trachea and two lateral tympanic membranes located at the level of the receptor complex. The receptor cells and associated satellite structures are located in a channel filled with hemolymph fluid. The vibratory response characteristics of the tympanic membranes generated by sound stimulation over the frequency range 2-40 kHz have been studied using laser vibrometry. The acoustic trachea was found to be the principal structure through which sound energy reaches the tympana. The velocity of propagation down the trachea was observed to be independent of frequency and appreciably lower than the velocity of sound in free space. Structurally, the tympana are partially in contact with the air in the trachea and with the hemolymph in the channel containing the receptor cells. The two tympana were found to oscillate in phase with a broadband frequency response, linear coherent response characteristics and a small time constant. Higher modes of vibration were not observed. Measurements of the pattern of vibration of the tympana showed that these structures vibrate as hinged flaps rather than as stretched membranes. These findings, together with the morphology of the organ and physiological data from the receptor cells, suggest the possibility of an impedance-matching function for the tympana in the transmission of acoustic energy to the receptor cells in the tettigoniid ear.

  4. Segregation of vowels and consonants in human auditory cortex: Evidence for distributed hierarchical organization

    Directory of Open Access Journals (Sweden)

    Jonas Obleser

    2010-12-01

    Full Text Available The speech signal consists of a continuous stream of consonants and vowels, which must be de- and encoded in human auditory cortex to ensure the robust recognition and categorization of speech sounds. We used small-voxel functional magnetic resonance imaging (fMRI) to study information encoded in local brain activation patterns elicited by consonant-vowel syllables, and by a control set of noise bursts. First, activation of anterior-lateral superior temporal cortex was seen when controlling for unspecific acoustic processing (syllables versus band-passed noises), in a classic subtraction-based design. Second, a classifier algorithm, which was trained and tested iteratively on data from all subjects to discriminate local brain activation patterns, yielded separations of cortical patches discriminative of vowel category versus patches discriminative of stop-consonant category across the entire superior temporal cortex, yet with regional differences in average classification accuracy. Overlap (voxels correctly classifying both speech sound categories) was surprisingly sparse. Third, lending further plausibility to the results, classification of speech-noise differences was generally superior to speech-speech classifications, with the notable exception of a left anterior region, where speech-speech classification accuracies were significantly better. These data demonstrate that acoustic-phonetic features are encoded in complex yet sparsely overlapping local patterns of neural activity distributed hierarchically across different regions of the auditory cortex. The redundancy apparent in these multiple patterns may partly explain the robustness of phonemic representations.

  5. Effects of restricted basilar papillar lesions and hair cell regeneration on auditory forebrain frequency organization in adult European starlings

    Science.gov (United States)

    Irvine, Dexter R. F.; Brown, Mel; Kamke, Marc R.; Rubel, Edwin W

    2009-01-01

    The frequency organization of neurons in the forebrain Field L complex (FLC) of adult starlings was investigated to determine the effects of hair cell (HC) destruction in the basal portion of the basilar papilla (BP) and of subsequent HC regeneration. Conventional microelectrode mapping techniques were used in normal starlings and in lesioned starlings either 2 days or 6–10 weeks after aminoglycoside treatment. Histological examination of the BP and recordings of auditory brainstem evoked responses confirmed massive loss of HCs in the basal portion of the BP and hearing losses at frequencies above 2 kHz in starlings tested 2 days after aminoglycoside treatment. In these birds, all neurons in the region of the FLC in which characteristic frequencies (CFs) normally increase from 2 to 6 kHz had CFs in the range 2–4 kHz. The significantly elevated thresholds of responses in this region of altered tonotopic organization indicated that they were the residue of pre-lesion responses and did not reflect central nervous system plasticity. In the long-term recovery birds, there was histological evidence of substantial HC regeneration. The tonotopic organization of the high frequency region of the FLC did not differ from that in normal starlings, but the mean threshold at CF in this frequency range was intermediate between the values in the normal and lesioned short-recovery groups. The recovery of normal tonotopicity indicates considerable stability of the topography of neuronal connections in the avian auditory system, but the residual loss of sensitivity suggests deficiencies in high-frequency HC function. PMID:19474314

  6. Effects of restricted basilar papillar lesions and hair cell regeneration on auditory forebrain frequency organization in adult European starlings.

    Science.gov (United States)

    Irvine, Dexter R F; Brown, Mel; Kamke, Marc R; Rubel, Edwin W

    2009-05-27

    The frequency organization of neurons in the forebrain Field L complex (FLC) of adult starlings was investigated to determine the effects of hair cell (HC) destruction in the basal portion of the basilar papilla (BP) and of subsequent HC regeneration. Conventional microelectrode mapping techniques were used in normal starlings and in lesioned starlings either 2 d or 6-10 weeks after aminoglycoside treatment. Histological examination of the BP and recordings of auditory brainstem evoked responses confirmed massive loss of HCs in the basal portion of the BP and hearing losses at frequencies >2 kHz in starlings tested 2 d after aminoglycoside treatment. In these birds, all neurons in the region of the FLC in which characteristic frequencies (CFs) normally increase from 2 to 6 kHz had CF in the range of 2-4 kHz. The significantly elevated thresholds of responses in this region of altered tonotopic organization indicated that they were the residue of prelesion responses and did not reflect CNS plasticity. In the long-term recovery birds, there was histological evidence of substantial HC regeneration. The tonotopic organization of the high-frequency region of the FLC did not differ from that in normal starlings, but the mean threshold at CF in this frequency range was intermediate between the values in the normal and lesioned short-recovery groups. The recovery of normal tonotopicity indicates considerable stability of the topography of neuronal connections in the avian auditory system, but the residual loss of sensitivity suggests deficiencies in high-frequency HC function.

  7. Hey2 functions in parallel with Hes1 and Hes5 for mammalian auditory sensory organ development

    Directory of Open Access Journals (Sweden)

    Chin Michael T

    2008-02-01

    Full Text Available Abstract Background: During mouse development, the precursor cells that give rise to the auditory sensory organ, the organ of Corti, are specified prior to embryonic day 14.5 (E14.5). Subsequently, the sensory domain is patterned precisely into one row of inner and three rows of outer sensory hair cells interdigitated with supporting cells. Both the restriction of the sensory domain and the patterning of the sensory mosaic of the organ of Corti involve Notch-mediated lateral inhibition and cellular rearrangement characteristic of convergent extension. This study explores the expression and function of a putative Notch target gene. Results: We report that a putative Notch target gene, the hairy-related basic helix-loop-helix (bHLH) transcription factor Hey2, is expressed in the cochlear epithelium prior to terminal differentiation. Its expression is subsequently restricted to supporting cells, overlapping with the expression domains of two known Notch target genes, the Hairy and enhancer of split homolog genes Hes1 and Hes5. In combination with the loss of Hes1 or Hes5, genetic inactivation of Hey2 leads to increased numbers of mis-patterned inner or outer hair cells, respectively. Surprisingly, the ectopic hair cells in Hey2 mutants are accompanied by ectopic supporting cells. Furthermore, Hey2-/-;Hes1-/- and Hey2-/-;Hes1+/- mutants show complete penetrance of early embryonic lethality. Conclusion: Our results indicate that Hey2 functions in parallel with Hes1 and Hes5 in patterning the organ of Corti, and interacts genetically with Hes1 in early embryonic development and survival. Our data implicate expansion of the progenitor pool and/or of the boundaries of the developing sensory organ as accounting for the patterning defects observed in Hey2 mutants.

  8. Activity in a premotor cortical nucleus of zebra finches is locally organized and exhibits auditory selectivity in neurons but not in glia.

    Directory of Open Access Journals (Sweden)

    Michael H Graber

    Full Text Available Motor functions are often guided by sensory experience, most convincingly illustrated by complex learned behaviors. Key to sensory guidance in motor areas may be the structural and functional organization of sensory inputs and their evoked responses. We study sensory responses in large populations of neurons and neuron-assistive cells in the songbird motor area HVC, an auditory-vocal brain area involved in sensory learning and in adult song production. HVC spike responses to auditory stimulation display remarkable preference for the bird's own song (BOS) compared to other stimuli. Using two-photon calcium imaging in anesthetized zebra finches we measure the spatio-temporal structure of baseline activity and of auditory evoked responses in identified populations of HVC cells. We find strong correlations between calcium signal fluctuations in nearby cells of a given type, both in identified neurons and in astroglia. In identified HVC neurons only, auditory stimulation decorrelates ongoing calcium signals, less for BOS than for other sound stimuli. Overall, calcium transients show strong preference for BOS in identified HVC neurons but not in astroglia, showing diversity in local functional organization among identified neuron and astroglia populations.

  9. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief account of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition, stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features of the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from the cortex in multiple stages to the organ of Corti of the cochlea, is described.

  10. The E3 ligase Ubr3 regulates Usher syndrome and MYH9 disorder proteins in the auditory organs of Drosophila and mammals

    Science.gov (United States)

    Li, Tongchao; Giagtzoglou, Nikolaos; Eberl, Daniel F; Jaiswal, Sonal Nagarkar; Cai, Tiantian; Godt, Dorothea; Groves, Andrew K; Bellen, Hugo J

    2016-01-01

    Myosins play essential roles in the development and function of auditory organs and multiple myosin genes are associated with hereditary forms of deafness. Using a forward genetic screen in Drosophila, we identified an E3 ligase, Ubr3, as an essential gene for auditory organ development. Ubr3 negatively regulates the mono-ubiquitination of non-muscle Myosin II, a protein associated with hearing loss in humans. The mono-ubiquitination of Myosin II promotes its physical interaction with Myosin VIIa, a protein responsible for Usher syndrome type IB. We show that ubr3 mutants phenocopy pathogenic variants of Myosin II and that Ubr3 interacts genetically and physically with three Usher syndrome proteins. The interactions between Myosin VIIa and Myosin IIa are conserved in the mammalian cochlea and in human retinal pigment epithelium cells. Our work reveals a novel mechanism that regulates protein complexes affected in two forms of syndromic deafness and suggests a molecular function for Myosin IIa in auditory organs. DOI: http://dx.doi.org/10.7554/eLife.15258.001 PMID:27331610

  11. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  12. cDNA cloning, tissue distribution, and chromosomal localization of Ocp2, a gene encoding a putative transcription-associated factor predominantly expressed in the auditory organs

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hong; Thalmann, I.; Thalmann, R. [Washington Univ., St. Louis, MO (United States)] [and others]

    1995-06-10

    We report the cloning of the Ocp2 gene encoding OCP-II from a guinea pig organ-of-Corti cDNA library. The predicted open reading frame encodes a protein of 163 amino acids with an estimated molecular mass of 18.6 kDa. A homology search revealed that Ocp2 shares significant sequence similarity with p15, a subunit of transcription factor SIII that regulates the activity of the RNA polymerase II elongation complex. The Ocp2 messenger RNA is expressed abundantly in the cochlea but not significantly in any of the other tissues examined, including brain, eye, heart, intestine, kidney, liver, lung, thigh muscle, and testis, suggesting that the expression of this gene may be restricted to the auditory organs. A polyclonal antiserum was raised against the N-terminal region of OCP-II. Immunohistochemical staining of paraffin-embedded sections of the cochlea showed that OCP-II is localized abundantly in nonsensory cells in the organ of Corti; in addition, it was also detected, at a lower concentration, in vestibular sensory organs, as well as in auditory and vestibular brainstem nuclei. The Ocp2 gene was mapped to mouse chromosomes 4 and 11. Our results suggest that OCP-II may be involved in transcriptional regulation for the development or maintenance of specialized functions of the inner ear. 40 refs., 5 figs.

  13. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  14. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, the perception of a wrong stimulus or, put more precisely, perception in the absence of a stimulus. Here we discuss four definitions of hallucination: 1. perceiving a stimulus without the presence of any object; 2. hallucination proper, i.e. wrong perceptions that are not falsifications of a real perception, although they manifest as a new object and occur along with, and synchronously with, a real perception; 3. hallucination as an out-of-body perception that corresponds to no real object; 4. in the strictest sense, hallucinations are perceptions in a conscious and awake state, in the absence of external stimuli, which have the qualities of real perception, in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.

  15. The auditory organ: active amplifier and highly sensitive measuring system; Das Hoerorgan: Aktiver Schallverstaerker und hochempfindliches Messsystem

    Energy Technology Data Exchange (ETDEWEB)

    Kafka-Luetzow, A. [Univ. Wien (Austria). Inst. fuer Allgemeine und Vergleichende Physiologie

    1997-12-01

    The present paper provides a brief review of topical issues in auditory physiology. Recent data on the transduction mechanism and adaptation in hair cells, as well as on the possible role of outer hair cells in amplifying basilar membrane motion, are presented. Strategies of present physiological research in dealing with sensorineural deafness are discussed. (orig.) [German abstract, translated:] Recent findings in auditory physiology have revealed some of the mechanisms likely responsible for the high sensitivity, the good frequency discrimination, and the non-linear behaviour of this sensory system at increasing loudness. According to these findings, the two types of acoustic sensory cells have entirely different functions. Only one type, the inner hair cells, are likely to be sensors in the strict sense, in that they deliver the essential acoustic information to the central nervous system. The second type, the outer hair cells, appear to act primarily as input amplifiers: they convert the potential change arising at their membrane when sound acts on the ear into rapid longitudinal contractions, thereby amplifying the basilar membrane vibrations elicited by the sound. In addition, the vibrations actively generated by the outer hair cells are probably the source of the "otoacoustic emissions" measurable in the external ear canal. This review summarizes the current state of knowledge on the transduction mechanism and the electromotility of hair cells. It also discusses the possible auditory function of hair cells in the vestibular system, as well as findings from the debate of recent years on a possible regeneration of hair cells in the vestibulocochlear system of adult mammals. In connection with pressure propagation in the inner ear, some morphological peculiarities, in particular of the cochlear fluid spaces and their connections, as well as their functional

  16. Otalgia and eschar in the external auditory canal in scrub typhus complicated by acute respiratory distress syndrome and multiple organ failure

    Directory of Open Access Journals (Sweden)

    Hu Sung-Yuan

    2011-03-01

    Full Text Available Abstract Background Scrub typhus, a mite-transmitted zoonosis caused by Orientia tsutsugamushi, is an endemic disease in Taiwan and may be potentially fatal if diagnosis is delayed. Case presentation We encountered a 23-year-old previously healthy Taiwanese male soldier presenting with right ear pain after training in the jungle and an eleven-day history of intermittent high fever up to 39°C. Amoxicillin/clavulanate was prescribed for otitis media at a local clinic. A skin rash over the whole body and abdominal cramping pain with watery diarrhea appeared on the sixth day of fever. He was referred to our institution because of progressive dyspnea and cough over the 4 days prior to admission. On physical examination, there were cardiopulmonary distress, icteric sclera, an eschar in the right external auditory canal, and bilateral basal rales. Laboratory evaluation revealed thrombocytopenia, elevated liver function tests and acute renal failure. Chest x-ray revealed bilateral diffuse infiltration. Doxycycline was prescribed for scrub typhus with acute respiratory distress syndrome and multiple organ failure. Fever subsided dramatically the next day and he was discharged on day 7 with oral tetracycline for 7 days. Conclusion Scrub typhus should be considered in acutely febrile patients with multiple organ involvement, particularly if there is an eschar or a history of environmental exposure in endemic areas. Rapid and accurate diagnosis, timely administration of antibiotics and intensive supportive care are necessary to decrease mortality from serious complications of scrub typhus.

  17. The auditory-vibratory system of the bushcricket Polysarcus denticauda (Phaneropterinae, Tettigoniidae). I. Morphology of the complex tibial organs.

    Science.gov (United States)

    Sickmann, T; Kalmring, K; Müller, A

    1997-02-01

    The structure of the complex tibial organs in the fore-, mid- and hindlegs of the bushcricket Polysarcus denticauda (Tettigoniidae, Phaneropterinae) is described comparatively. As is common for bushcrickets, in each leg the tibial organs consist of the subgenual and intermediate organs and the crista acustica. Only in the forelegs are sound-transmitting structures present. They consist of the spiracle, acoustic trachea, and two tympana; the latter are not protected by tympanal covers. The tympana in P. denticauda are extremely thick, not only bordering the two tracheal branches to the outside but also forming the outer wall of the hemolymph channel. The morphology of the tracheae in the mid- and hindlegs is significantly different, causing structural differences, especially in dimensions of the hemolymph channel. The number of scolopidia of the crista acustica of the foreleg is extremely high for a bushcricket. Approximately 50 receptor cells were found, about half of them being located in the distal quarter of the long axis of this organ. Some of the receptors are positioned in parallel on the dorsal wall of the anterior tracheal branch. The number, morphology and dimensions of the scolopidia within the crista acustica of the mid- and hindlegs differ significantly from those of the forelegs, decreasing in both legs to eight and seven receptor cells, respectively. Although the dimensions of the subgenual and intermediate organs are considerably larger in the mid- and hindlegs, the number of receptor cells is approximately the same in the different legs, being somewhat higher in both receptor organs than in those of many other bushcricket species studied previously.

  18. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  19. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  20. Linking topography to tonotopy in the mouse auditory thalamocortical circuit

    DEFF Research Database (Denmark)

    Hackett, Troy A; Rinaldi Barkat, Tania; O'Brien, Barbara M J;

    2011-01-01

    The mouse sensory neocortex is reported to lack several hallmark features of topographic organization such as ocular dominance and orientation columns in primary visual cortex or fine-scale tonotopy in primary auditory cortex (AI). Here, we re-examined the question of auditory functional topography...

  1. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  2. Auditory Responses of Infants

    Science.gov (United States)

    Watrous, Betty Springer; And Others

    1975-01-01

    Forty infants, 3- to 12-months-old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3 - 5 months, 6 - 8 months, and 9 - 12 months. (Author)

  3. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Full Text Available Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes of the acoustic cues due to changes in head orientation for online motor control. How to design informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.

  4. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes of the acoustic cues due to changes in head orientation for online motor control. How to design informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  5. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system to music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  6. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
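
    The two internal representations described above can be illustrated numerically: a short-time autocorrelation whose first major peak delay tracks the pitch period, and an interaural cross-correlation maximized over a small lag range (IACC) as a correlate of diffuseness. The sketch below, in Python, captures only the general idea under simplified assumptions; it is not Ando's specific formulation, and all signal parameters are invented.

        import numpy as np

        def normalized_autocorrelation(x, fs, max_lag_ms=30):
            """Autocorrelation of a monaural signal, normalized to its zero-lag value;
            the delay of the first major peak serves as a pitch-period estimate."""
            x = x - x.mean()
            max_lag = int(fs * max_lag_ms / 1000)
            acf = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag)])
            return acf / acf[0]

        def iacc(left, right, fs, max_lag_ms=1.0):
            """Maximum interaural cross-correlation within +/-1 ms, a common correlate
            of perceived diffuseness and envelopment."""
            l = (left - left.mean()) / (np.std(left) + 1e-12)
            r = (right - right.mean()) / (np.std(right) + 1e-12)
            max_lag = int(fs * max_lag_ms / 1000)
            vals = [np.mean(l[max(0, -k):len(l) - max(0, k)] * r[max(0, k):len(r) - max(0, -k)])
                    for k in range(-max_lag, max_lag + 1)]
            return max(vals)

        fs = 16000
        t = np.arange(0, 0.2, 1 / fs)
        tone = np.sin(2 * np.pi * 200 * t)                   # 200 Hz tone
        acf = normalized_autocorrelation(tone, fs)
        skip = int(0.002 * fs)                               # ignore the zero-lag region
        peak_lag_ms = 1000 * (skip + np.argmax(acf[skip:])) / fs
        print(f"pitch-period estimate: {peak_lag_ms:.1f} ms")  # ~5 ms for 200 Hz
        print(f"IACC (identical ear signals): {iacc(tone, tone, fs):.2f}")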

  7. Human Auditory Processing: Insights from Cortical Event-related Potentials

    Directory of Open Access Journals (Sweden)

    Alexandra P. Key

    2016-04-01

    Full Text Available Human communication and language skills rely heavily on the ability to detect and process auditory inputs. This paper reviews possible applications of the event-related potential (ERP) technique to the study of cortical mechanisms supporting human auditory processing, including speech stimuli. Following a brief introduction to the ERP methodology, the remaining sections focus on demonstrating how ERPs can be used in humans to address research questions related to cortical organization, maturation and plasticity, as well as the effects of sensory deprivation, and multisensory interactions. The review is intended to serve as a primer for researchers interested in using ERPs for the study of the human auditory system.

  8. Auditory excitation patterns : the significance of the pulsation threshold method for the measurement of auditory nonlinearity

    NARCIS (Netherlands)

    H. Verschuure (Hans)

    1978-01-01

    The auditory system is the totality of organs that translates an acoustical signal into the perception of a sound. An acoustic signal is a vibration. It is described by physical parameters. The perception of sound is the awareness of a signal being present and the attribution of certain qual

  9. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new si

  10. Virtual Auditory Displays

    Science.gov (United States)

    2000-01-01

    Keywords: timbre, intensity, distance, room modeling, radio communication. Virtual Environments Handbook, Chapter 4: Virtual Auditory Displays (Russell D. …). … the musical note "A" as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or … and complexity are pitch, loudness, and timbre, respectively. This distinction between physical and perceptual measures of sound properties is an …

  11. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  12. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.
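
    Computationally, the MMN index mentioned above is simply a difference wave between event-related averages for deviant and standard sounds. The following is a minimal sketch, assuming epoched EEG data already arranged as a trials-by-samples array; the array layout, sampling rate, and latency window are hypothetical and not tied to any particular acquisition system.

        import numpy as np

        def mismatch_negativity(epochs, labels, fs, window_s=(0.10, 0.25)):
            """Deviant-minus-standard difference wave and its most negative value
            inside a typical MMN latency window (~100-250 ms after sound onset)."""
            epochs = np.asarray(epochs, dtype=float)     # shape: (n_trials, n_samples)
            labels = np.asarray(labels)
            standard_erp = epochs[labels == "std"].mean(axis=0)
            deviant_erp = epochs[labels == "dev"].mean(axis=0)
            difference = deviant_erp - standard_erp
            i0, i1 = int(window_s[0] * fs), int(window_s[1] * fs)
            return difference, difference[i0:i1].min()

        # Example with simulated data: deviants carry an extra negativity at ~150 ms.
        fs, n_samp = 500, 300
        t = np.arange(n_samp) / fs
        rng = np.random.default_rng(1)
        labels = np.array(["dev" if rng.random() < 0.15 else "std" for _ in range(400)])
        base = 0.5 * np.sin(2 * np.pi * 10 * t)
        bump = -2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))
        epochs = np.array([base + (bump if lab == "dev" else 0) + rng.normal(0, 0.5, n_samp)
                           for lab in labels])
        _, mmn_amp = mismatch_negativity(epochs, labels, fs)
        print(f"MMN amplitude: {mmn_amp:.2f} (arbitrary units)")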

  13. Anatomy and Physiology of the Auditory Tracts

    Directory of Open Access Journals (Sweden)

    Mohammad hosein Hekmat Ara

    1999-03-01

    Full Text Available Hearing is one of the most refined senses of human beings. Sound waves travel through the medium of air, enter the ear canal, and strike the tympanic membrane. The middle ear transfers almost 60-80% of this mechanical energy to the inner ear by means of "impedance matching". The sound energy is then converted into a traveling wave that is transferred according to its specific frequency and stimulates the organ of Corti. Receptors in this organ and their synapses transform the mechanical waves into neural signals and transfer them to the brain. The central nervous system tracts conducting auditory signals to the auditory cortex are briefly explained here.

  14. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  15. PLASTICITY IN THE ADULT CENTRAL AUDITORY SYSTEM.

    Science.gov (United States)

    Irvine, Dexter R F; Fallon, James B; Kamke, Marc R

    2006-04-01

    The central auditory system retains into adulthood a remarkable capacity for plastic changes in the response characteristics of single neurons and the functional organization of groups of neurons. The most dramatic examples of this plasticity are provided by changes in frequency selectivity and organization as a consequence of either partial hearing loss or procedures that alter the significance of particular frequencies for the organism. Changes in temporal resolution are also seen as a consequence of altered experience. These forms of plasticity are likely to contribute to the improvements exhibited by cochlear implant users in the post-implantation period.

  16. PLASTICITY IN THE ADULT CENTRAL AUDITORY SYSTEM

    Science.gov (United States)

    Irvine, Dexter R. F.; Fallon, James B.; Kamke, Marc R.

    2007-01-01

    The central auditory system retains into adulthood a remarkable capacity for plastic changes in the response characteristics of single neurons and the functional organization of groups of neurons. The most dramatic examples of this plasticity are provided by changes in frequency selectivity and organization as a consequence of either partial hearing loss or procedures that alter the significance of particular frequencies for the organism. Changes in temporal resolution are also seen as a consequence of altered experience. These forms of plasticity are likely to contribute to the improvements exhibited by cochlear implant users in the post-implantation period. PMID:17572797

  17. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory parameters, highlighting harmonious and balanced qualities while criticizing the noisy and cacophonous qualities of modern urban settings. This paper presents a reaffirmation of Schafer's central methodological claim: that environments can be analyzed through their sound, but offers considerations on the role … musicalized through electro-acoustic equipment installed in shops, shopping streets, transit areas etc. Urban noise no longer acts only as disturbance, but also structures and shapes the places and spaces in which urban life unfolds. Based on research done in Japanese shopping streets and in Copenhagen, the paper …

  18. Metabolic emergent auditory effects by means of physical particle modeling : the example of musical sand

    OpenAIRE

    Luciani, Annie; Castagné, Nicolas; Tixier, Nicolas

    2003-01-01

    International audience; In the context of Computer Music, physical modeling is usually dedicated to the modeling of sound sources or physical instruments. This paper presents an innovative use of physical modeling in order to model and synthesize complex auditory effects such as collective acoustic phenomena producing metabolic emergent auditory organizations. As a case study, we chose the "dune effect", which in open nature leads both to visual and auditory effects. The article introduces tw...

  19. Behind the Scenes of Auditory Perception

    OpenAIRE

    Shamma, Shihab A.; Micheyl, Christophe

    2010-01-01

    “Auditory scenes” often contain contributions from multiple acoustic sources. These are usually heard as separate auditory “streams”, which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the last two years indicate that both cortical and sub-cortical processes contribute to the formation of auditory streams, and they raise importan...

  20. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  1. Clinical Observation on Treatment of Auditory Hallucinosis by Electroacupuncture--A Report of 30 Cases

    Institute of Scientific and Technical Information of China (English)

    Lin Hong; Li Cheng

    2005-01-01

    Auditory hallucinosis, a kind of hallucination within the sensory disturbances, is very common in psychiatric practice. Patients with this disorder hear sounds of varying type or nature in the absence of any appropriate external stimulus. This is especially true in patients with schizophrenia, organic mental disorders, and alcohol-related mental disorders. At present, neuroleptic agents are often used to relieve auditory hallucinosis during treatment of the underlying mental disease, and there is no therapy that reliably eliminates auditory hallucinosis itself. With electro-acupuncture, the authors have treated 30 cases of auditory hallucinosis with satisfactory results. A report follows.

  2. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  3. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  4. Representation of lateralization and tonotopy in primary versus secondary human auditory cortex

    NARCIS (Netherlands)

    Langers, Dave R. M.; Backes, Walter H.; van Dijk, Pim

    2007-01-01

    Functional MRI was performed to investigate differences in the basic functional organization of the primary and secondary auditory cortex regarding preferred stimulus lateralization and frequency. A modified sparse acquisition scheme was used to spatially map the characteristics of the auditory cort

  5. MAP3K1 function is essential for cytoarchitecture of the mouse organ of Corti and survival of auditory hair cells

    Directory of Open Access Journals (Sweden)

    Rizwan Yousaf

    2015-12-01

    Full Text Available MAP3K1 is a serine/threonine kinase that is activated by a diverse set of stimuli and exerts its effect through various downstream effector molecules, including JNK, ERK1/2 and p38. In humans, mutant alleles of MAP3K1 are associated with 46,XY sex reversal. Until recently, the only phenotype observed in Map3k1tm1Yxia mutant mice was open eyelids at birth. Here, we report that homozygous Map3k1tm1Yxia mice have early-onset profound hearing loss accompanied by the progressive degeneration of cochlear outer hair cells. In the mouse inner ear, MAP3K1 has punctate localization at the apical surface of the supporting cells in close proximity to basal bodies. Although the cytoarchitecture, neuronal wiring and synaptic junctions in the organ of Corti are grossly preserved, Map3k1tm1Yxia mutant mice have supernumerary functional outer hair cells (OHCs) and Deiters' cells. Loss of MAP3K1 function resulted in the downregulation of Fgfr3, Fgf8, Fgf10 and Atf3 expression in the inner ear. Fgfr3, Fgf8 and Fgf10 have a role in induction of the otic placode or in otic epithelium development in mice, and their functional deficits cause defects in cochlear morphogenesis and hearing loss. Our studies suggest that MAP3K1 has an essential role in the regulation of these key cochlear morphogenesis genes. Collectively, our data highlight the crucial role of MAP3K1 in the development and function of the mouse inner ear and hearing.

  6. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  7. Auditory Neuropathy - A Case of Auditory Neuropathy after Hyperbilirubinemia

    Directory of Open Access Journals (Sweden)

    Maliheh Mazaher Yazdi

    2007-12-01

    Full Text Available Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal, but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAEs but exhibit an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, when different methods could identify a discrepancy between an absent ABR and measurable hearing thresholds. Speech understanding difficulties are worse than can be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy after hyperbilirubinemia in a child, over a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, the specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to the nerve itself. People with auditory neuropathy have OAE responses but an absent ABR, and hearing thresholds that can remain stable, worsen, or improve.

  8. Musical and auditory hallucinations: A spectrum.

    Science.gov (United States)

    E Fischer, Corinne; Marchie, Anthony; Norris, Mireille

    2004-02-01

    Musical hallucinosis is a rare and poorly understood clinical phenomenon. While an association appears to exist between this phenomenon and organic brain pathology, aging, and sensory impairment, the precise nature of the association remains unclear. The authors present two cases of musical hallucinosis, both in elderly patients with mild-moderate cognitive impairment and mild-moderate hearing loss, who subsequently developed auditory hallucinations and, in one case, command hallucinations. The literature in reference to musical hallucinosis will be reviewed and a theory relating to the development of musical hallucinations will be proposed.

  9. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... CAPD often have trouble maintaining attention, although health, motivation, and attitude also can play a role. Auditory ... programs. Several computer-assisted programs are geared toward children with APD. They mainly help the brain do ...

  10. Discovering Structure in Auditory Input: Evidence from Williams Syndrome

    Science.gov (United States)

    Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette

    2010-01-01

    We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…

  11. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Beyond several theoretical concepts, the brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
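
    The coincidence-detector idea described above can be sketched in a few lines: a pulse only counts as a coincidence if the delayed copy of an earlier pulse arrives within a small window of it, so pulse periods matching the intrinsic delay are selected. The delay, window, and pulse trains in this Python sketch are illustrative placeholders, not the measured cricket parameters.

        import numpy as np

        def coincidence_detector_response(pulse_times_ms, delay_ms=20.0, window_ms=4.0):
            """Fraction of pulses (after the first) for which the delayed response to an
            earlier pulse coincides with the direct response to the current pulse."""
            pulses = np.asarray(pulse_times_ms, dtype=float)
            hits = 0
            for i, p in enumerate(pulses[1:], start=1):
                earlier_delayed = pulses[:i] + delay_ms
                if np.any(np.abs(earlier_delayed - p) <= window_ms):
                    hits += 1
            return hits / max(len(pulses) - 1, 1)

        # A 20-ms pulse period (matching the delay) drives the detector strongly;
        # a 35-ms period does not.
        print(coincidence_detector_response(np.arange(0, 200, 20)))   # -> 1.0
        print(coincidence_detector_response(np.arange(0, 200, 35)))   # -> 0.0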

  12. Enhanced representation of spectral contrasts in the primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Nicolas eCatz

    2013-06-01

    Full Text Available The role of early auditory processing may be to extract some elementary features from an acoustic mixture in order to organize the auditory scene. To accomplish this task, the central auditory system may rely on the fact that sensory objects are often composed of spectral edges, i.e. regions where the stimulus energy changes abruptly over frequency. The processing of acoustic stimuli may benefit from a mechanism enhancing the internal representation of spectral edges. While the visual system is thought to rely heavily on this mechanism (enhancing spatial edges), it is still unclear whether a related process plays a significant role in audition. We investigated the cortical representation of spectral edges, using acoustic stimuli composed of multi-tone pips whose time-averaged spectral envelope contained suppressed or enhanced regions. Importantly, the stimuli were designed such that neural response properties could be assessed as a function of stimulus frequency during stimulus presentation. Our results suggest that the representation of acoustic spectral edges is enhanced in the auditory cortex, and that this enhancement is sensitive to the characteristics of the spectral contrast profile, such as depth, sharpness and width. Spectral edges are maximally enhanced for sharp contrast and large depth. Cortical activity was also suppressed at frequencies within the suppressed region. Of note, the suppression of firing was larger at frequencies near the lower edge of the suppressed region than at the upper edge. Overall, the present study gives critical insights into the processing of spectral contrasts in the auditory system.
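
    A schematic way to think about such edge enhancement is centre-surround (lateral) inhibition across frequency channels, analogous to spatial edge enhancement in vision. The Python sketch below applies an invented surround-subtraction kernel to a spectral profile; it is a toy illustration of the concept, not the neural model used in the study, and the kernel width and gain are arbitrary.

        import numpy as np

        def enhance_spectral_edges(spectrum_db, n_inhib=3, inhib_gain=0.6):
            """Suppress each frequency channel by the average level of its neighbours,
            which makes abrupt spectral transitions (edges) stand out."""
            s = np.asarray(spectrum_db, dtype=float)
            kernel = np.ones(2 * n_inhib + 1)
            kernel[n_inhib] = 0.0                 # exclude the channel itself
            kernel /= kernel.sum()
            surround = np.convolve(s, kernel, mode="same")
            return s - inhib_gain * surround

        # A flat spectrum with a suppressed band: after the centre-surround operation,
        # channels at the notch edges deviate most from their neighbours.
        spec = np.full(60, 40.0)
        spec[20:35] = 10.0                        # suppressed region
        print(np.round(enhance_spectral_edges(spec)[15:40], 1))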

  13. Neural dynamics of phonological processing in the dorsal auditory stream.

    Science.gov (United States)

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.

  14. Merging functional and structural properties of the monkey auditory cortex

    Directory of Open Access Journals (Sweden)

    Olivier eJoly

    2014-07-01

    Full Text Available Recent neuroimaging studies in primates aim to define the functional properties of auditory cortical areas, especially areas beyond A1, in order to further our understanding of the auditory cortical organization. Precise mapping of functional magnetic resonance imaging (fMRI) results and interpretation of their localizations among all the small auditory subfields remains challenging. To facilitate this mapping, we combined here information from cortical folding, micro-anatomy, surface-based atlas and tonotopic mapping. We used, for the first time, a phase-encoded fMRI design for mapping the monkey tonotopic organization. From posterior to anterior, we found a high-low-high progression of frequency preference on the superior temporal plane. We show a faithful representation of the fMRI results on a locally flattened surface of the superior temporal plane. In a tentative scheme to delineate core versus belt regions, which share similar tonotopic organizations, we used the ratio of T1-weighted and T2-weighted MR images as a measure of cortical myelination. Our results, presented alongside a co-registered surface-based atlas, can be interpreted in terms of a current model of the monkey auditory cortex.
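
    In phase-encoded designs of this kind, each voxel's preferred frequency is read from the phase of its response at the sweep repetition frequency. The following is a minimal sketch of that readout, assuming a log-spaced frequency sweep and a single voxel time course; the TR, sweep period, and frequency range are placeholders, and the hemodynamic delay correction a real analysis would apply is omitted.

        import numpy as np

        def phase_encoded_preference(voxel_ts, tr, sweep_period_s, f_lo, f_hi):
            """Map the phase of the Fourier component at the sweep frequency onto the
            swept (log-spaced) tone-frequency range to estimate preferred frequency."""
            ts = np.asarray(voxel_ts, dtype=float)
            ts = ts - ts.mean()
            t = np.arange(len(ts)) * tr
            sweep_f = 1.0 / sweep_period_s
            comp = np.sum(ts * np.exp(-2j * np.pi * sweep_f * t))
            frac = (-np.angle(comp) / (2 * np.pi)) % 1.0    # position within the sweep, 0..1
            return f_lo * (f_hi / f_lo) ** frac             # preferred frequency in Hz

        # Example: a voxel responding 16 s into each 64-s low-to-high sweep (TR = 2 s).
        tr, period, n_vols = 2.0, 64.0, 256
        t = np.arange(n_vols) * tr
        response = np.cos(2 * np.pi * (t - 16.0) / period)
        pref = phase_encoded_preference(response, tr, period, 250, 16000)
        print(f"preferred frequency: {pref:.0f} Hz")        # ~707 Hz, a quarter through the sweep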

  15. Topography of acoustic response characteristics in the auditory cortex of the Kunming mouse

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Topography of acoustic response characteristics in the auditory cortex (AC) of the Kunming (KM) mouse has been examined by using microelectrode recording techniques. Based on best-frequency (BF) maps, both the primary auditory field (AI) and the anterior auditory field (AAF) are tonotopically organized with counter-running frequency gradients. Within an isofrequency stripe, the width of the frequency-threshold curves of single neurons increases, and minimum threshold (MT) decreases, towards more ventral locations. BFs in AI and AAF range from 4 to 38 kHz. Auditory neurons with BFs above 40 kHz are located at the rostrodorsal part of the AC. The findings suggest that the KM mouse is a good model suitable for auditory research.

  16. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
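
    Phase tracking of the kind reported above is commonly quantified as inter-trial phase coherence within a frequency band. A minimal Python sketch follows, assuming the MEG data are already epoched into equal-length trials; the band edges, filter order, and sampling rate are placeholders rather than the study's actual analysis parameters.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def inter_trial_phase_coherence(trials, fs, band):
            """Band-pass each trial, extract instantaneous phase with the Hilbert
            transform, and measure phase consistency across trials at each time point.
            Values near 1 indicate reliable phase tracking of the stimulus."""
            lo, hi = band
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            phases = []
            for tr in trials:                     # trials: iterable of 1-D arrays
                analytic = hilbert(filtfilt(b, a, tr))
                phases.append(np.angle(analytic))
            phases = np.array(phases)
            return np.abs(np.mean(np.exp(1j * phases), axis=0))

        # Example bands loosely matching the scales above: theta (~4-8 Hz) and
        # low gamma (~25-45 Hz); `meg_trials` and fs=600 are hypothetical.
        # itc_theta = inter_trial_phase_coherence(meg_trials, fs=600, band=(4, 8))
        # itc_gamma = inter_trial_phase_coherence(meg_trials, fs=600, band=(25, 45))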

  17. Effects of aging on peripheral and central auditory processing in rats.

    Science.gov (United States)

    Costa, Margarida; Lepore, Franco; Prévost, François; Guillemot, Jean-Paul

    2016-08-01

    Hearing loss is a hallmark sign in the elderly population. Decline in auditory perception provokes deficits in the ability to localize sound sources and reduces speech perception, particularly in noise. In addition to a loss of peripheral hearing sensitivity, changes in more complex central structures have also been demonstrated. In relation to this, the present study examines the auditory directional maps in the deep layers of the superior colliculus of the rat. Anesthetized Sprague-Dawley adult (10 months) and aged (22 months) rats underwent distortion product otoacoustic emission (DPOAE) testing to assess cochlear function. Then, auditory brainstem responses (ABRs) were assessed, followed by extracellular single-unit recordings to determine age-related effects on central auditory functions. DPOAE amplitude levels were decreased in aged rats although they were still present between 3.0 and 24.0 kHz. ABR thresholds in aged rats were significantly elevated at an early (cochlear nucleus - wave II) stage in the auditory brainstem. In the superior colliculus, thresholds were increased and the tuning widths of the directional receptive fields were significantly wider. Moreover, no systematic directional spatial arrangement was present among the neurons of the aged rats, implying that the topographical organization of the auditory directional map was abolished. These results suggest that the deterioration of the auditory directional spatial map can, to some extent, be attributable to age-related dysfunction at more central, perceptual stages of auditory processing.

  18. Effects of chronic stress on the auditory system and fear learning: an evolutionary approach.

    Science.gov (United States)

    Dagnino-Subiabre, Alexies

    2013-01-01

    Stress is a complex biological reaction common to all living organisms that allows them to adapt to their environments. Chronic stress alters the dendritic architecture and function of the limbic brain areas that affect memory, learning, and emotional processing. This review summarizes our research about chronic stress effects on the auditory system, providing the details of how we developed the main hypotheses that currently guide our research. The aims of our studies are to (1) determine how chronic stress impairs the dendritic morphology of the main nuclei of the rat auditory system, the inferior colliculus (auditory mesencephalon), the medial geniculate nucleus (auditory thalamus), and the primary auditory cortex; (2) correlate the anatomic alterations with the impairments of auditory fear learning; and (3) investigate how the stress-induced alterations in the rat limbic system may spread to nonlimbic areas, affecting specific sensory system, such as the auditory and olfactory systems, and complex cognitive functions, such as auditory attention. Finally, this article gives a new evolutionary approach to understanding the neurobiology of stress and the stress-related disorders.

  19. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena that can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study was to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after a cortical stroke, all occurring after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and off therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  20. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  1. Auditory Hallucinations Nomenclature and Classification

    NARCIS (Netherlands)

    Blom, Jan Dirk; Sommer, Iris E. C.

    2010-01-01

    Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an

  2. Nigel: A Severe Auditory Dyslexic

    Science.gov (United States)

    Cotterell, Gill

    1976-01-01

    Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen. (IM)

  3. Brainstem auditory evoked potential abnormalities in type 2 diabetes mellitus

    Directory of Open Access Journals (Sweden)

    Sharat Gupta

    2013-01-01

    Full Text Available Background: Diabetes mellitus represents a syndrome complex in which multiple organ systems, including the central nervous system, are affected. Aim: The study was conducted to determine the changes in the brainstem auditory evoked potentials in type 2 diabetes mellitus. Materials and Methods: A cross-sectional study was conducted on 126 diabetic males, aged 35-50 years, and 106 age-matched, healthy male volunteers. Brainstem auditory evoked potentials were recorded and the results were analyzed statistically using Student's unpaired t-test. The data consisted of wave latencies I, II, III, IV, V and interpeak latencies I-III, III-V and I-V, separately for both ears. Results: The latency of wave IV was significantly delayed only in the right ear, while the latency of waves III, V and interpeak latencies III-V, I-V showed a significant delay bilaterally in diabetic males. However, no significant difference was found between diabetic and control subjects as regards the latency of wave IV in the left ear, or the latencies of waves I, II and interpeak latency I-III bilaterally. Conclusion: Diabetic patients show early involvement of the central auditory pathway, which can be detected with fair accuracy by auditory evoked potential studies.
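
    The group comparison described above amounts to an unpaired t-test per latency measure. The following Python sketch uses fabricated numbers purely for illustration; the means, standard deviations, and the specific interpeak latency shown are not the study's data, only the group sizes mirror those reported.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        # Hypothetical I-V interpeak latencies (ms) for each group
        diabetic_ipl_iv = rng.normal(4.45, 0.20, 126)
        control_ipl_iv = rng.normal(4.30, 0.18, 106)
        t_stat, p_value = stats.ttest_ind(diabetic_ipl_iv, control_ipl_iv)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")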

  4. Segmental processing in the human auditory dorsal stream.

    Science.gov (United States)

    Zaehle, Tino; Geiser, Eveline; Alter, Kai; Jancke, Lutz; Meyer, Martin

    2008-07-18

    In the present study we investigated the functional organization of sublexical auditory perception with specific respect to auditory spectro-temporal processing in speech and non-speech sounds. Participants discriminated verbal and nonverbal auditory stimuli according to either spectral or temporal acoustic features in the context of a sparse event-related functional magnetic resonance imaging (fMRI) study. Based on recent models of speech processing, we hypothesized that auditory segmental processing, as is required in the discrimination of speech and non-speech sound according to its temporal features, will lead to a specific involvement of a left-hemispheric dorsal processing network comprising the posterior portion of the inferior frontal cortex and the inferior parietal lobe. In agreement with our hypothesis results revealed significant responses in the posterior part of the inferior frontal gyrus and the parietal operculum of the left hemisphere when participants had to discriminate speech and non-speech stimuli based on subtle temporal acoustic features. In contrast, when participants had to discriminate speech and non-speech stimuli on the basis of changes in the frequency content, we observed bilateral activations along the middle temporal gyrus and superior temporal sulcus. The results of the present study demonstrate an involvement of the dorsal pathway in the segmental sublexical analysis of speech sounds as well as in the segmental acoustic analysis of non-speech sounds with analogous spectro-temporal characteristics.

  5. Formation of the avian nucleus magnocellularis from the auditory anlage.

    Science.gov (United States)

    Hendricks, Susan J; Rubel, Edwin W; Nishi, Rae

    2006-10-01

    In the avian auditory system, the neural network for computing the localization of sound in space begins with bilateral innervation of nucleus laminaris (NL) by nucleus magnocellularis (NM) neurons. We used antibodies against the neural specific markers Hu C/D, neurofilament, and SV2 together with retrograde fluorescent dextran labeling from the contralateral hindbrain to identify NM neurons within the anlage and follow their development. NM neurons could be identified by retrograde labeling as early as embryonic day (E) 6. While the auditory anlage organized itself into NM and NL in a rostral-to-caudal fashion between E6 and E8, labeled NM neurons were visible throughout the extent of the anlage at E6. By observing the pattern of neuronal rearrangements together with the pattern of contralaterally projecting NM fibers, we could identify NL in the ventral anlage. Ipsilateral NM fibers contacted the developing NL at E8, well after NM collaterals had projected contralaterally. Furthermore, the formation of ipsilateral connections between NM and NL neurons appeared to coincide with the arrival of VIIIth nerve fibers in NM. By E10, immunoreactivity for SV2 was heavily concentrated in the dorsal and ventral neuropils of NL. Thus, extensive pathfinding and morphological rearrangement of central auditory nuclei occurs well before the arrival of cochlear afferents. Our results suggest that NM neurons may play a central role in formation of tonotopic connections in the auditory system.

  6. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
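
    Stimulus-specific adaptation can be illustrated with a toy two-channel model in which each tone adapts only its own frequency channel and recovers between presentations, so a rare deviant keeps evoking a large response while the common standard adapts. All parameter values in this Python sketch are invented for illustration and do not come from any particular dataset.

        import numpy as np

        def ssa_oddball(sequence, tau_trials=5.0, gain=1.0):
            """Response per tone in an oddball sequence under a minimal SSA model:
            each presentation drives a response proportional to its own channel's
            sensitivity, adapts that channel, and the channel recovers exponentially
            (time constant in trials) until the tone recurs."""
            sensitivity = {"std": 1.0, "dev": 1.0}
            last_seen = {"std": None, "dev": None}
            responses = []
            for i, tone in enumerate(sequence):
                if last_seen[tone] is not None:
                    elapsed = i - last_seen[tone]
                    sensitivity[tone] = 1.0 - (1.0 - sensitivity[tone]) * np.exp(-elapsed / tau_trials)
                responses.append(gain * sensitivity[tone])
                sensitivity[tone] *= 0.6          # adapt the channel that was just driven
                last_seen[tone] = i
            return responses

        # 90% standards, 10% deviants: the deviant keeps responding strongly while
        # the standard adapts, which is the signature of SSA.
        rng = np.random.default_rng(0)
        seq = ["dev" if rng.random() < 0.1 else "std" for _ in range(200)]
        resp = ssa_oddball(seq)
        std_mean = np.mean([r for r, s in zip(resp, seq) if s == "std"])
        dev_mean = np.mean([r for r, s in zip(resp, seq) if s == "dev"])
        print(round(std_mean, 2), round(dev_mean, 2))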

  7. Auditory adaptation improves tactile frequency perception.

    Science.gov (United States)

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

  8. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can have a profound effect on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular disease often involves the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  9. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned > pseudo-random) tracked with greater rapid statistical learning than adaptation did. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes.

  10. Multiscale mapping of frequency sweep rate in mouse auditory cortex.

    Science.gov (United States)

    Issa, John B; Haeffele, Benjamin D; Young, Eric D; Yue, David T

    2017-02-01

    Functional organization is a key feature of the neocortex that often guides studies of sensory processing, development, and plasticity. Tonotopy, which arises from the transduction properties of the cochlea, is the most widely studied organizational feature in auditory cortex; however, in order to process complex sounds, cortical regions are likely specialized for higher order features. Here, motivated by the prevalence of frequency modulations in mouse ultrasonic vocalizations and aided by the use of a multiscale imaging approach, we uncover a functional organization across the extent of auditory cortex for the rate of frequency modulated (FM) sweeps. In particular, using two-photon Ca(2+) imaging of layer 2/3 neurons, we identify a tone-insensitive region at the border of AI and AAF. This central sweep region behaves fundamentally differently from nearby neurons in AI and AII, responding preferentially to fast FM sweeps but not to tones or bandlimited noise. Together these findings define a second dimension of organization in the mouse auditory cortex for sweep rate complementary to that of tone frequency.

  11. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing.

  12. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  13. Optimizing the imaging of the monkey auditory cortex: sparse vs. continuous fMRI.

    Science.gov (United States)

    Petkov, Christopher I; Kayser, Christoph; Augath, Mark; Logothetis, Nikos K

    2009-10-01

    The noninvasive imaging of the monkey auditory system with functional magnetic resonance imaging (fMRI) can bridge the gap between electrophysiological studies in monkeys and imaging studies in humans. Some of the recent imaging of monkey auditory cortical and subcortical structures relies on a technique of "sparse imaging," which was developed in human studies to sidestep the negative influence of scanner noise by adding periods of silence in between volume acquisition. Among the various aspects that have gone into the ongoing optimization of fMRI of the monkey auditory cortex, replacing the more common continuous-imaging paradigm with sparse imaging seemed to us to make the most obvious difference in the amount of activity that we could reliably obtain from awake or anesthetized animals. Here, we directly compare the sparse- and continuous-imaging paradigms in anesthetized animals. We document a strikingly greater auditory response with sparse imaging, both quantitatively and qualitatively, which includes a more expansive and robust tonotopic organization. There were instances where continuous imaging could better reveal organizational properties that sparse imaging missed, such as aspects of the hierarchical organization of auditory cortex. We consider the choice of imaging paradigm as a key component in optimizing the fMRI of the monkey auditory cortex.

  14. Speech distortion measure based on auditory properties

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo; HU Xiulin; ZHANG Yunyu; ZHU Yaoting

    2000-01-01

    The Perceptual Spectrum Distortion (PSD) measure, based on the auditory properties of human hearing, is presented to measure speech distortion. The PSD measure calculates the speech distortion distance by simulating these auditory properties and converting the short-time speech power spectrum to an auditory perceptual spectrum. Preliminary simulation experiments in comparison with the Itakura measure have been carried out. The results show that the PSD measure is a preferable speech distortion measure and is more consistent with subjective assessment of speech quality.
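
    The record above does not spell out the PSD algorithm itself. The following sketch is only an assumed, minimal Python illustration of the general idea it describes, namely mapping a short-time power spectrum onto an auditory scale before computing a distortion distance; the mel filterbank, function names and RMS distance are illustrative stand-ins rather than the published measure.

        import numpy as np

        def hz_to_mel(f):
            # Mel scale: a common approximation of perceptual frequency spacing.
            return 2595.0 * np.log10(1.0 + f / 700.0)

        def mel_filterbank(n_filters, n_fft, sr):
            # Triangular filters spaced evenly on the mel scale (simplified design).
            mel_edges = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
            hz_edges = 700.0 * (10.0 ** (mel_edges / 2595.0) - 1.0)
            bins = np.floor((n_fft // 2) * hz_edges / (sr / 2.0)).astype(int)
            fb = np.zeros((n_filters, n_fft // 2 + 1))
            for m in range(1, n_filters + 1):
                lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
                for k in range(lo, c):
                    fb[m - 1, k] = (k - lo) / max(c - lo, 1)
                for k in range(c, hi):
                    fb[m - 1, k] = (hi - k) / max(hi - c, 1)
            return fb

        def perceptual_spectrum(frame, fb):
            # Power spectrum of one frame (length n_fft) mapped onto the auditory
            # scale and expressed in dB.
            power = np.abs(np.fft.rfft(frame)) ** 2
            return 10.0 * np.log10(fb @ power + 1e-12)

        def perceptual_spectrum_distortion(ref_frame, test_frame, fb):
            # RMS distance between the perceptual spectra of a reference and a
            # degraded frame; averaging over frames gives an utterance-level score.
            d = perceptual_spectrum(ref_frame, fb) - perceptual_spectrum(test_frame, fb)
            return float(np.sqrt(np.mean(d ** 2)))

    For example, with fb = mel_filterbank(24, 512, 16000), two 512-sample frames of clean and processed speech can be compared with perceptual_spectrum_distortion(clean, processed, fb); larger values indicate greater perceived distortion.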

  15. Auditory evoked potentials and multiple sclerosis

    OpenAIRE

    Carla Gentile Matas; Sandro Luiz de Andrade Matas; Caroline Rondina Salzano de Oliveira; Isabela Crivellaro Gonçalves

    2010-01-01

    Multiple sclerosis (MS) is an inflammatory, demyelinating disease that can affect several areas of the central nervous system. Damage along the auditory pathway can alter its integrity significantly. Therefore, it is important to investigate the auditory pathway, from the brainstem to the cortex, in individuals with MS. OBJECTIVE: The aim of this study was to characterize auditory evoked potentials in adults with MS of the remittent-recurrent type. METHOD: The study comprised 25 individuals w...

  16. Auditory Training and Its Effects upon the Auditory Discrimination and Reading Readiness of Kindergarten Children.

    Science.gov (United States)

    Cullen, Minga Mustard

    The purpose of this investigation was to evaluate the effects of a systematic auditory training program on the auditory discrimination ability and reading readiness of 55 white, middle/upper middle class kindergarten students. Following pretesting with the "Wepman Auditory Discrimination Test," "The Clymer-Barrett Prereading Battery," and the…

  17. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  18. Central auditory function of deafness genes.

    Science.gov (United States)

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  19. Towards an auditory account of speech rhythm: application of a model of the auditory 'primal sketch' to two multi-language corpora.

    Science.gov (United States)

    Lee, Christopher S; Todd, Neil P McAngus

    2004-10-01

    The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language acquisition and processing. Several questions remain, however, as to what exactly characterizes the rhythmic differences, how they are manifested at an auditory/acoustic level and how listeners, whether adult native speakers or young infants, process rhythmic information. In this paper it is proposed that the crucial determinant of rhythmic organization is the variability in the auditory prominence of phonetic events. In order to test this auditory prominence hypothesis, an auditory model is run on two multi-language data-sets, the first consisting of matched pairs of English and French sentences, and the second consisting of French, Italian, English and Dutch sentences. The model is based on a theory of the auditory primal sketch, and generates a primitive representation of an acoustic signal (the rhythmogram) which yields a crude segmentation of the speech signal and assigns prominence values to the obtained sequence of events. Its performance is compared with that of several recently proposed phonetic measures of vocalic and consonantal variability.
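
    The rhythmogram computation itself is not reproduced in this record. The Python sketch below is a rough, assumed illustration of the overall idea only (an envelope derived from the signal, whose local maxima mark events and whose peak heights serve as prominence values); all names are hypothetical, and the published auditory primal-sketch model is considerably richer.

        import numpy as np

        def crude_event_prominences(x, sr, win_ms=20.0, smooth_ms=120.0):
            # Returns (event_times_s, prominences) for a mono signal x.
            hop = int(sr * win_ms / 1000.0)
            n_frames = max(len(x) // hop, 1)
            # Short-time RMS energy envelope.
            env = np.array([np.sqrt(np.mean(x[i * hop:(i + 1) * hop] ** 2) + 1e-12)
                            for i in range(n_frames)])
            # Smooth the envelope with a moving average roughly smooth_ms long.
            k = max(int(smooth_ms / win_ms), 1)
            env = np.convolve(env, np.ones(k) / k, mode="same")
            # Local maxima of the smoothed envelope are treated as events;
            # the envelope value at each peak is taken as its prominence.
            peaks = [i for i in range(1, len(env) - 1)
                     if env[i] > env[i - 1] and env[i] >= env[i + 1]]
            times = np.array(peaks) * win_ms / 1000.0
            return times, env[peaks]

    Per the hypothesis stated above, it is the variability across such prominence values, rather than the events themselves, that would distinguish rhythm classes.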

  20. Rodent Auditory Perception: Critical Band Limitations and Plasticity

    Science.gov (United States)

    King, Julia; Insanally, Michele; Jin, Menghan; Martins, Ana Raquel O.; D'amour, James A.; Froemke, Robert C.

    2015-01-01

    What do animals hear? While it remains challenging to adequately assess sensory perception in animal models, it is important to determine perceptual abilities in model systems to understand how physiological processes and plasticity relate to perception, learning, and cognition. Here we discuss hearing in rodents, reviewing previous and recent behavioral experiments querying acoustic perception in rats and mice, and examining the relation between behavioral data and electrophysiological recordings from the central auditory system. We focus on measurements of critical bands, which are psychoacoustic phenomena that seem to have a neural basis in the functional organization of the cochlea and the inferior colliculus. We then discuss how behavioral training, brain stimulation, and neuropathology impact auditory processing and perception. PMID:25827498
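
    For orientation, the human critical bandwidth discussed in this literature is often approximated by the Zwicker and Terhardt (1980) formula; whether the study above relies on this approximation is an assumption on our part, and the rodent critical bands it measures behaviorally need not match the human values. A minimal sketch:

        def critical_bandwidth_hz(f_hz):
            # Zwicker & Terhardt (1980) approximation of the human critical
            # bandwidth (Hz) around centre frequency f_hz.
            return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

        # Roughly 100 Hz wide at low centre frequencies and about 160 Hz at 1 kHz,
        # widening further toward high frequencies.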

  1. Autosomal recessive hereditary auditory neuropathy

    Institute of Scientific and Technical Information of China (English)

    王秋菊; 顾瑞; 曹菊阳

    2003-01-01

    Objectives: Auditory neuropathy (AN) is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses (ABRs) and normal cochlear outer hair cell function as measured by otoacoustic emissions (OAEs). Many risk factors are thought to be involved in its etiology and pathophysiology. Three Chinese pedigrees with familial AN are presented herein to demonstrate involvement of genetic factors in AN etiology. Methods: Probands of the above-mentioned pedigrees, who had been diagnosed with AN, were evaluated and followed up in the Department of Otolaryngology Head and Neck Surgery, China PLA General Hospital. Their family members were studied and the pedigree diagrams were established. History of illness, physical examination, pure-tone audiometry, acoustic reflex, ABRs and transient evoked and distortion-product otoacoustic emissions (TEOAEs and DPOAEs) were obtained from members of these families. DPOAE changes under the influence of contralateral sound stimuli were observed by presenting continuous white noise to the non-recording ear to examine the function of the auditory efferent system. Some subjects received vestibular caloric testing, computed tomography (CT) scan of the temporal bone and electrocardiography (ECG) to exclude other possible neuropathy disorders. Results: In most affected subjects, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflex and ABRs. As expected in AN, these subjects exhibited near-normal cochlear outer hair cell function as shown in TEOAE & DPOAE recordings. Pure-tone audiometry revealed hearing loss ranging from mild to severe in these patients. Autosomal recessive inheritance patterns were observed in the three families. In Pedigrees Ⅰ and Ⅱ, two affected brothers were found respectively, while in pedigree Ⅲ, 2 sisters were affected. All the patients were otherwise normal without

  2. Auditory hallucinations in nonverbal quadriplegics.

    Science.gov (United States)

    Hamilton, J

    1985-11-01

    When a system for communicating with nonverbal, quadriplegic, institutionalized residents was developed, it was discovered that many were experiencing auditory hallucinations. Nine cases are presented in this study. The "voices" described have many similar characteristics, the primary one being that they give authoritarian commands that tell the residents how to behave and to which the residents feel compelled to respond. Both the relationship of this phenomenon to the theoretical work of Julian Jaynes and its effect on the lives of the residents are discussed.

  3. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  4. Mapping tonotopy in human auditory cortex

    NARCIS (Netherlands)

    van Dijk, Pim; Langers, Dave R M; Moore, BCJ; Patterson, RD; Winter, IM; Carlyon, RP; Gockel, HE

    2013-01-01

    Tonotopy is arguably the most prominent organizational principle in the auditory pathway. Nevertheless, the layout of tonotopic maps in humans is still debated. We present neuroimaging data that robustly identify multiple tonotopic maps in the bilateral auditory cortex. In contrast with some earlier

  5. Bilateral duplication of the internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu [Seoul National University College of Medicine, Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si (Korea); Koo, Ja-Won [Seoul National University College of Medicine, Department of Otolaryngology, Seoul National University Bundang Hospital, Seongnam-si (Korea)

    2007-10-15

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  6. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  7. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research ( Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M[subscript age] = 53.19 years)…

  8. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  9. Speech perception as complex auditory categorization

    Science.gov (United States)

    Holt, Lori L.

    2002-05-01

    Despite a long and rich history of categorization research in cognitive psychology, very little work has addressed the issue of complex auditory category formation. This is especially unfortunate because the general underlying cognitive and perceptual mechanisms that guide auditory category formation are of great importance to understanding speech perception. I will discuss a new methodological approach to examining complex auditory category formation that specifically addresses issues relevant to speech perception. This approach utilizes novel nonspeech sound stimuli to gain full experimental control over listeners' history of experience. As such, the course of learning is readily measurable. Results from this methodology indicate that the structure and formation of auditory categories are a function of the statistical input distributions of sound that listeners hear, aspects of the operating characteristics of the auditory system, and characteristics of the perceptual categorization system. These results have important implications for phonetic acquisition and speech perception.
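
    The dependence of category structure on the statistical input distributions can be illustrated with a toy simulation. The sketch below is hypothetical and not taken from the study: two nonspeech "categories" are defined as Gaussian distributions along a single acoustic dimension, a listener-like model estimates each distribution, and the resulting category boundary falls where the two likelihoods are equal, so changing the training distributions moves the boundary.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical exposure: each category is a Gaussian along one acoustic
        # dimension (arbitrary units, e.g. some spectral parameter).
        cat_a = rng.normal(loc=400.0, scale=50.0, size=500)
        cat_b = rng.normal(loc=600.0, scale=50.0, size=500)

        # The model stores only the mean and spread experienced for each category.
        mu_a, sd_a = cat_a.mean(), cat_a.std()
        mu_b, sd_b = cat_b.mean(), cat_b.std()

        def log_likelihood(x, mu, sd):
            return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

        def classify(x):
            # A new sound is assigned to the category with the higher likelihood.
            return "A" if log_likelihood(x, mu_a, sd_a) > log_likelihood(x, mu_b, sd_b) else "B"

        # With equal spreads the boundary sits near the midpoint of the two means
        # (about 500 here); shifting either training distribution shifts it.
        print(classify(450.0), classify(550.0))   # -> A B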

  10. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

    Full Text Available Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants, less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41 year old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type

  11. THE EFFECTS OF SALICYLATE ON AUDITORY EVOKED POTENTIAL AMPLITUDE FROM THE AUDITORY CORTEX AND AUDITORY BRAINSTEM

    Institute of Scientific and Technical Information of China (English)

    Brian Sawka; SUN Wei

    2014-01-01

    Tinnitus has often been studied using salicylate in animal models, as it can induce temporary hearing loss and tinnitus. Studies have recently observed enhancement of auditory evoked responses of the auditory cortex (AC) post salicylate treatment, which is also shown to be related to tinnitus-like behavior in rats. The aim of this study was to observe whether the enhancements of the AC post salicylate treatment are also present at structures in the brainstem. Four male Sprague Dawley rats with AC-implanted electrodes were tested for both AC and auditory brainstem response (ABR) recordings pre and post 250 mg/kg intraperitoneal injections of salicylate. The responses were recorded as the peak-to-trough amplitudes of P1-N1 (AC), ABR wave V, and ABR wave Ⅱ. AC responses showed statistically significant enhancement of amplitude at 2 hours post salicylate with 90 dB stimuli tone bursts of 4, 8, 12, and 20 kHz. Wave V of ABR responses at 90 dB showed a statistically significant reduction of amplitude 2 hours post salicylate and a mean decrease of amplitude of 31% for 16 kHz. Wave Ⅱ amplitudes at 2 hours post treatment were significantly reduced for 4, 12, and 20 kHz stimuli at 90 dB SPL. Our results suggest that the enhancement changes of the AC related to salicylate-induced tinnitus are generated superior to the level of the inferior colliculus and may originate in the AC.

  12. Relationship between Sympathetic Skin Responses and Auditory Hypersensitivity to Different Auditory Stimuli.

    Science.gov (United States)

    Kato, Fumi; Iwanaga, Ryoichiro; Chono, Mami; Fujihara, Saori; Tokunaga, Akiko; Murata, Jun; Tanaka, Koji; Nakane, Hideyuki; Tanaka, Goro

    2014-07-01

    [Purpose] Auditory hypersensitivity has been widely reported in patients with autism spectrum disorders. However, the neurological background of auditory hypersensitivity is currently not clear. The present study examined the relationship between sympathetic nervous system responses and auditory hypersensitivity induced by different types of auditory stimuli. [Methods] We exposed 20 healthy young adults to six different types of auditory stimuli. The amounts of palmar sweating resulting from the auditory stimuli were compared between groups with (hypersensitive) and without (non-hypersensitive) auditory hypersensitivity. [Results] Although no group × type of stimulus × first stimulus interaction was observed for the extent of reaction, significant type of stimulus × first stimulus interaction was noted for the extent of reaction. For an 80 dB-6,000 Hz stimulus, the trends for palmar sweating differed between the groups. For the first stimulus, the variance became larger in the hypersensitive group than in the non-hypersensitive group. [Conclusion] Subjects who regularly felt excessive reactions to auditory stimuli tended to have excessive sympathetic responses to repeated loud noises compared with subjects who did not feel excessive reactions. People with auditory hypersensitivity may be classified into several subtypes depending on their reaction patterns to auditory stimuli.

  13. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    Prediction and assessment of low-frequency noise problems requires information about the auditory filter characteristics at low-frequencies. Unfortunately, data at low-frequencies is scarce and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results......-ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. Increasing steepness at low-frequencies of the middle-ear high-pass filter is thought to cause this effect. The dynamic range of the auditory filter...
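
    For reference, the ERB (equivalent rectangular bandwidth) values that such extrapolations start from are commonly computed with the Glasberg and Moore (1990) approximation; whether this exact formula is the one referred to in the record above is an assumption. A minimal sketch:

        def erb_hz(f_hz):
            # Equivalent rectangular bandwidth (Hz) of the auditory filter centred
            # at f_hz, per the Glasberg & Moore (1990) approximation.
            return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

        # The nominal bandwidth narrows toward low frequencies, e.g.
        # erb_hz(1000.0) ~ 132.6 Hz, erb_hz(100.0) ~ 35.5 Hz, erb_hz(50.0) ~ 30.1 Hz.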

  14. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed the effects of aging on auditory-verbal memory performance.

  15. Use of auditory learning to manage listening problems in children

    OpenAIRE

    Moore, David R.; Halliday, Lorna F.; Amitay, Sygal

    2008-01-01

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers hav...

  16. Auditory free classification of nonnative speech

    Science.gov (United States)

    Atagi, Eriko; Bent, Tessa

    2013-01-01

    Through experience with speech variability, listeners build categories of indexical speech characteristics including categories for talker, gender, and dialect. The auditory free classification task—a task in which listeners freely group talkers based on audio samples—has been a useful tool for examining listeners’ representations of some of these characteristics including regional dialects and different languages. The free classification task was employed in the current study to examine the perceptual representation of nonnative speech. The category structure and salient perceptual dimensions of nonnative speech were investigated from two perspectives: general similarity and perceived native language background. Talker intelligibility and whether native talkers were included were manipulated to test stimulus set effects. Results showed that degree of accent was a highly salient feature of nonnative speech for classification based on general similarity and on perceived native language background. This salience, however, was attenuated when listeners were listening to highly intelligible stimuli and attending to the talkers’ native language backgrounds. These results suggest that the context in which nonnative speech stimuli are presented—such as the listeners’ attention to the talkers’ native language and the variability of stimulus intelligibility—can influence listeners’ perceptual organization of nonnative speech. PMID:24363470

  17. Auditory-visual spatial interaction and modularity

    Science.gov (United States)

    Radeau, M

    1994-02-01

    The results of dealing with the conditions for pairing visual and auditory data coming from spatially separate locations argue for cognitive impenetrability and computational autonomy, the pairing rules being the Gestalt principles of common fate and proximity. Other data provide evidence for pairing with several properties of modular functioning. Arguments for domain specificity are inferred from comparison with audio-visual speech. Suggestion of innate specification can be found in developmental data indicating that the grouping of visual and auditory signals is supported very early in life by the same principles that operate in adults. Support for a specific neural architecture comes from neurophysiological studies of the bimodal (auditory-visual) neurons of the cat superior colliculus. Auditory-visual pairing thus seems to present the four main properties of the Fodorian module.

  18. [Approaches to therapy of auditory agnosia].

    Science.gov (United States)

    Fechtelpeter, A; Göddenhenrich, S; Huber, W; Springer, L

    1990-01-01

    In a 41-year-old stroke patient with bitemporal brain damage, we found severe signs of auditory agnosia 6 months after onset. Recognition of environmental sounds was extremely impaired when tested in a multiple choice sound-picture matching task, whereas auditory discrimination between sounds and picture identifications by written names was almost undisturbed. In a therapy experiment, we tried to enhance sound recognition via semantic categorization and association, imitation of sound and analysis of auditory features, respectively. The stimulation of conscious auditory analysis proved to be increasingly effective over a 4-week period of therapy. We were able to show that the patient's improvement was not only a simple effect of practicing, but it was stable and carried over to nontrained items.

  19. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  20. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

    Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems, and numerous studies have investigated them. The auditory system is affected as well. The aim of this article was to review the research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library and SID search engines with the "auditory" and "omega-3" keywords and read textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can cause harmful effects on fetal and infant growth and on the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.

  1. A critical period for auditory thalamocortical connectivity

    DEFF Research Database (Denmark)

    Rinaldi Barkat, Tania; Polley, Daniel B; Hensch, Takao K

    2011-01-01

    connectivity by in vivo recordings and day-by-day voltage-sensitive dye imaging in an acute brain slice preparation. Passive tone-rearing modified response strength and topography in mouse primary auditory cortex (A1) during a brief, 3-d window, but did not alter tonotopic maps in the thalamus. Gene...... locus of change for the tonotopic plasticity. The evolving postnatal connectivity between thalamus and cortex in the days following hearing onset may therefore determine a critical period for auditory processing....

  2. [A case of carcinoma adenoides cysticum in the external auditory canal].

    Science.gov (United States)

    Soboczyński, R; Wojnowski, W

    2001-01-01

    The authors present the case of a 31-year-old woman with carcinoma adenoides cysticum of the external auditory canal. The tumor was surgically removed; after 9 months a recurrence was found, but there were no metastases to other organs. The tumor was once more surgically removed. After a year of follow-up observation, no renewal of the neoplastic process has been noticed.

  3. Organizations

    DEFF Research Database (Denmark)

    Hatch, Mary Jo

    Most of us recognize that organizations are everywhere. You meet them on every street corner in the form of families and shops, study in them, work for them, buy from them, pay taxes to them. But have you given much thought to where they came from, what they are today, and what they might become...... and considers many more. Mary Jo Hatch introduces the concept of organizations by presenting definitions and ideas drawn from a variety of subject areas including the physical sciences, economics, sociology, psychology, anthropology, literature, and the visual and performing arts. Drawing on examples from...... prehistory and everyday life, from the animal kingdom as well as from business, government, and other formal organizations, Hatch provides a lively and thought-provoking introduction to the process of organization.

  4. Intracranial Electrophysiology of Auditory Selective Attention Associated with Speech Classification Tasks

    Science.gov (United States)

    Nourski, Kirill V.; Steinschneider, Mitchell; Rhone, Ariane E.; Howard III, Matthew A.

    2017-01-01

    Auditory selective attention paradigms are powerful tools for elucidating the various stages of speech processing. This study examined electrocorticographic activation during target detection tasks within and beyond auditory cortex. Subjects were nine neurosurgical patients undergoing chronic invasive monitoring for treatment of medically refractory epilepsy. Four subjects had left hemisphere electrode coverage, four had right coverage and one had bilateral coverage. Stimuli were 300 ms complex tones or monosyllabic words, each spoken by a different male or female talker. Subjects were instructed to press a button whenever they heard a target corresponding to a specific stimulus category (e.g., tones, animals, numbers). High gamma (70–150 Hz) activity was simultaneously recorded from Heschl’s gyrus (HG), superior, middle temporal and supramarginal gyri (STG, MTG, SMG), as well as prefrontal cortex (PFC). Data analysis focused on: (1) task effects (non-target words in tone detection vs. semantic categorization task); and (2) target effects (words as target vs. non-target during semantic classification). Responses within posteromedial HG (auditory core cortex) were minimally modulated by task and target. Non-core auditory cortex (anterolateral HG and lateral STG) exhibited sensitivity to task, with a smaller proportion of sites showing target effects. Auditory-related areas (MTG and SMG) and PFC showed both target and, to a lesser extent, task effects, that occurred later than those in the auditory cortex. Significant task and target effects were more prominent in the left hemisphere than in the right. Findings demonstrate a hierarchical organization of speech processing during auditory selective attention. PMID:28119593

  5. A corollary discharge mechanism modulates central auditory processing in singing crickets.

    Science.gov (United States)

    Poulet, J F A; Hedwig, B

    2003-03-01

    Crickets communicate using loud (100 dB SPL) sound signals that could adversely affect their own auditory system. To examine how they cope with this self-generated acoustic stimulation, intracellular recordings were made from auditory afferent neurons and an identified auditory interneuron-the Omega 1 neuron (ON1)-during pharmacologically elicited singing (stridulation). During sonorous stridulation, the auditory afferents and ON1 responded with bursts of spikes to the crickets' own song. When the crickets were stridulating silently, after one wing had been removed, only a few spikes were recorded in the afferents and ON1. Primary afferent depolarizations (PADs) occurred in the terminals of the auditory afferents, and inhibitory postsynaptic potentials (IPSPs) were apparent in ON1. The PADs and IPSPs were composed of many summed, small-amplitude potentials that occurred at a rate of about 230 Hz. The PADs and the IPSPs started during the closing wing movement and peaked in amplitude during the subsequent opening wing movement. As a consequence, during silent stridulation, ON1's response to acoustic stimuli was maximally inhibited during wing opening. Inhibition coincides with the time when ON1 would otherwise be most strongly excited by self-generated sounds in a sonorously stridulating cricket. The PADs and the IPSPs persisted in fictively stridulating crickets whose ventral nerve cord had been isolated from muscles and sense organs. This strongly suggests that the inhibition of the auditory pathway is the result of a corollary discharge from the stridulation motor network. The central inhibition was mimicked by hyperpolarizing current injection into ON1 while it was responding to a 100 dB SPL sound pulse. This suppressed its spiking response to the acoustic stimulus and maintained its response to subsequent, quieter stimuli. The corollary discharge therefore prevents auditory desensitization in stridulating crickets and allows the animals to respond to external

  6. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS at the higher level of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable/da/, with duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed a deficient neural timing in the early stages of the auditory pathway consistent with temporal processing deficits and their abnormal timing may underlie to their disfluency.

  7. Auditory processing in fragile x syndrome.

    Science.gov (United States)

    Rotschafer, Sarah E; Razak, Khaleel A

    2014-01-01

    Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle response is seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. Given that similarly abnormal responses are present in Fmr1 KO mice suggests that cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  8. Auditory Processing in Fragile X Syndrome

    Directory of Open Access Journals (Sweden)

    Sarah E Rotschafer

    2014-02-01

    Full Text Available Fragile X syndrome (FXS is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle is also seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. Given that similarly abnormal responses are present in Fmr1 KO mice suggests that cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  9. Auditory model inversion and its application

    Institute of Scientific and Technical Information of China (English)

    ZHAO Heming; WANG Yongqi; CHEN Xueqin

    2005-01-01

    Auditory models have been applied to several aspects of speech signal processing and have proved effective. This paper presents the inverse transform of each stage of one widely used auditory model. First, the correlogram is inverted and phase information is reconstructed by repeated iterations in order to recover the auditory-nerve firing rate. The next step is to recover the negative parts of the signal by reversing the half-wave rectification (HWR). Finally, the inner hair cell/synapse model and the Gammatone filters are inverted. Thus the whole auditory model inversion is achieved. An application to noisy speech enhancement based on the auditory model inversion algorithm is proposed. Many experiments show that this method is effective in reducing noise, especially when the SNR of the noisy speech is low, where it outperforms other methods. This auditory model inversion method is therefore applicable to speech enhancement.
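
    The record above lists the stages that are inverted but not how. As a rough illustration of the filterbank stage only, the Python sketch below builds a small gammatone filterbank and approximately inverts it by re-filtering each band with the time-reversed impulse response and summing across bands; the correlogram inversion, iterative phase reconstruction and hair cell/synapse inversion described in the paper are not reproduced, and all names are illustrative.

        import numpy as np

        def gammatone_ir(fc, sr, order=4, duration=0.05):
            # Impulse response of a gammatone filter centred at fc (Hz); the
            # bandwidth parameter is tied to the ERB at fc (Glasberg & Moore).
            t = np.arange(int(duration * sr)) / sr
            erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)
            b = 1.019 * erb
            g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
            return g / np.sqrt(np.sum(g ** 2) + 1e-12)   # crude energy normalisation

        def analyse(x, sr, centre_freqs):
            # One band signal per centre frequency (rows of the returned array).
            return np.stack([np.convolve(x, gammatone_ir(fc, sr), mode="same")
                             for fc in centre_freqs])

        def crude_resynthesis(bands, sr, centre_freqs):
            # Approximate inversion: filter each band again with the time-reversed
            # impulse response (zero phase overall) and sum across bands.
            out = np.zeros(bands.shape[1])
            for band, fc in zip(bands, centre_freqs):
                out += np.convolve(band, gammatone_ir(fc, sr)[::-1], mode="same")
            return out

    With centre frequencies spaced densely enough to cover the speech band, crude_resynthesis(analyse(x, sr, cfs), sr, cfs) returns a signal close to a scaled, band-limited copy of x, which illustrates why the filterbank stage is invertible in principle.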

  10. Auditory dysfunction associated with solvent exposure

    Directory of Open Access Journals (Sweden)

    Fuente Adrian

    2013-01-01

    Full Text Available Abstract Background A number of studies have demonstrated that solvents may induce auditory dysfunction. However, there is still little knowledge regarding the main signs and symptoms of solvent-induced hearing loss (SIHL. The aim of this research was to investigate the association between solvent exposure and adverse effects on peripheral and central auditory functioning with a comprehensive audiological test battery. Methods Seventy-two solvent-exposed workers and 72 non-exposed workers were selected to participate in the study. The test battery comprised pure-tone audiometry (PTA, transient evoked otoacoustic emissions (TEOAE, Random Gap Detection (RGD and Hearing-in-Noise test (HINT. Results Solvent-exposed subjects presented with poorer mean test results than non-exposed subjects. A bivariate and multivariate linear regression model analysis was performed. One model for each auditory outcome (PTA, TEOAE, RGD and HINT was independently constructed. For all of the models solvent exposure was significantly associated with the auditory outcome. Age also appeared significantly associated with some auditory outcomes. Conclusions This study provides further evidence of the possible adverse effect of solvents on the peripheral and central auditory functioning. A discussion of these effects and the utility of selected hearing tests to assess SIHL is addressed.

  11. Long Latency Auditory Evoked Potentials during Meditation.

    Science.gov (United States)

    Telles, Shirley; Deepeshwar, Singh; Naveen, Kalkuni Visweswaraiah; Pailoor, Subramanya

    2015-10-01

    The auditory sensory pathway has been studied in meditators, using midlatency and short latency auditory evoked potentials. The present study evaluated long latency auditory evoked potentials (LLAEPs) during meditation. Sixty male participants, aged between 18 and 31 years (group mean±SD, 20.5±3.8 years), were assessed in 4 mental states based on descriptions in the traditional texts. They were (a) random thinking, (b) nonmeditative focusing, (c) meditative focusing, and (d) meditation. The order of the sessions was randomly assigned. The LLAEP components studied were P1 (40-60 ms), N1 (75-115 ms), P2 (120-180 ms), and N2 (180-280 ms). For each component, the peak amplitude and peak latency were measured from the prestimulus baseline. There was a significant decrease in the peak latency of the P2 component during and after meditation. These results suggest that meditation facilitates the processing of information in the auditory association cortex, whereas the number of neurons recruited was smaller in random thinking and non-meditative focused thinking, at the level of the secondary auditory cortex, auditory association cortex and anterior cingulate cortex.

  12. Adult-onset juvenile xanthogranuloma of the external auditory canal: A case report

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Joon Ho; Kim, Jae Kyun; Seo, Gi Young; Choi, Woo Sun; Byun, Jun Soo; Lee, Woong Jae; Lee, Tae Jin [Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul (Korea, Republic of); Kim, Na Ra [Dept. of Radiology, Samsung Medical Center, Sungkyunkwan University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    Juvenile xanthogranuloma (JXG) is a benign, spontaneously regressing lesion that usually occurs during the first year of life, but may also occur in adulthood. Although the most common presentation of JXG is the cutaneous lesion, it can also manifest in various visceral organs. JXG of the external auditory canal is extremely rare, and there have been only a few reports of those cases in the English literature. In this study, we present a case of pathologically proven JXG that occurred in the external auditory canal with a symptomatic clinical presentation.

  13. Entropical Aspects in Auditory Processes and Psychoacoustical Law of Weber-Fechner

    Science.gov (United States)

    Cosma, I.; Popescu, D. I.

    In the hearing sense, mechanoreceptors fire action potentials when their membranes are physically stretched. Based on statistical physics, we analyzed the entropic aspects of auditory processing. We develop a model that connects the logarithm of the relative intensity of sound (loudness) to the level of energy disorder within the cellular sensory system. The increase of entropy and disorder in the system is connected to the free energy available to signal the production of action potentials in the inner hair cells of the vestibulocochlear auditory organ.
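
    For reference, the Weber-Fechner law referred to in the title states that perceived magnitude grows with the logarithm of stimulus intensity relative to a threshold intensity; the entropy-based derivation is the authors' own and is not reproduced here. A minimal sketch with an arbitrary scaling constant:

        import numpy as np

        def weber_fechner_sensation(intensity, threshold_intensity, k=1.0):
            # Perceived magnitude grows with the logarithm of the stimulus
            # intensity relative to the threshold intensity; k is modality-dependent
            # and set to 1 here purely for illustration.
            return k * np.log(intensity / threshold_intensity)

        # For sound, this logarithmic compression is why intensity is normally
        # quoted on the decibel scale: level_dB = 10 * log10(I / I0).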

  14. [Presbycusis: neural degeneration and aging on the auditory receptor of C57/BL6J mice].

    Science.gov (United States)

    Castillo, E; Carricondo, F; Bartolomé, M V; Vicente-Torres, A; Poch Broto, J; Gil-Loyzaga, P

    2006-11-01

    Presbycusis is a progressive hearing impairment associated with aging, characterized by hearing loss and a degeneration of cochlear structures. In this paper we analyze the effects of aging on the auditory system of C57/BL6J mice, with electrophysiological and morphological studies. With this aim, the auditory potentials of mice aged 1, 3, 6, 9, 12, 15, 18, 21 and 24 months were recorded, and then the morphology of the cochleae was analyzed. Auditory potentials revealed an increase in wave latencies, as well as a decrease in their amplitudes, during aging. Morphological results showed total degeneration of the organ of Corti, which was replaced by a flat epithelial layer, and a total absence of hair cells.

  15. Membrane potential dynamics of populations of cortical neurons during auditory streaming.

    Science.gov (United States)

    Farley, Brandon J; Noreña, Arnaud J

    2015-10-01

    How a mixture of acoustic sources is perceptually organized into discrete auditory objects remains unclear. One current hypothesis postulates that perceptual segregation of different sources is related to the spatiotemporal separation of cortical responses induced by each acoustic source or stream. In the present study, the dynamics of subthreshold membrane potential activity were measured across the entire tonotopic axis of the rodent primary auditory cortex during the auditory streaming paradigm using voltage-sensitive dye imaging. Consistent with the proposed hypothesis, we observed enhanced spatiotemporal segregation of cortical responses to alternating tone sequences as their frequency separation or presentation rate was increased, both manipulations known to promote stream segregation. However, across most streaming paradigm conditions tested, a substantial cortical region maintaining a response to both tones coexisted with more peripheral cortical regions responding more selectively to one of them. We propose that these coexisting subthreshold representation types could provide neural substrates to support the flexible switching between the integrated and segregated streaming percepts.
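
    For readers unfamiliar with the streaming paradigm used above, the sketch below generates a generic alternating two-tone (A-B) sequence whose frequency separation and presentation rate can be varied, the two manipulations the record mentions. It is an illustration only; the function name, parameter values, and ramping choices are assumptions and do not reproduce the authors' stimuli.

        import numpy as np

        def streaming_sequence(f_a=1000.0, delta_semitones=6, rate_hz=8.0,
                               tone_ms=50, n_tones=20, fs=44100):
            """Alternating A-B tone sequence with a given frequency separation
            (in semitones) and presentation rate (tones per second)."""
            f_b = f_a * 2 ** (delta_semitones / 12.0)      # B tone above A
            onset_step = int(fs / rate_hz)                  # samples between tone onsets
            tone_len = int(fs * tone_ms / 1000.0)
            t = np.arange(tone_len) / fs
            ramp = np.hanning(2 * int(0.005 * fs))          # 5-ms on/off ramps
            env = np.ones(tone_len)
            env[:len(ramp) // 2] = ramp[:len(ramp) // 2]
            env[-(len(ramp) // 2):] = ramp[len(ramp) // 2:]
            seq = np.zeros(onset_step * n_tones + tone_len)
            for i in range(n_tones):
                f = f_a if i % 2 == 0 else f_b              # alternate A and B
                seq[i * onset_step : i * onset_step + tone_len] += env * np.sin(2 * np.pi * f * t)
            return seq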

  16. Optogenetic stimulation of the auditory pathway for research and future prosthetics.

    Science.gov (United States)

    Moser, Tobias

    2015-10-01

    Sound is encoded by spiral ganglion neurons (SGNs) in the hearing organ, the cochlea, with great temporal, spectral and intensity resolution. When hearing fails, electric stimulation by implanted prostheses can partially restore hearing. Optical stimulation promises a fundamental advance of hearing restoration over electric prostheses since light can be conveniently focused and hence might dramatically improve frequency resolution of sound encoding. Combining optogenetic manipulation of neurons with innovative optical stimulation technology promises versatile spatiotemporal stimulation patterns in the auditory system. Therefore, using optical stimulation of SGNs also has great potential for auditory research. Here, I review recent progress in optogenetic stimulation of the auditory system and its potential for future application in research and hearing restoration.

  17. Morphological and physiological regeneration in the auditory system of adult Mecopoda elongata (Orthoptera: Tettigoniidae).

    Science.gov (United States)

    Krüger, Silke; Butler, Casey S; Lakes-Harlan, Reinhard

    2011-02-01

    Orthopterans are suitable model organisms for investigations of regeneration mechanisms in the auditory system. Regeneration has been described in the auditory systems of locusts (Caelifera) and of crickets (Ensifera). In this study, we comparatively investigate the neural regeneration in the auditory system in the bush cricket Mecopoda elongata. A crushing of the tympanal nerve in the foreleg of M. elongata results in a loss of auditory information transfer. Physiological recordings of the tympanal nerve suggest outgrowing fibers 5 days after crushing. An anatomical regeneration of the fibers within the central nervous system starts 10 days after crushing. The neuronal projection reaches the target area at day 20. Threshold values to low frequency airborne sound remain high after crushing, indicating a lower regeneration capability of this group of fibers. However, within the central target area the low frequency areas are also innervated. Recordings of auditory interneurons show that the regenerating fibers form new functional connections starting at day 20 after crushing.

  18. Auditory function in vestibular migraine

    Directory of Open Access Journals (Sweden)

    John Mathew

    2016-01-01

    Full Text Available Introduction: Vestibular migraine (VM) is a vestibular syndrome seen in patients with migraine and is characterized by short spells of spontaneous or positional vertigo lasting from a few seconds to weeks. Migraine and VM are considered to result from chemical abnormalities in the serotonin pathway. Neuhauser's diagnostic criteria for vestibular migraine are widely accepted. Research on VM is still limited and few studies have been published on this topic. Materials and Methods: This study has two parts. In the first part, we did a retrospective chart review of eighty consecutive patients who were diagnosed with vestibular migraine and determined the frequency of auditory dysfunction in these patients. The second part was a prospective case-control study in which we compared the audiological parameters of thirty patients diagnosed with VM with thirty normal controls to look for any significant differences. Results: The frequency of vestibular migraine in our population is 22%. The frequency of hearing loss in VM is 33%. Conclusion: There is a significant difference between cases and controls with regard to the presence of distortion product otoacoustic emissions in both ears. This finding suggests that the hearing loss in VM is cochlear in origin.

  19. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Abstract Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.
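
    The repeating frozen-noise stimuli described above can be illustrated by generating one noise segment whose length is set by the repetition rate and tiling it in time; white noise is simply the case in which no segment repeats. This is a hedged sketch; the function and parameter names are assumptions, not the authors' stimulus code.

        import numpy as np

        def repeating_frozen_noise(repetition_hz=10.0, duration_s=1.0, fs=44100, seed=0):
            """One 'frozen' noise token repeated at repetition_hz for duration_s."""
            rng = np.random.default_rng(seed)
            segment_len = int(round(fs / repetition_hz))    # samples per repeated token
            segment = rng.standard_normal(segment_len)      # the frozen token
            n_repeats = int(np.ceil(duration_s * fs / segment_len))
            stimulus = np.tile(segment, n_repeats)[: int(duration_s * fs)]
            return stimulus / np.max(np.abs(stimulus))      # peak-normalize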

  20. Frequency Transformation in the Auditory Lemniscal Thalamocortical System.

    Directory of Open Access Journals (Sweden)

    Kazuo eImaizumi

    2014-07-01

    Full Text Available The auditory lemniscal thalamocortical (TC) pathway conveys information from the ventral division of the medial geniculate body to the primary auditory cortex (A1). Although their general topographic organization has been well characterized, functional transformations at the lemniscal TC synapse remain incompletely codified, largely due to the need to integrate functional anatomical results with the variability observed across animal models and experimental techniques. In this review, we discuss these issues with classical approaches, such as in vivo extracellular recordings and tracer injections into physiologically identified areas of A1, and then compare these studies with modern approaches, such as in vivo two-photon calcium imaging, in vivo whole-cell recordings, optogenetic methods, and in vitro methods using slice preparations. A surprising finding from the comparison of classical and modern approaches is the similar degree of convergence from thalamic neurons to single A1 neurons and to clusters of A1 neurons, although thalamic convergence onto single A1 neurons is restricted to more limited areas within putative thalamic frequency laminae. These comparisons suggest that frequency convergence from thalamic input to A1 is functionally limited. Finally, we consider the synaptic organization of TC projections and future directions for research.

  1. Current status of auditory aging and anti-aging research.

    Science.gov (United States)

    Ruan, Qingwei; Ma, Cheng; Zhang, Ruxin; Yu, Zhuowei

    2014-01-01

    The development of presbycusis, or age-related hearing loss, is determined by a combination of genetic and environmental factors. The auditory periphery exhibits a progressive bilateral, symmetrical reduction of auditory sensitivity to sound from high to low frequencies. The central auditory nervous system shows age-related decline in cognitive abilities, including difficulties in speech discrimination and reduced central auditory processing, ultimately resulting in auditory perceptual abnormalities. The pathophysiological mechanisms of presbycusis include excitotoxicity, oxidative stress, inflammation, aging and oxidative stress-induced DNA damage that results in apoptosis in the auditory pathway. However, the originating signals that trigger these mechanisms remain unclear. For instance, it is still unknown whether insulin is involved in auditory aging. Auditory aging has preclinical lesions, which manifest as asymptomatic loss of peripheral auditory nerves and changes in the plasticity of the central auditory nervous system. Currently, the diagnosis of preclinical, reversible lesions depends on the detection of auditory impairment by functional imaging, and the identification of physiological and molecular biological markers. However, despite recent improvements in the application of these markers, they remain under-utilized in clinical practice. The application of antisenescent approaches to the prevention of auditory aging has produced inconsistent results. Future research will focus on the identification of markers for the diagnosis of preclinical auditory aging and the development of effective interventions.

  2. Experience and information loss in auditory and visual memory.

    Science.gov (United States)

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  3. [Physiognomy-accompanying auditory hallucinations in schizophrenia: psychopathological investigation of 10 patients].

    Science.gov (United States)

    Nagashima, Hideaki; Kobayashi, Toshiyuki

    2010-01-01

    hallucinations of the speaker's face in "physiognomy-accompanying auditory hallucinations" has the same structural nature as typical auditory hallucinations in schizophrenia. Further, the symptoms differ from organic hallucinations caused by pathology of consciousness. After onset, the patients seem to restructure their living world to coexist with the symptoms. What we can do in treatment is to regard coexistence with the symptoms as a way of life, to consider the roles of the symptoms in the patients' world, and to explore the possibility that the patients may be able to live without depending on the pathological world.

  4. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Full Text Available Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the level of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may alter conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2 and 3 mg/kg BW caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed by Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared to the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg BW caffeine. Wave I latency decreased significantly after 3 mg/kg BW caffeine consumption (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blockade brings about changes in conduction in the central auditory pathway.

  5. Facilitated auditory detection for speech sounds.

    Science.gov (United States)

    Signoret, Carine; Gaudrain, Etienne; Tillmann, Barbara; Grimault, Nicolas; Perrin, Fabien

    2011-01-01

    If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo-words, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub-threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo-words) were better detected than non-phonological stimuli (complex sounds), presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo-words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.
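
    The record notes that the word, pseudo-word, and complex-sound stimuli were energetically matched as closely as possible. One common way to equate stimuli energetically is to scale them to a common RMS level, as in the sketch below; the study does not specify its exact matching procedure, so this is only an assumed illustration.

        import numpy as np

        def match_rms(signal, reference):
            """Scale `signal` so that its RMS level equals that of `reference`."""
            rms_sig = np.sqrt(np.mean(signal ** 2))
            rms_ref = np.sqrt(np.mean(reference ** 2))
            return signal * (rms_ref / rms_sig)             # assumes rms_sig > 0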

  6. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

    Full Text Available If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo words and complex non phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2 that was followed by a two alternative forced choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo words were better detected than non phonological stimuli (complex sounds, presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non speech processing could not be attributed to energetic differences in the stimuli.

  7. Absence of auditory 'global interference' in autism.

    Science.gov (United States)

    Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D

    2003-12-01

    There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.

  8. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood (

  9. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent...

  10. Auditory imagery and the poor-pitch singer.

    Science.gov (United States)

    Pfordresher, Peter Q; Halpern, Andrea R

    2013-08-01

    The vocal imitation of pitch by singing requires one to plan laryngeal movements on the basis of anticipated target pitch events. This process may rely on auditory imagery, which has been shown to activate motor planning areas. As such, we hypothesized that poor-pitch singing, although not typically associated with deficient pitch perception, may be associated with deficient auditory imagery. Participants vocally imitated simple pitch sequences by singing, discriminated pitch pairs on the basis of pitch height, and completed an auditory imagery self-report questionnaire (the Bucknell Auditory Imagery Scale). The percentage of trials participants sung in tune correlated significantly with self-reports of vividness for auditory imagery, although not with the ability to control auditory imagery. Pitch discrimination was not predicted by auditory imagery scores. The results thus support a link between auditory imagery and vocal imitation.

  11. Intradermal melanocytic nevus of the external auditory canal.

    Science.gov (United States)

    Alves, Renato V; Brandão, Fabiano H; Aquino, José E P; Carvalho, Maria R M S; Giancoli, Suzana M; Younes, Eduado A P

    2005-01-01

    Intradermal nevi are common benign pigmented skin tumors. Their occurrence within the external auditory canal is uncommon. The clinical and pathologic features of an intradermal nevus arising within the external auditory canal are presented, and the literature reviewed.

  12. Therapeutic potential of stem cells in auditory hair cell repair

    Directory of Open Access Journals (Sweden)

    Ryuji Hata

    2009-01-01

    Full Text Available The prevalence of acquired hearing loss is very high. About 10% of the total population and more than one third of the population over 65 years suffer from debilitating hearing loss. The most common type of hearing loss in adults is idiopathic sudden sensorineural hearing loss (ISSHL). In the majority of cases, ISSHL is permanent and typically associated with loss of sensory hair cells in the organ of Corti. Following the loss of sensory hair cells, the auditory neurons undergo secondary degeneration. Sensory hair cells and auditory neurons do not regenerate throughout life, and loss of these cells is irreversible and cumulative. However, recent advances in stem cell biology have raised hope that stem cell therapy may come closer to regenerating sensory hair cells in humans. A major advance in the prospects for the use of stem cells to restore normal hearing comes with the recent discovery that hair cells can be generated ex vivo from embryonic stem (ES) cells, adult inner ear stem cells and neural stem cells. Furthermore, there is increasing evidence that stem cells can promote damaged cell repair in part by secreting diffusible molecules such as growth factors. These results suggest that stem-cell-based treatment regimens may be applicable to the damaged inner ear in future clinical applications. Previously, we established an animal model of cochlear ischemia in gerbils and showed progressive hair cell loss up to 4 days after ischemia. Auditory brainstem response (ABR) recordings have demonstrated that this gerbil model displays severe deafness just after cochlear ischemia and gradually recovers thereafter. These pathological findings and clinical manifestations are reminiscent of ISSHL in humans. In this study, we have shown the effectiveness of stem cell therapy by using this animal model of ISSHL.

  13. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

    Full Text Available Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state endings even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1 and speech distractors (Experiment 2. Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3, indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4. Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: Disruption decreased markedly over the course of the experiments indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

  14. ABR and auditory P300 findings in children with ADHD

    OpenAIRE

    Schochat Eliane; Scheuer Claudia Ines; Andrade Ênio Roberto de

    2002-01-01

    Auditory processing disorders (APD), also referred to as central auditory processing disorders (CAPD), and attention deficit hyperactivity disorder (ADHD) have become popular diagnostic entities for school-age children. A high incidence of comorbidity between ADHD and communication disorders and auditory processing disorder has been demonstrated. The aim of this study was to investigate ABR and P300 auditory evoked potentials in children with ADHD, in a double-blind study. Twenty-one children, ages bet...

  15. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning, as they practiced novel melodies, and retrieval (during Recall of those melodies. Pianists learned melodies by listening without performing (auditory learning or performing without sound (motor learning; following Learning, pianists performed the melodies from memory with auditory feedback (Recall. During either Learning (Experiment 1 or Recall (Experiment 2, pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced and temporal regularity (variability of quarter-note interonset intervals were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2. Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1: Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2: Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  16. Are auditory percepts determined by experience?

    Science.gov (United States)

    Monson, Brian B; Han, Shui'Er; Purves, Dale

    2013-01-01

    Audition--what listeners hear--is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.

  17. Are auditory percepts determined by experience?

    Directory of Open Access Journals (Sweden)

    Brian B Monson

    Full Text Available Audition--what listeners hear--is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.

  18. Phonetic categorization in auditory word perception.

    Science.gov (United States)

    Ganong, W F

    1980-02-01

    To investigate the interaction in speech perception of auditory information and lexical knowledge (in particular, knowledge of which phonetic sequences are words), acoustic continua varying in voice onset time were constructed so that for each acoustic continuum, one of the two possible phonetic categorizations made a word and the other did not. For example, one continuum ranged between the word dash and the nonword tash; another used the nonword dask and the word task. In two experiments, subjects showed a significant lexical effect--that is, a tendency to make phonetic categorizations that make words. This lexical effect was greater at the phoneme boundary (where auditory information is ambiguous) than at the ends of the continua. Hence the lexical effect must arise at a stage of processing sensitive to both lexical knowledge and auditory information.

  19. [Functional neuroimaging of auditory hallucinations in schizophrenia].

    Science.gov (United States)

    Font, M; Parellada, E; Fernández-Egea, E; Bernardo, M; Lomeña, F

    2003-01-01

    The neurobiological bases underlying the generation of auditory hallucinations, a distressing and paradigmatic symptom of schizophrenia, are still unknown in spite of in-depth phenomenological descriptions. This work aims to make a critical review of the literature published in recent years, focusing on functional neuroimaging studies (PET, SPECT, fMRI) of auditory hallucinations. The studies are classified according to whether they address sensory activation, trait, or state. The two main hypotheses proposed to explain the phenomenon, external speech vs. subvocal or inner speech, are also explained. Finally, the latest unitary theory as well as the limitations of the published studies are commented on. The need to continue investigating this still underdeveloped field is posed in order to better understand the etiopathogenesis of auditory hallucinations in schizophrenia.

  20. The mitochondrial connection in auditory neuropathy.

    Science.gov (United States)

    Cacace, Anthony T; Pinheiro, Joaquim M B

    2011-01-01

    'Auditory neuropathy' (AN), the term used to codify a primary degeneration of the auditory nerve, can be linked directly or indirectly to mitochondrial dysfunction. These observations are based on the expression of AN in known mitochondrial-based neurological diseases (Friedreich's ataxia, Mohr-Tranebjærg syndrome), in conditions where defects in axonal transport, protein trafficking, and fusion processes perturb and/or disrupt mitochondrial dynamics (Charcot-Marie-Tooth disease, autosomal dominant optic atrophy), in a common neonatal condition known to be toxic to mitochondria (hyperbilirubinemia), and where respiratory chain deficiencies produce reductions in oxidative phosphorylation that adversely affect peripheral auditory mechanisms. This body of evidence is solidified by data derived from temporal bone and genetic studies, biochemical, molecular biologic, behavioral, electroacoustic, and electrophysiological investigations.

  1. The auditory hallucination: a phenomenological survey.

    Science.gov (United States)

    Nayani, T H; David, A S

    1996-01-01

    A comprehensive semi-structured questionnaire was administered to 100 psychotic patients who had experienced auditory hallucinations. The aim was to extend the phenomenology of the hallucination into areas of both form and content and also to guide future theoretical development. All subjects heard 'voices' talking to or about them. The location of the voice, its characteristics and the nature of address were described. Precipitants and alleviating factors plus the effect of the hallucinations on the sufferer were identified. Other hallucinatory experiences, thought insertion and insight were examined for their inter-relationships. A pattern emerged of increasing complexity of the auditory-verbal hallucination over time by a process of accretion, with the addition of more voices and extended dialogues, and more intimacy between subject and voice. Such evolution seemed to relate to the lessening of distress and improved coping. These findings should inform both neurological and cognitive accounts of the pathogenesis of auditory hallucinations in psychotic disorders.

  2. Cooperative dynamics in auditory brain response

    CERN Document Server

    Kwapien, J; Liu, L C; Ioannides, A A

    1998-01-01

    Simultaneous estimates of the activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right and binaural stimulation were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analysed using the concept of mutual information. The analysis constitutes an objective method to address the nature of inter-hemispheric correlations in response to auditory stimulation. The results provide clear evidence for the occurrence of such correlations mediated by a direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the inter-hemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
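
    The mutual-information analysis mentioned in the record can be illustrated with a simple histogram estimator applied to two equally long activity time series; scanning the estimate over relative lags between the series would expose a lead of one hemisphere over the other, such as the 10-20 ms described above. This is a generic sketch, not the authors' implementation, and the bin count is an arbitrary assumption.

        import numpy as np

        def mutual_information(x, y, bins=16):
            """Histogram estimate of the mutual information (bits) between x and y."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)             # marginal of x
            py = pxy.sum(axis=0, keepdims=True)             # marginal of y
            nz = pxy > 0                                     # avoid log(0)
            return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))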

  3. Auditory temporal processes in the elderly

    Directory of Open Access Journals (Sweden)

    E. Ben-Artzi

    2011-03-01

    Full Text Available Several studies have reported age-related decline in auditory temporal resolution and in working memory. However, earlier studies did not provide evidence as to whether these declines reflect overall changes in the same mechanisms, or reflect age-related changes in two independent mechanisms. In the current study we examined whether the age-related decline in auditory temporal resolution and in working memory would remain significant even after controlling for their shared variance. Eighty-two participants, aged 21-82 performed the dichotic temporal order judgment task and the backward digit span task. The findings indicate that age-related decline in auditory temporal resolution and in working memory are two independent processes.

  4. Do dyslexics have auditory input processing difficulties?

    DEFF Research Database (Denmark)

    Poulsen, Mads

    2011-01-01

    Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether a group of dyslexics had word-level receptive difficulties, using an auditory lexical decision task with long words and nonsense words. The dyslexics were slower and less accurate than chronological-age controls in the auditory lexical decision task, with disproportionately low performance on nonsense words...

  5. The many facets of auditory display

    Science.gov (United States)

    Blattner, Meera M.

    1995-01-01

    In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences include: music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.

  6. Transient auditory hallucinations in an adolescent.

    Science.gov (United States)

    Skokauskas, Norbert; Pillay, Devina; Moran, Tom; Kahn, David A

    2010-05-01

    In adolescents, hallucinations can be a transient illness or can be associated with non-psychotic psychopathology, psychosocial adversity, or a physical illness. We present the case of a 15-year-old secondary-school student who presented with a 1-month history of first onset auditory hallucinations, which had been increasing in frequency and severity, and mild paranoid ideation. Over a 10-week period, there was a gradual diminution, followed by a complete resolution, of symptoms. We discuss issues regarding the diagnosis and prognosis of auditory hallucinations in adolescents.

  7. The role of the auditory brainstem in processing musically relevant pitch.

    Science.gov (United States)

    Bidelman, Gavin M

    2013-01-01

    Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  8. The role of the auditory brainstem in processing musically-relevant pitch

    Directory of Open Access Journals (Sweden)

    Gavin M. Bidelman

    2013-05-01

    Full Text Available Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  9. The auditory representation of speech sounds in human motor cortex

    Science.gov (United States)

    Cheung, Connie; Hamilton, Liberty S; Johnson, Keith; Chang, Edward F

    2016-01-01

    In humans, listening to speech evokes neural responses in the motor cortex. This has been controversially interpreted as evidence that speech sounds are processed as articulatory gestures. However, it is unclear what information is actually encoded by such neural activity. We used high-density direct human cortical recordings while participants spoke and listened to speech sounds. Motor cortex neural patterns during listening were substantially different than during articulation of the same sounds. During listening, we observed neural activity in the superior and inferior regions of ventral motor cortex. During speaking, responses were distributed throughout somatotopic representations of speech articulators in motor cortex. The structure of responses in motor cortex during listening was organized along acoustic features similar to auditory cortex, rather than along articulatory features as during speaking. Motor cortex does not contain articulatory representations of perceived actions in speech, but rather, represents auditory vocal information. DOI: http://dx.doi.org/10.7554/eLife.12577.001 PMID:26943778

  10. Temporal coding by populations of auditory receptor neurons.

    Science.gov (United States)

    Sabourin, Patrick; Pollack, Gerald S

    2010-03-01

    Auditory receptor neurons of crickets are most sensitive to either low or high sound frequencies. Earlier work showed that the temporal coding properties of first-order auditory interneurons are matched to the temporal characteristics of natural low- and high-frequency stimuli (cricket songs and bat echolocation calls, respectively). We studied the temporal coding properties of receptor neurons and used modeling to investigate how activity within populations of low- and high-frequency receptors might contribute to the coding properties of interneurons. We confirm earlier findings that individual low-frequency-tuned receptors code stimulus temporal pattern poorly, but show that coding performance of a receptor population increases markedly with population size, due in part to low redundancy among the spike trains of different receptors. By contrast, individual high-frequency-tuned receptors code a stimulus temporal pattern fairly well and, because their spike trains are redundant, there is only a slight increase in coding performance with population size. The coding properties of low- and high-frequency receptor populations resemble those of interneurons in response to low- and high-frequency stimuli, suggesting that coding at the interneuron level is partly determined by the nature and organization of afferent input. Consistent with this, the sound-frequency-specific coding properties of an interneuron, previously demonstrated by analyzing its spike train, are also apparent in the subthreshold fluctuations in membrane potential that are generated by synaptic input from receptor neurons.

  11. Neurodynamics, tonality, and the auditory brainstem response.

    Science.gov (United States)

    Large, Edward W; Almonte, Felix V

    2012-04-01

    Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition.
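
    The nonlinear resonance invoked above is commonly modeled with canonical (Hopf-type) oscillators. As a minimal, hedged illustration, the sketch below integrates a single forced Hopf oscillator; the theory described in the record concerns networks of such units tuned to different natural frequencies, and every parameter value here is an arbitrary assumption rather than part of the published model.

        import numpy as np

        def hopf_response(stimulus, fs, f_natural=220.0, alpha=-0.1, beta=-1.0):
            """Euler integration of dz/dt = z*(alpha + i*2*pi*f + beta*|z|^2) + x(t)."""
            dt = 1.0 / fs
            omega = 2 * np.pi * f_natural
            z = np.zeros(len(stimulus), dtype=complex)
            z[0] = 1e-3 + 0j                                 # small nonzero start
            for n in range(len(stimulus) - 1):
                dz = z[n] * (alpha + 1j * omega + beta * abs(z[n]) ** 2) + stimulus[n]
                z[n + 1] = z[n] + dt * dz
            return z                                         # np.abs(z) is the amplitude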

  12. Reading and Auditory-Visual Equivalences

    Science.gov (United States)

    Sidman, Murray

    1971-01-01

    A retarded boy, unable to read orally or with comprehension, was taught to match spoken to printed words and was then capable of reading comprehension (matching printed words to pictures) and oral reading (naming printed words aloud), demonstrating that certain learned auditory-visual equivalences are sufficient prerequisites for reading…

  13. Tuning up the developing auditory CNS.

    Science.gov (United States)

    Sanes, Dan H; Bao, Shaowen

    2009-04-01

    Although the auditory system has limited information processing resources, the acoustic environment is infinitely variable. To properly encode the natural environment, the developing central auditory system becomes somewhat specialized through experience-dependent adaptive mechanisms that operate during a sensitive time window. Recent studies have demonstrated that cellular and synaptic plasticity occurs throughout the central auditory pathway. Acoustic-rearing experiments can lead to an over-representation of the exposed sound frequency, and this is associated with specific changes in frequency discrimination. These forms of cellular plasticity are manifest in brain regions, such as midbrain and cortex, which interact through feed-forward and feedback pathways. Hearing loss leads to a profound re-weighting of excitatory and inhibitory synaptic gain throughout the auditory CNS, and this is associated with an over-excitability that is observed in vivo. Further behavioral and computational analyses may provide insights into how these cellular and systems plasticity effects underlie the development of cognitive functions such as speech perception.

  14. Auditory Integration Training: The Magical Mystery Cure.

    Science.gov (United States)

    Tharpe, Anne Marie

    1999-01-01

    This article notes the enthusiastic reception received by auditory integration training (AIT) for children with a wide variety of disorders including autism but raises concerns about this alternative treatment practice. It offers reasons for cautious evaluation of AIT prior to clinical implementation and summarizes current research findings. (DB)

  15. Development of Receiver Stimulator for Auditory Prosthesis

    Directory of Open Access Journals (Sweden)

    K. Raja Kumar

    2010-05-01

    Full Text Available The Auditory Prosthesis (AP) is an electronic device that can provide hearing sensations to people who are profoundly deaf by stimulating the auditory nerve via an array of electrodes with an electric current, allowing them to understand speech. The AP system consists of two hardware functional units: the Body Worn Speech Processor (BWSP) and the Receiver Stimulator. The prototype model of the Receiver Stimulator for Auditory Prosthesis (RSAP) consists of a speech data decoder, DAC, ADC, constant current generator, electrode selection logic, switch matrix and simulated electrode resistance array. The laboratory model of the speech processor is designed to implement the Continuous Interleaved Sampling (CIS) speech processing algorithm, which generates the information required for electrode stimulation based on the speech/audio data. The speech data decoder receives the encoded speech data via an inductive RF transcutaneous link from the speech processor. A twelve-channel auditory prosthesis with eight selectable electrodes for stimulation of the simulated electrode resistance array is used for testing. The RSAP is validated using test data generated by the laboratory prototype of the speech processor. The experimental results are obtained from specific speech/sound tests using a high-speed data acquisition system and found to be satisfactory.
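
    As background to the CIS strategy named above, the sketch below extracts band envelopes from a speech signal across logarithmically spaced channels, the information a CIS processor maps onto interleaved electrode pulses. It is an assumed illustration only: it omits rectification/low-pass choices, amplitude compression, and pulse interleaving, and the channel count and band edges are not taken from the article.

        import numpy as np
        from scipy.signal import butter, sosfilt, hilbert

        def cis_band_envelopes(speech, fs, n_channels=12, f_low=200.0, f_high=7000.0):
            """Band-pass the signal into log-spaced channels and return each envelope."""
            edges = np.geomspace(f_low, f_high, n_channels + 1)       # band edges
            envelopes = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                band = sosfilt(sos, speech)
                envelopes.append(np.abs(hilbert(band)))                # Hilbert envelope
            return np.array(envelopes)                                 # one row per channel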

  16. Auditory Processing Disorder: School Psychologist Beware?

    Science.gov (United States)

    Lovett, Benjamin J.

    2011-01-01

    An increasing number of students are being diagnosed with auditory processing disorder (APD), but the school psychology literature has largely neglected this controversial condition. This article reviews research on APD, revealing substantial concerns with assessment tools and diagnostic practices, as well as insufficient research regarding many…

  17. The Goldilocks Effect in Infant Auditory Attention

    Science.gov (United States)

    Kidd, Celeste; Piantadosi, Steven T.; Aslin, Richard N.

    2014-01-01

    Infants must learn about many cognitive domains (e.g., language, music) from auditory statistics, yet capacity limits on their cognitive resources restrict the quantity that they can encode. Previous research has established that infants can attend to only a subset of available acoustic input. Yet few previous studies have directly examined infant…

  18. Auditory Training with Frequent Communication Partners

    Science.gov (United States)

    Tye-Murray, Nancy; Spehar, Brent; Sommers, Mitchell; Barcroft, Joe

    2016-01-01

    Purpose: Individuals with hearing loss engage in auditory training to improve their speech recognition. They typically practice listening to utterances spoken by unfamiliar talkers but never to utterances spoken by their most frequent communication partner (FCP)--speech they most likely desire to recognize--under the assumption that familiarity…

  19. Auditory and visual scene analysis: an overview

    Science.gov (United States)

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044011

  20. Affective Priming with Auditory Speech Stimuli

    Science.gov (United States)

    Degner, Juliane

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In Experiment 2, stimulus onset asynchrony (SOA) was…

  1. Affective priming with auditory speech stimuli

    NARCIS (Netherlands)

    Degner, J.

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In

  2. Auditory pathology in cri-du-chat (5p-) syndrome: phenotypic evidence for auditory neuropathy.

    Science.gov (United States)

    Swanepoel, D

    2007-10-01

    5p-(cri-du-chat syndrome) is a well-defined clinical entity presenting with phenotypic and cytogenetic variability. Despite recognition that abnormalities in audition are common, limited reports on auditory functioning in affected individuals are available. The current study presents a case illustrating the auditory functioning in a 22-month-old patient diagnosed with 5p- syndrome, karyotype 46,XX,del(5)(p13). Auditory neuropathy was diagnosed based on abnormal auditory evoked potentials with neural components suggesting severe to profound hearing loss in the presence of cochlear microphonic responses and behavioral reactions to sound at mild to moderate hearing levels. The current case and a review of available reports indicate that auditory neuropathy or neural dys-synchrony may be another phenotype of the condition possibly related to abnormal expression of the protein beta-catenin mapped to 5p. Implications are for routine and diagnostic specific assessments of auditory functioning and for employment of non-verbal communication methods in early intervention.

  3. Interhemispheric auditory connectivity: structure and function related to auditory verbal hallucinations.

    Science.gov (United States)

    Steinmann, Saskia; Leicht, Gregor; Mulert, Christoph

    2014-01-01

    Auditory verbal hallucinations (AVH) are one of the most common and most distressing symptoms of schizophrenia. Despite fundamental research, the underlying neurocognitive and neurobiological mechanisms are still a matter of debate. Previous studies suggested that "hearing voices" is associated with a number of factors including local deficits in the left auditory cortex and a disturbed connectivity of frontal and temporoparietal language-related areas. In addition, it is hypothesized that the interhemispheric pathways connecting right and left auditory cortices might be involved in the pathogenesis of AVH. Findings based on Diffusion-Tensor-Imaging (DTI) measurements revealed a remarkable interindividual variability in size and shape of the interhemispheric auditory pathways. Interestingly, schizophrenia patients suffering from AVH exhibited higher fractional anisotropy (FA) in the interhemispheric fibers than non-hallucinating patients; higher FA values were thus associated with greater severity of AVH. Moreover, a dichotic listening (DL) task showed that the interindividual variability in the interhemispheric auditory pathways was reflected in the behavioral outcome: stronger pathways supported a better information transfer and consequently improved speech perception. This finding indicates a specific structure-function relationship, which seems to be interindividually variable. This review focuses on recent findings concerning the structure-function relationship of the interhemispheric pathways in controls and in hallucinating and non-hallucinating schizophrenia patients, and concludes that changes in the structural and functional connectivity of auditory areas are involved in the pathophysiology of AVH.

  4. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina

    2014-02-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians' subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model in which to study mechanisms of experience-dependent changes in human auditory function.

  5. Representation of Reward Feedback in Primate Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Michael eBrosch

    2011-02-01

    Full Text Available It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that is related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward size received and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than lately recognized.

  6. Representation of reward feedback in primate auditory cortex.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that is related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward-size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than lately recognized.

  7. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
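    As a minimal sketch of the central quantification step in a frequency-tagging analysis, the snippet below estimates spectral power at each stream's modulation (tagging) frequency from a single recorded time course. The sampling rate, tagging frequencies, and signal are illustrative placeholders, not values from the study.

    ```python
    import numpy as np

    def assr_power(signal, fs, tag_freqs, n_fft=None):
        """Estimate spectral power at each tagging (modulation) frequency.

        signal    : 1-D array, one MEG/EEG sensor or source time course
        fs        : sampling rate in Hz
        tag_freqs : modulation frequencies of the competing streams (Hz)
        """
        n_fft = n_fft or len(signal)
        spectrum = np.fft.rfft(signal * np.hanning(len(signal)), n_fft)
        freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
        power = np.abs(spectrum) ** 2
        # Pick the FFT bin closest to each tagged frequency.
        return {f: power[np.argmin(np.abs(freqs - f))] for f in tag_freqs}

    # Illustrative use: two streams tagged at 37 Hz and 43 Hz (hypothetical rates).
    fs = 1000.0
    t = np.arange(0, 10, 1.0 / fs)
    fake_signal = np.sin(2 * np.pi * 37 * t) + 0.3 * np.sin(2 * np.pi * 43 * t)
    print(assr_power(fake_signal, fs, [37.0, 43.0]))
    ```

    Comparing such power estimates between attend-left and attend-right conditions, per source region, is one simple way to express the attentional modulation the abstract describes.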

  8. Comparison of Electrophysiological Auditory Measures in Fishes.

    Science.gov (United States)

    Maruska, Karen P; Sisneros, Joseph A

    2016-01-01

    Sounds provide fishes with important information used to mediate behaviors such as predator avoidance, prey detection, and social communication. How we measure auditory capabilities in fishes, therefore, has crucial implications for interpreting how individual species use acoustic information in their natural habitat. Recent analyses have highlighted differences between behavioral and electrophysiologically determined hearing thresholds, but less is known about how physiological measures at different auditory processing levels compare within a single species. Here we provide one of the first comparisons of auditory threshold curves determined by different recording methods in a single fish species, the soniferous Hawaiian sergeant fish Abudefduf abdominalis, and review past studies on representative fish species with tuning curves determined by different methods. The Hawaiian sergeant is a colonial benthic-spawning damselfish (Pomacentridae) that produces low-frequency, low-intensity sounds associated with reproductive and agonistic behaviors. We compared saccular potentials, auditory evoked potentials (AEP), and single neuron recordings from acoustic nuclei of the hindbrain and midbrain torus semicircularis. We found that hearing thresholds were lowest at low frequencies (~75-300 Hz) for all methods, which matches the spectral components of sounds produced by this species. However, thresholds at best frequency determined via single cell recordings were ~15-25 dB lower than those measured by AEP and saccular potential techniques. While none of these physiological techniques gives us a true measure of the auditory "perceptual" abilities of a naturally behaving fish, this study highlights that different methodologies can reveal similar detectable range of frequencies for a given species, but absolute hearing sensitivity may vary considerably.

  9. BDNF in Lower Brain Parts Modifies Auditory Fiber Activity to Gain Fidelity but Increases the Risk for Generation of Central Noise After Injury.

    Science.gov (United States)

    Chumak, Tetyana; Rüttiger, Lukas; Lee, Sze Chim; Campanelli, Dario; Zuccotti, Annalisa; Singer, Wibke; Popelář, Jiří; Gutsche, Katja; Geisler, Hyun-Soon; Schraven, Sebastian Philipp; Jaumann, Mirko; Panford-Walsh, Rama; Hu, Jing; Schimmang, Thomas; Zimmermann, Ulrike; Syka, Josef; Knipper, Marlies

    2016-10-01

    For all sensory organs, the establishment of spatial and temporal cortical resolution is assumed to be initiated by the first sensory experience and a BDNF-dependent increase in intracortical inhibition. To address the potential of cortical BDNF for sound processing, we used mice with a conditional deletion of BDNF in which Cre expression was under the control of the Pax2 or TrkC promoter. BDNF deletion profiles between these mice differ in the organ of Corti (BDNF(Pax2)-KO) versus the auditory cortex and hippocampus (BDNF(TrkC)-KO). We demonstrate that BDNF(Pax2)-KO but not BDNF(TrkC)-KO mice exhibit reduced sound-evoked suprathreshold ABR waves at the level of the auditory nerve (wave I) and inferior colliculus (IC) (wave IV), indicating that BDNF in lower brain regions but not in the auditory cortex improves sound sensitivity during hearing onset. Extracellular recording of IC neurons of BDNF(Pax2) mutant mice revealed that the reduced sensitivity of auditory fibers in these mice went hand in hand with elevated thresholds, reduced dynamic range, prolonged latency, and increased inhibitory strength in IC neurons. Reduced parvalbumin-positive contacts were found in the ascending auditory circuit, including the auditory cortex and hippocampus of BDNF(Pax2)-KO, but not of BDNF(TrkC)-KO mice. Also, BDNF(Pax2)-WT but not BDNF(Pax2)-KO mice did lose basal inhibitory strength in IC neurons after acoustic trauma. These findings suggest that BDNF in the lower parts of the auditory system drives auditory fidelity along the entire ascending pathway up to the cortex by increasing inhibitory strength in behaviorally relevant frequency regions. Fidelity and inhibitory strength can be lost following auditory nerve injury, leading to diminished sensory outcome and increased central noise.

  10. Impairments of auditory scene analysis in Alzheimer's disease.

    Science.gov (United States)

    Goll, Johanna C; Kim, Lois G; Ridgway, Gerard R; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling H; Crutch, Sebastian J; Warren, Jason D

    2012-01-01

    Parsing of sound sources in the auditory environment or 'auditory scene analysis' is a computationally demanding cognitive operation that is likely to be vulnerable to the neurodegenerative process in Alzheimer's disease. However, little information is available concerning auditory scene analysis in Alzheimer's disease. Here we undertook a detailed neuropsychological and neuroanatomical characterization of auditory scene analysis in a cohort of 21 patients with clinically typical Alzheimer's disease versus age-matched healthy control subjects. We designed a novel auditory dual stream paradigm based on synthetic sound sequences to assess two key generic operations in auditory scene analysis (object segregation and grouping) in relation to simpler auditory perceptual, task and general neuropsychological factors. In order to assess neuroanatomical associations of performance on auditory scene analysis tasks, structural brain magnetic resonance imaging data from the patient cohort were analysed using voxel-based morphometry. Compared with healthy controls, patients with Alzheimer's disease had impairments of auditory scene analysis, and segregation and grouping operations were comparably affected. Auditory scene analysis impairments in Alzheimer's disease were not wholly attributable to simple auditory perceptual or task factors; however, the between-group difference relative to healthy controls was attenuated after accounting for non-verbal (visuospatial) working memory capacity. These findings demonstrate that clinically typical Alzheimer's disease is associated with a generic deficit of auditory scene analysis. Neuroanatomical associations of auditory scene analysis performance were identified in posterior cortical areas including the posterior superior temporal lobes and posterior cingulate. This work suggests a basis for understanding a class of clinical symptoms in Alzheimer's disease and for delineating cognitive mechanisms that mediate auditory scene analysis

  11. Fatigue Modeling via Mammalian Auditory System for Prediction of Noise Induced Hearing Loss

    Directory of Open Access Journals (Sweden)

    Pengfei Sun

    2015-01-01

    Full Text Available Noise induced hearing loss (NIHL) remains a severe health problem worldwide. Existing noise metrics and models for evaluating NIHL are limited in predicting gradually developing NIHL (GDHL) caused by high-level occupational noise. In this study, we proposed two auditory-fatigue-based models, the equal velocity level (EVL) and the complex velocity level (CVL), which combine high-cycle fatigue theory with a mammalian auditory model to predict GDHL. The mammalian auditory model is introduced by combining the transfer function of the external-middle ear with the triple-path nonlinear (TRNL) filter to obtain velocities of the basilar membrane (BM) in the cochlea. The high-cycle fatigue theory is based on the assumption that GDHL can be considered a process of long-cycle mechanical fatigue failure of the organ of Corti. Furthermore, a series of chinchilla experimental data are used to validate the effectiveness of the proposed fatigue models. The regression analysis results show that both proposed fatigue models have high correlations with four hearing loss indices, indicating that the proposed models can accurately predict hearing loss in chinchilla. Results suggest that the CVL model is more accurate than the EVL model in predicting the auditory risk of exposure to hazardous occupational noise.

  12. Fatigue Modeling via Mammalian Auditory System for Prediction of Noise Induced Hearing Loss.

    Science.gov (United States)

    Sun, Pengfei; Qin, Jun; Campbell, Kathleen

    2015-01-01

    Noise induced hearing loss (NIHL) remains a severe health problem worldwide. Existing noise metrics and models for evaluating NIHL are limited in predicting gradually developing NIHL (GDHL) caused by high-level occupational noise. In this study, we proposed two auditory-fatigue-based models, the equal velocity level (EVL) and the complex velocity level (CVL), which combine high-cycle fatigue theory with a mammalian auditory model to predict GDHL. The mammalian auditory model is introduced by combining the transfer function of the external-middle ear with the triple-path nonlinear (TRNL) filter to obtain velocities of the basilar membrane (BM) in the cochlea. The high-cycle fatigue theory is based on the assumption that GDHL can be considered a process of long-cycle mechanical fatigue failure of the organ of Corti. Furthermore, a series of chinchilla experimental data are used to validate the effectiveness of the proposed fatigue models. The regression analysis results show that both proposed fatigue models have high correlations with four hearing loss indices, indicating that the proposed models can accurately predict hearing loss in chinchilla. Results suggest that the CVL model is more accurate than the EVL model in predicting the auditory risk of exposure to hazardous occupational noise.
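    The high-cycle-fatigue framing lends itself to a Miner's-rule style damage accumulator driven by basilar-membrane velocity. The sketch below is only a schematic illustration of that idea with placeholder parameters (v_ref, exponent); it is not the authors' EVL/CVL formulation.

    ```python
    import numpy as np

    def cumulative_fatigue(bm_velocity, v_ref=1.0, exponent=4.0):
        """Toy high-cycle-fatigue accumulator driven by basilar-membrane velocity.

        Each half-cycle contributes damage proportional to
        (peak velocity / v_ref) ** exponent, summed Miner-style over the record.
        v_ref and exponent are illustrative placeholders, not fitted values.
        """
        signs = np.sign(bm_velocity)
        crossings = np.where(np.diff(signs) != 0)[0]   # half-cycle boundaries
        damage, start = 0.0, 0
        for idx in np.append(crossings, len(bm_velocity) - 1):
            if idx > start:
                peak = np.max(np.abs(bm_velocity[start:idx + 1]))
                damage += (peak / v_ref) ** exponent
            start = idx + 1
        return damage

    # Illustrative use with a placeholder 1 kHz velocity trace.
    fs = 44100.0
    t = np.arange(0, 1.0, 1 / fs)
    v = 0.02 * np.sin(2 * np.pi * 1000 * t)
    print(cumulative_fatigue(v, v_ref=0.01))
    ```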

  13. Analogues of simple and complex cells in rhesus monkey auditory cortex.

    Science.gov (United States)

    Tian, Biao; Kuśmierek, Paweł; Rauschecker, Josef P

    2013-05-01

    Receptive fields (RFs) of neurons in primary visual cortex have traditionally been subdivided into two major classes: "simple" and "complex" cells. Simple cells were originally defined by the existence of segregated subregions within their RF that respond to either the on- or offset of a light bar and by spatial summation within each of these regions, whereas complex cells had ON and OFF regions that were coextensive in space [Hubel DH, et al. (1962) J Physiol 160:106-154]. Although other definitions based on the linearity of response modulation have been proposed later [Movshon JA, et al. (1978) J Physiol 283:53-77; Skottun BC, et al. (1991) Vision Res 31(7-8):1079-1086], the segregation of ON and OFF subregions has remained an important criterion for the distinction between simple and complex cells. Here we report that response profiles of neurons in primary auditory cortex of monkeys show a similar distinction: one group of cells has segregated ON and OFF subregions in frequency space; and another group shows ON and OFF responses within largely overlapping response profiles. This observation is intriguing for two reasons: (i) spectrotemporal dissociation in the auditory domain provides a basic neural mechanism for the segregation of sounds, a fundamental prerequisite for auditory figure-ground discrimination; and (ii) the existence of similar types of RF organization in visual and auditory cortex would support the existence of a common canonical processing algorithm within cortical columns.

  14. Data Collection and Analysis Techniques for Evaluating the Perceptual Qualities of Auditory Stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Bonebright, T.L.; Caudell, T.P.; Goldsmith, T.E.; Miner, N.E.

    1998-11-17

    This paper describes a general methodological framework for evaluating the perceptual properties of auditory stimuli. The framework provides analysis techniques that can ensure the effective use of sound for a variety of applications including virtual reality and data sonification systems. Specifically, we discuss data collection techniques for the perceptual qualities of single auditory stimuli including identification tasks, context-based ratings, and attribute ratings. In addition, we present methods for comparing auditory stimuli, such as discrimination tasks, similarity ratings, and sorting tasks. Finally, we discuss statistical techniques that focus on the perceptual relations among stimuli, such as Multidimensional Scaling (MDS) and Pathfinder Analysis. These methods are presented as a starting point for an organized and systematic approach for non-experts in perceptual experimental methods, rather than as a complete manual for performing the statistical techniques and data collection methods. It is our hope that this paper will help foster further interdisciplinary collaboration among perceptual researchers, designers, engineers, and others in the development of effective auditory displays.
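    As a concrete illustration of the multidimensional scaling step mentioned above, the snippet below turns a hypothetical matrix of pairwise similarity ratings into a two-dimensional perceptual map using scikit-learn; the ratings are random placeholders, not data from the paper.

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    # Hypothetical mean similarity ratings (0-1) for 6 auditory stimuli.
    rng = np.random.default_rng(0)
    similarity = rng.uniform(0.1, 0.9, size=(6, 6))
    similarity = (similarity + similarity.T) / 2      # symmetrize ratings
    np.fill_diagonal(similarity, 1.0)                 # each stimulus is identical to itself

    dissimilarity = 1.0 - similarity                  # MDS operates on distances
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissimilarity)         # 2-D perceptual map
    print(coords)
    ```

    Plotting the resulting coordinates gives the kind of perceptual-relations map the framework describes; Pathfinder analysis would instead extract a network of the strongest pairwise links.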

  15. Cross-Modal Plasticity Results in Increased Inhibition in Primary Auditory Cortical Areas

    Directory of Open Access Journals (Sweden)

    Yu-Ting Mao

    2013-01-01

    Full Text Available Loss of sensory input from peripheral organ damage, sensory deprivation, or brain damage can result in adaptive or maladaptive changes in sensory cortex. In previous research, we found that auditory cortical tuning and tonotopy were impaired by cross-modal invasion of visual inputs. Sensory deprivation is typically associated with a loss of inhibition. To determine whether inhibitory plasticity is responsible for this process, we measured pre- and postsynaptic changes in inhibitory connectivity in ferret auditory cortex (AC) after cross-modal plasticity. We found that blocking GABA-A receptors increased responsiveness and broadened sound frequency tuning in the cross-modal group more than in the normal group. Furthermore, expression levels of glutamic acid decarboxylase (GAD) protein were increased in the cross-modal group. We also found that blocking inhibition unmasked visual responses of some auditory neurons in cross-modal AC. Overall, our data suggest a role for increased inhibition in reducing the effectiveness of the abnormal visual inputs and argue that decreased inhibition is not responsible for compromised auditory cortical function after cross-modal invasion. Our findings imply that inhibitory plasticity may play a role in reorganizing sensory cortex after cross-modal invasion, suggesting clinical strategies for recovery after brain injury or sensory deprivation.

  16. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  17. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  18. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  19. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus...... level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters were also...... in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...

  20. Deafness in cochlear and auditory nerve disorders.

    Science.gov (United States)

    Hopkins, Kathryn

    2015-01-01

    Sensorineural hearing loss is the most common type of hearing impairment worldwide. It arises as a consequence of damage to the cochlea or auditory nerve, and several structures are often affected simultaneously. There are many causes, including genetic mutations affecting the structures of the inner ear, and environmental insults such as noise, ototoxic substances, and hypoxia. The prevalence increases dramatically with age. Clinical diagnosis is most commonly accomplished by measuring detection thresholds and comparing these to normative values to determine the degree of hearing loss. In addition to causing insensitivity to weak sounds, sensorineural hearing loss has a number of adverse perceptual consequences, including loudness recruitment, poor perception of pitch and auditory space, and difficulty understanding speech, particularly in the presence of background noise. The condition is usually incurable; treatment focuses on restoring the audibility of sounds made inaudible by hearing loss using either hearing aids or cochlear implants.

  1. Modeling auditory evoked potentials to complex stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch

    The auditory evoked potential (AEP) is an electrical signal that can be recorded from electrodes attached to the scalp of a human subject when a sound is presented. The signal is considered to reflect neural activity in response to the acoustic stimulation and is a well-established clinical...... clinically and in research towards using realistic and complex stimuli, such as speech, to electrophysiologically assess human hearing. However, to interpret AEP generation in response to complex sounds, the potential patterns in response to simple stimuli need to be understood. Therefore, the model was used...... to simulate auditory brainstem responses (ABRs) evoked by classic stimuli like clicks, tone bursts and chirps. The ABRs to these simple stimuli were compared to literature data and the model was shown to predict the frequency dependence of tone-burst ABR wave-V latency and the level-dependence of ABR wave...

  2. Cognitive mechanisms associated with auditory sensory gating.

    Science.gov (United States)

    Jones, L A; Hills, P J; Dick, K M; Jones, S P; Bright, P

    2016-02-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification.

  3. Lesions in the external auditory canal

    Directory of Open Access Journals (Sweden)

    Priyank S Chatra

    2011-01-01

    Full Text Available The external auditory canal (EAC) is an S-shaped osseo-cartilaginous structure that extends from the auricle to the tympanic membrane. Congenital, inflammatory, neoplastic, and traumatic lesions can affect the EAC. High-resolution CT is well suited for the evaluation of the temporal bone, which has a complex anatomy with multiple small structures. In this study, we describe the various lesions affecting the EAC.

  4. Midbrain auditory selectivity to natural sounds.

    Science.gov (United States)

    Wohlgemuth, Melville J; Moss, Cynthia F

    2016-03-01

    This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of acoustic signals used by the bat for spatial orientation (sonar vocalizations), offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors.
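    The GLM analysis described above can be sketched, in spirit, as fitting a Poisson regression from stimulus features to spike counts and then testing how well it generalizes to the other stimulus class. The feature matrix and counts below are synthetic placeholders, and scikit-learn's PoissonRegressor merely stands in for whatever GLM implementation the authors used.

    ```python
    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    # Hypothetical design: each row describes a short stimulus segment by a few
    # spectrotemporal features; y is the spike count evoked by that segment.
    rng = np.random.default_rng(1)
    X_artificial = rng.normal(size=(200, 8))           # e.g., binned spectrogram energy
    y_artificial = rng.poisson(lam=np.exp(0.3 * X_artificial[:, 0] + 0.1))

    glm = PoissonRegressor(alpha=1e-3).fit(X_artificial, y_artificial)

    # Cross-prediction step: apply the model fitted on artificial-FM responses
    # to features of natural sonar sequences (placeholder data here).
    X_natural = rng.normal(size=(50, 8))
    predicted_rate = glm.predict(X_natural)
    print(predicted_rate[:5])
    ```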

  5. Response recovery in the locust auditory pathway.

    Science.gov (United States)

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single-click and click-pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first- and second-order interneurons. The response to the second click was expressed relative to the single-click response. This allowed the uncovering of the basic temporal resolution in these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominently, a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by another click a few milliseconds earlier. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period.
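    In a paired-click recovery paradigm like this, the basic measure is simply the response to the second click expressed relative to the single-click response, tracked across inter-click intervals. The numbers below are invented for illustration only.

    ```python
    import numpy as np

    # Hypothetical spike counts: response to the second click of a pair at several
    # inter-click intervals, each expressed relative to the single-click response.
    single_click = 12.0                        # spikes per presentation (placeholder)
    intervals_ms = np.array([2, 5, 10, 20, 50, 100])
    second_click = np.array([1.0, 3.0, 6.0, 9.0, 11.0, 12.0])

    recovery = second_click / single_click     # 1.0 means full recovery
    for dt, r in zip(intervals_ms, recovery):
        print(f"{dt:4d} ms: {100 * r:5.1f} % of single-click response")
    ```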

  6. Brainstem auditory evoked response: application in neurology

    Directory of Open Access Journals (Sweden)

    Carlos A. M. Guerreiro

    1982-03-01

    Full Text Available The technique that we use for eliciting brainstem auditory evoked responses (BAERs) is described. BAERs are a non-invasive and reliable clinical test when carefully performed. This test is indicated in the evaluation of disorders which may potentially involve the brainstem, such as coma, multiple sclerosis, posterior fossa tumors and others. Unsuspected lesions with normal radiologic studies (including CT scan) can be revealed by the BAER.

  7. Cognitive mechanisms associated with auditory sensory gating

    OpenAIRE

    Jones, L. A.; Hills, P.J.; Dick, K.M.; Jones, S. P.; Bright, P

    2015-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants addit...

  8. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

    Full Text Available OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied in 13 children (7 boys), from 7 to 16 years, with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: The attention performance showed significant differences in comparison to the control group in both kinds of tests. In the non-verbal test, identification at the ear opposite the lesion was diminished in the free recall stage and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages was found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demand of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  9. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity across three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  10. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.
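    Dynamic causal modelling itself is an SPM-based analysis, but the model-selection logic reduces to comparing approximate log evidences across candidate connectivity models. Below is a minimal sketch of that comparison with placeholder log-evidence values, equal model priors, and invented model names; it is not the authors' implementation.

    ```python
    import numpy as np

    # Hypothetical approximate log evidences (e.g., negative free energy) for a
    # small family of connectivity models; values are placeholders, not results.
    log_evidence = {
        "serial HG->PT->STS": -1510.2,
        "parallel HG->PT and HG->STS": -1523.7,
        "fully connected": -1519.4,
    }

    names = list(log_evidence)
    lev = np.array([log_evidence[n] for n in names])
    lev -= lev.max()                                  # numerical stability
    posterior = np.exp(lev) / np.exp(lev).sum()       # assumes equal model priors
    for name, p in zip(names, posterior):
        print(f"{name:30s} p(model | data) = {p:.3f}")
    ```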

  11. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.
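    Contralateral suppression in a paradigm like this is typically quantified as the level difference between the baseline TEOAE and the TEOAE recorded with contralateral noise. A minimal sketch with made-up amplitudes follows; the values and the 20*log10 convention are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    def suppression_db(baseline_amp, contra_amp):
        """Contralateral suppression in dB: positive values mean the TEOAE was
        smaller with contralateral noise than at baseline."""
        return 20 * np.log10(baseline_amp / contra_amp)

    # Illustrative TEOAE amplitudes (arbitrary linear units) at 1 kHz.
    print(suppression_db(baseline_amp=0.50, contra_amp=0.42))   # ~1.5 dB suppression
    ```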

  12. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  13. Auditory evoked potentials in peripheral vestibular disorder individuals

    Directory of Open Access Journals (Sweden)

    Matas, Carla Gentile

    2011-07-01

    Full Text Available Introduction: The auditory and vestibular systems are located in the same peripheral receptor; however, they enter the CNS and follow different pathways, creating a number of connections and reaching a wide area of the encephalon. Despite following different pathways, both systems can be impaired by the same disorders. Tests such as Auditory Evoked Potentials can help establish a diagnosis when vestibular alterations are present. Objective: To describe Auditory Evoked Potential results in individuals with Peripheral Vestibular Disorders complaining of dizziness or vertigo and in normal individuals with the same complaint. Methods: Short-, middle- and long-latency Auditory Evoked Potentials were recorded in a transversal prospective study. Conclusion: Individuals complaining of dizziness or vertigo can show changes in the BAEP (Brainstem Auditory Evoked Potential), MLAEP (Medium Latency Auditory Evoked Potential) and P300.

  14. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well.

  15. Moving on time: brain network for auditory-motor synchronization is modulated by rhythm complexity and musical training.

    Science.gov (United States)

    Chen, Joyce L; Penhune, Virginia B; Zatorre, Robert J

    2008-02-01

    Much is known about the motor system and its role in simple movement execution. However, little is understood about the neural systems underlying auditory-motor integration in the context of musical rhythm, or the enhanced ability of musicians to execute precisely timed sequences. Using functional magnetic resonance imaging, we investigated how performance and neural activity were modulated as musicians and nonmusicians tapped in synchrony with progressively more complex and less metrically structured auditory rhythms. A functionally connected network was implicated in extracting higher-order features of a rhythm's temporal structure, with the dorsal premotor cortex mediating these auditory-motor interactions. In contrast to past studies, musicians recruited the prefrontal cortex to a greater degree than nonmusicians, whereas secondary motor regions were recruited to the same extent. We argue that the superior ability of musicians to deconstruct and organize a rhythm's temporal structure relates to the greater involvement of the prefrontal cortex mediating working memory.

  16. Use of a highly transparent zebrafish mutant for investigations in the development of the vertebrate auditory system (Conference Presentation)

    Science.gov (United States)

    Wisniowiecki, Anna M.; Mattison, Scott P.; Kim, Sangmin; Riley, Bruce; Applegate, Brian E.

    2016-03-01

    The zebrafish, an auditory specialist among fish, offers auditory structures analogous to those of other vertebrates and is a model for hearing and deafness in vertebrates, including humans. Nevertheless, many questions remain on the basic mechanics of the auditory pathway. Phase-sensitive Optical Coherence Tomography has been proven a valuable technique for functional vibrometric measurements in the murine ear. Such measurements are key to building a complete understanding of auditory mechanics. The application of such techniques in the zebrafish is impeded by the high level of pigmentation, which develops superior to the transverse plane and envelops the auditory system superficially. A zebrafish double mutant for nacre and roy (mitfa-/-; roya-/- [casper]), which exhibits defects in neural-crest-derived melanocytes and iridophores at all stages of development, is pursued to improve image quality and sensitivity for functional imaging. So far our investigations with the casper mutants have enabled identification of the specialized hearing organs, the fluid-filled canal connecting the ears, and sub-structures of the semicircular canals. In our previous work with wild-type zebrafish, we were only able to identify and observe stimulated vibration of the largest structures, specifically the anterior swim bladder and tripus ossicle, even among small larval specimens with fully developed inner ears. In conclusion, this genetic mutant will enable the study of the dynamics of the zebrafish ear from the early larval stages all the way into adulthood.

  17. Binaural technology for e.g. rendering auditory virtual environments

    DEFF Research Database (Denmark)

    Hammershøi, Dorte

    2008-01-01

    , helped mediate the understanding that if the transfer functions could be mastered, then important dimensions of the auditory percept could also be controlled. He understood early the potential of using the HRTFs and numerical sound transmission analysis programs for rendering auditory virtual...... environments. Jens Blauert participated in many European cooperation projects exploring this field (and others), among others the SCATIS project addressing the auditory-tactile dimensions in the absence of visual information.

  18. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  19. [Auditory guidance systems for the visually impaired people].

    Science.gov (United States)

    He, Jing; Nie, Min; Luo, Lan; Tong, Shanbao; Niu, Jinhai; Zhu, Yisheng

    2010-04-01

    Visually impaired people face many inconveniences because of the loss of vision. Therefore, scientists are trying to design various guidance systems to improve the lives of the blind. Based on sensory substitution, auditory guidance has become an interesting topic in the field of biomedical engineering. In this paper, we review the state of the art in auditory guidance systems. Although there have been many technical challenges, the auditory guidance system could be a useful alternative for visually impaired people.

  20. Auditory cortex basal activity modulates cochlear responses in chinchillas.

    Directory of Open Access Journals (Sweden)

    Alex León

    Full Text Available BACKGROUND: The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and to the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. METHODOLOGY/PRINCIPAL FINDINGS: Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas, before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in the amplitude of CM in both types of experiments, the most common effect being a CM decrease, found in fifteen animals. Concomitantly with the CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes were completely recovered after ninety minutes in deactivation experiments, only partial recovery was observed in the magnitudes of cochlear responses. CONCLUSIONS/SIGNIFICANCE: These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. The diversity of the

  1. Using Facebook to Reach People Who Experience Auditory Hallucinations

    OpenAIRE

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging...

  2. Inhalation of hydrogen gas attenuates ouabain-induced auditory neuropathy in gerbils

    Institute of Scientific and Technical Information of China (English)

    Juan QU; Yun-na GAN; Ke-liang XIE; Wen-bo LIU; Ya-fei WANG; Ren-yi HEI; Wen-juan MI; Jian-hua QIU

    2012-01-01

    Aim: Auditory neuropathy (AN) is a hearing disorder characterized by abnormal auditory nerve function with preservation of normal cochlear hair cells. This study was designed to investigate whether treatment with molecular hydrogen (H2), which can remedy damage in various organs by reducing oxidative stress, inflammation and apoptosis, is beneficial to ouabain-induced AN in gerbils. Methods: The AN model was made by local application of ouabain (1 mmol/L, 20 mL) to the round window membrane in male Mongolian gerbils. H2 treatment was given twice by exposing the animals to H2 (1%, 2%, and 4%) for 60 min at 1 h and 6 h after ouabain application. Before and 7 d after ouabain application, the hearing status of the animals was evaluated using the auditory brainstem response (ABR) approach, and hair cell function was evaluated with distortion product otoacoustic emissions (DPOAE). Seven days after ouabain application, the changes in the cochleae, especially the spiral ganglion neurons (SGNs), were studied morphologically. TUNEL staining and immunofluorescent staining for activated caspase-3 were used to assess the apoptosis of SGNs. Results: Treatment with H2 (2% and 4%) markedly attenuated the click- and tone burst-evoked ABR threshold shift at 4, 8, and 16 kHz in ouabain-exposed animals. Neither local ouabain application nor H2 treatment changed the amplitude of DPOAE at 4, 8, and 16 kHz. The morphological study showed that treatment with H2 (2%) significantly alleviated SGN damage and attenuated the loss of SGN density for each turn of the cochlea in ouabain-exposed animals. Furthermore, ouabain caused significantly higher numbers of apoptotic SGNs in the cochlea, which was significantly attenuated by the H2 treatment. However, ouabain did not change the morphology of cochlear hair cells. Conclusion: The results demonstrate that H2 treatment is beneficial to ouabain-induced AN via reducing apoptosis. Thus, H2 might be a potential agent for treating hearing impairment in AN patients.

  3. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder.

    Science.gov (United States)

    Schochat, E; Musiek, F E; Alonso, R; Ogata, J

    2010-08-01

    The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 microV (mean), 0.39 (SD--standard deviation) for the (C)APD group and 1.18 microV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 microV (mean), 0.31 (SD) for the (C)APD group and 1.00 microV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 microV (mean), 0.82 (SD)] and C3-A2 [1.24 microV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.

  4. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder

    Directory of Open Access Journals (Sweden)

    E. Schochat

    2010-08-01

    Full Text Available The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 µV (mean), 0.39 (SD - standard deviation) for the (C)APD group and 1.18 µV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 µV (mean), 0.31 (SD) for the (C)APD group and 1.00 µV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 µV (mean), 0.82 (SD)] and C3-A2 [1.24 µV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.

  5. Sound objects – Auditory objects – Musical objects

    DEFF Research Database (Denmark)

    Hjortkjær, Jens

    2015-01-01

    The auditory system transforms patterns of sound energy into perceptual objects but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects......’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent...

  6. Sound objects – Auditory objects – Musical objects

    DEFF Research Database (Denmark)

    Hjortkjær, Jens

    2016-01-01

    The auditory system transforms patterns of sound energy into perceptual objects but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects......’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent...

  7. Extrinsic sound stimulations and development of periphery auditory synapses

    Institute of Scientific and Technical Information of China (English)

    Kun Hou; Shiming Yang; Ke Liu

    2015-01-01

    The development of auditory synapses is a key process in the maturation of hearing function. However, it is still debated whether the development of auditory synapses is dominated by acquired sound stimulation. In this review, we summarize relevant publications from recent decades to address this issue. Most reported data suggest that extrinsic sound stimulation affects, but does not govern, the development of peripheral auditory synapses. Overall, peripheral auditory synapses develop and mature according to intrinsic mechanisms that build up the synaptic connections between sensory neurons and/or interneurons.

  8. Evaluation of peripheral compression and auditory nerve fiber intensity coding using auditory steady-state responses

    DEFF Research Database (Denmark)

    Encina Llamas, Gerard; M. Harte, James; Epp, Bastian

    2015-01-01

    The compressive nonlinearity of the auditory system is assumed to be an epiphenomenon of a healthy cochlea and, particularly, of outer-hair cell function. Another ability of the healthy auditory system is to enable communication in acoustical environments with high-level background noises....... Evaluation of these properties provides information about the health state of the system. It has been shown that a loss of outer hair cells leads to a reduction in peripheral compression. It has also recently been shown in animal studies that noise over-exposure, producing temporary threshold shifts, can...

  9. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
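
    As a rough illustration of how compensation magnitudes like those reported above can be quantified, the Python sketch below expresses the produced first-formant change opposing the feedback shift as a fraction of the imposed perturbation. All numbers are invented examples, not data from the study.

    baseline_f1 = 600.0          # Hz, unperturbed production of the vowel (assumed)
    perturbation = +90.0         # Hz, upward shift applied to the auditory feedback (assumed)

    produced_f1_control = 570.0  # Hz, hypothetical control speaker opposing the shift
    produced_f1_pws = 586.0      # Hz, hypothetical speaker who stutters (weaker response)

    def compensation(produced_f1):
        # Compensation = produced change opposite to the perturbation,
        # expressed as a fraction of the perturbation magnitude.
        return (baseline_f1 - produced_f1) / perturbation

    for label, f1 in [("control", produced_f1_control), ("PWS", produced_f1_pws)]:
        print(f"{label}: {100 * compensation(f1):.0f}% compensation")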

  10. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the fact that there were more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A (p visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices.

  11. Development of auditory localization accuracy and auditory spatial discrimination in children and adolescents.

    Science.gov (United States)

    Kühnle, S; Ludwig, A A; Meuret, S; Küttner, C; Witte, C; Scholbach, J; Fuchs, M; Rübsamen, R

    2013-01-01

    The present study investigated the development of two parameters of spatial acoustic perception in children and adolescents with normal hearing, aged 6-18 years. Auditory localization accuracy was quantified by means of a sound source identification task and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests, thereby separately addressing auditory processing based on interaural time and intensity differences. Setup consisted of 47 loudspeakers mounted in the frontal azimuthal hemifield, ranging from 90° left to 90° right (-90°, +90°). Target signals were presented from 8 loudspeaker positions in the left and right hemifields (±4°, ±30°, ±60° and ±90°). Localization accuracy and spatial discrimination acuity showed different developmental courses. Localization accuracy remained stable from the age of 6 onwards. In contrast, MAA thresholds and interindividual variability of spatial discrimination decreased significantly with increasing age. Across all age groups, localization was most accurate and MAA thresholds were lower for frontal than for lateral sound sources, and for low-frequency compared to high-frequency noise bursts. The study also shows better performance in spatial hearing based on interaural time differences rather than on intensity differences throughout development. These findings confirm that specific aspects of central auditory processing show continuous development during childhood up to adolescence.

  12. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in EEG data using MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making.

  13. Simple ears-flexible behavior: Information processing in the moth auditory pathway

    Institute of Scientific and Technical Information of China (English)

    Gerit PFUHL; Blanka KALINOVA; Irena VALTEROVA; Bente G.BERG

    2015-01-01

    Lepidoptera evolved tympanic ears in response to echolocating bats. Comparative studies have shown that moth ears evolved many times independently from chordotonal organs. With only 1 to 4 receptor cells, they are one of the simplest hearing organs. The small number of receptors does not imply simplicity, either in behavior or in the neural circuit. Behaviorally, the response to ultrasound is far from being a simple reflex. Moths' escape behavior is modulated by a variety of cues, especially pheromones, which can alter the auditory response. Neurally, the receptor cell(s) diverges onto many interneurons, enabling parallel processing and feature extraction. Ascending interneurons and sound-sensitive brain neurons innervate a neuropil in the ventrolateral protocerebrum. Further, recent electrophysiological data provide the first glimpses into how the acoustic response is modulated, as well as how ultrasound influences the other senses. So far, the auditory pathway has been studied in noctuids. The findings agree well with common computational principles found in other insects. However, moth ears also show unique mechanical and neural adaptations. Here, we first describe the variety of moths' auditory behavior, especially the co-option of ultrasonic signals for intraspecific communication. Second, we describe the current knowledge of the neural pathway gained from noctuid moths. Finally, we argue that Galleriinae, which show negative and positive phonotaxis, are an interesting model species for future electrophysiological studies of the auditory pathway and multimodal sensory integration, and so are ideally suited for the study of the evolution of behavioral mechanisms given a few receptors [Current Zoology 61 (2): 292-302, 2015].

  14. Auditory perception and syntactic cognition: brain activity-based decoding within and across subjects.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Kalberlah, Christian; Haynes, John-Dylan; Friederici, Angela D

    2012-05-01

    The present magnetoencephalography study investigated whether the brain states of early syntactic and auditory-perceptual processes can be decoded from single-trial recordings with a multivariate pattern classification approach. In particular, it was investigated whether the early neural activation patterns in response to rule violations in basic auditory perception and in high cognitive processes (syntax) reflect a functional organization that largely generalizes across individuals or is subject-specific. On this account, subjects were auditorily presented with correct sentences, syntactically incorrect sentences, correct sentences including an interaural time difference change, and sentences containing both violations. For the analysis, brain state decoding was carried out within and across subjects with three pairwise classifications. Neural patterns elicited by each of the violation sentences were separately classified with the patterns elicited by the correct sentences. The results revealed the highest decoding accuracies over temporal cortex areas for all three classification types. Importantly, both the magnitude and the spatial distribution of decoding accuracies for the early neural patterns were very similar for within-subject and across-subject decoding. At the same time, across-subject decoding suggested a hemispheric bias, with the most consistent patterns in the left hemisphere. Thus, the present data show that not only auditory-perceptual processing brain states but also cognitive brain states of syntactic rule processing can be decoded from single-trial brain activations. Moreover, the findings indicate that the neural patterns in response to syntactic cognition and auditory perception reflect a functional organization that is highly consistent across individuals.

  15. Type-2 diabetes mellitus and auditory brainstem response

    Directory of Open Access Journals (Sweden)

    Sheelu S Siddiqi

    2013-01-01

    Full Text Available Objective: Diabetes mellitus (DM) causes pathophysiological changes in multiple organ systems. Among evoked potential techniques, the brainstem auditory response offers a simple procedure to detect both acoustic nerve and central nervous system pathway damage. The objectives were to find evidence of central neuropathy in diabetes patients by analyzing the brainstem electric response obtained with auditory evoked potentials, to quantify the characteristics of the auditory brainstem response in long-standing diabetes, and to study the utility of auditory evoked potentials in detecting the type, site, and nature of lesions. Design: A total of 25 Type-2 DM patients [13 (52%) males and 12 (48%) females] with duration of diabetes over 5 years and aged over 30 years were studied. Brainstem evoked response audiometry (BERA) was performed with the universal smart box (manual version 2.0) at 70, 80, and 90 dB. The wave latency pattern and interpeak latencies were estimated. This was compared with 25 healthy controls (17 [68%] males and 8 [32%] females). Result: In Type-2 DM, the BERA study revealed that wave III, representing the superior olivary complex, had a latency at 80 dB of 3.99 ± 0.24 ms (P < 0.001) and at 90 dB of 3.92 ± 0.28 ms (P < 0.001) compared with control. The latency of wave III was delayed by 0.39, 0.42, and 0.42 ms at 70, 80, and 90 dB, respectively. The absolute latency of wave V, representing the inferior colliculus, was 6.05 ± 0.27 ms at 70 dB (P < 0.001), 5.98 ± 0.27 ms at 80 dB (P < 0.001), and 6.02 ± 0.30 ms at 90 dB (P < 0.002) compared with control. The latency of wave V was delayed by 0.48, 0.47, and 0.50 ms at 70, 80, and 90 dB, respectively. Interpeak latencies I-III were 2.33 ± 0.22 ms at 70 dB (P < 0.001), 2.39 ± 0.26 ms at 80 dB (P < 0.001), and 2.47 ± 0.25 ms at 90 dB (P < 0.001) compared with control. Interpeak latencies I-V were 4.45 ± 0.29 ms at 70 dB (P < 0.001), 4.39 ± 0.34 ms at 80 dB (P < 0.001), and 4.57 ± 0.31 ms at 90 dB (P < 0.001) compared with control. Out of 25 Type-2 DM, 13 (52

  16. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Full Text Available Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appears to contribute to selective responses.

  17. Multiprofessional committee on auditory health: COMUSA.

    Science.gov (United States)

    Lewis, Doris Ruthy; Marone, Silvio Antonio Monteiro; Mendes, Beatriz C A; Cruz, Oswaldo Laercio Mendonça; Nóbrega, Manoel de

    2010-01-01

    Created in 2007, COMUSA is a multiprofessional committee comprising speech therapy, otology, otorhinolaryngology and pediatrics, with the aim of debating and endorsing auditory health actions for neonates, infants, preschool and school-age children, adolescents, adults and elderly persons. COMUSA includes representatives of the Brazilian Audiology Academy (Academia Brasileira de Audiologia or ABA), the Brazilian Otorhinolaryngology and Cervicofacial Surgery Association (Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico Facial or ABORL), the Brazilian Phonoaudiology Society (Sociedade Brasileira de Fonoaudiologia or SBFa), the Brazilian Otology Society (Sociedade Brasileira de Otologia or SBO), and the Brazilian Pediatrics Society (Sociedade Brasileira de Pediatria or SBP).

  18. Cancer of the external auditory canal

    DEFF Research Database (Denmark)

    Nyrop, Mette; Grøntved, Aksel

    2002-01-01

    OBJECTIVE: To evaluate the outcome of surgery for cancer of the external auditory canal and relate this to the Pittsburgh staging system used both on squamous cell carcinoma and non-squamous cell carcinoma. DESIGN: Retrospective case series of all patients who had surgery between 1979 and 2000....... PATIENTS: Ten women and 10 men with previously untreated primary cancer. Median age at diagnosis was 67 years (range, 31-87 years). Survival data included 18 patients with at least 2 years of follow-up or recurrence. INTERVENTION: Local canal resection or partial temporal bone resection. MAIN OUTCOME...

  19. CAVERNOUS HEMANGIOMA OF THE INTERNAL AUDITORY CANAL

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Hekmatara

    1993-06-01

    Full Text Available Cavernous hemangioma is a rare benign tumor of the internal auditory canal (IAC), of which fourteen cases have been reported so far. Tinnitus and progressive sensorineural hearing loss (SNHL) are the chief complaints of the patients. Audiological evaluation and radiological studies, including CT scan and magnetic resonance imaging (MRI), are helpful in diagnosis. The treatment of choice is surgery via an elective transmastoid translabyrinthine approach; if the tumor is very large, the retrosigmoid approach is preferred.

  20. Comparison of auditory hallucinations across different disorders and syndromes

    NARCIS (Netherlands)

    Sommer, Iris E. C.; Koops, Sanne; Blom, Jan Dirk

    2012-01-01

    Auditory hallucinations can be experienced in the context of many different disorders and syndromes. The differential diagnosis basically rests on the presence or absence of accompanying symptoms. In terms of clinical relevance, the most important distinction to be made is between auditory hallucina

  1. Development of a central auditory test battery for adults.

    NARCIS (Netherlands)

    Neijenhuis, C.A.M.; Stollman, M.H.P.; Snik, A.F.M.; Broek, P. van den

    2001-01-01

    There is little standardized test material in Dutch to document central auditory processing disorders (CAPDs). Therefore, a new central auditory test battery was composed and standardized for use with adult populations and older children. The test battery comprised seven tests (words in noise, filte

  2. Deactivation of the Parahippocampal Gyrus Preceding Auditory Hallucinations in Schizophrenia

    NARCIS (Netherlands)

    Diederen, Kelly M. J.; Neggers, Sebastiaan F. W.; Daalman, Kirstin; Blom, Jan Dirk; Goekoop, Rutger; Kahn, Rene S.; Sommer, Iris E. C.

    2010-01-01

    Objective: Activation in a network of language-related regions has been reported during auditory verbal hallucinations. It remains unclear, however, how this activation is triggered. Identifying brain regions that show significant signal changes preceding auditory hallucinations might reveal the ori

  3. Impact of Educational Level on Performance on Auditory Processing Tests.

    Science.gov (United States)

    Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane

    2016-01-01

    Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.

  4. Auditory Processing Theories of Language Disorders: Past, Present, and Future

    Science.gov (United States)

    Miller, Carol A.

    2011-01-01

    Purpose: The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. Method: A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory…

  5. Source reliability in auditory health persuasion : Its antecedents and consequences

    NARCIS (Netherlands)

    Elbert, Sarah P.; Dijkstra, Arie

    2015-01-01

    Persuasive health messages can be presented through an auditory channel, thereby enhancing the salience of the source, making it fundamentally different from written or pictorial information. We focused on the determinants of perceived source reliability in auditory health persuasion by investigatin

  6. Preparation and Culture of Chicken Auditory Brainstem Slices

    OpenAIRE

    Sanchez, Jason T.; Seidl, Armin H.; Rubel, Edwin W; Barria, Andres

    2011-01-01

    The chicken auditory brainstem is a well-established model system that has been widely used to study the anatomy and physiology of auditory processing at discrete periods of development 1-4 as well as mechanisms for temporal coding in the central nervous system 5-7.

  7. Strategy choice mediates the link between auditory processing and spelling.

    Science.gov (United States)

    Kwong, Tru E; Brachman, Kyle J

    2014-01-01

    Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities.

  8. Entrainment to an auditory signal: Is attention involved?

    NARCIS (Netherlands)

    Kunert, R.; Jongman, S.R.

    2017-01-01

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhy

  9. Cortical Auditory Evoked Potentials in Unsuccessful Cochlear Implant Users

    Science.gov (United States)

    Munivrana, Boska; Mildner, Vesna

    2013-01-01

    In some cochlear implant users, success is not achieved in spite of optimal clinical factors (including age at implantation, duration of rehabilitation and post-implant hearing level), which may be attributed to disorders at higher levels of the auditory pathway. We used cortical auditory evoked potentials to investigate the ability to perceive…

  10. Auditory signal design for automatic number plate recognition system

    NARCIS (Netherlands)

    Heydra, C.G.; Jansen, R.J.; Van Egmond, R.

    2014-01-01

    This paper focuses on the design of an auditory signal for the Automatic Number Plate Recognition system of Dutch national police. The auditory signal is designed to alert police officers of suspicious cars in their proximity, communicating priority level and location of the suspicious car and takin

  11. Modeling auditory evoked brainstem responses to transient stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch; Dau, Torsten; Harte, James;

    2012-01-01

    A quantitative model is presented that describes the formation of auditory brainstem responses (ABR) to tone pulses, clicks and rising chirps as a function of stimulation level. The model computes the convolution of the instantaneous discharge rates using the “humanized” nonlinear auditory-nerve ...
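
    To make the convolution idea concrete, the minimal Python sketch below convolves an instantaneous discharge-rate function with a unitary-response waveform to obtain a modelled potential. The half-wave-rectified tone used as a rate proxy and the damped-sinusoid unitary response are illustrative assumptions, not the published "humanized" auditory-nerve front end.

    import numpy as np

    fs = 100_000                      # sampling rate (Hz), assumed
    t = np.arange(0, 0.01, 1 / fs)    # 10-ms analysis window

    # Stimulus: 1-kHz tone pulse (5 ms), and a crude discharge-rate proxy:
    # half-wave rectified stimulus on top of a spontaneous rate.
    stimulus = np.sin(2 * np.pi * 1000 * t) * (t < 0.005)
    rate = 50.0 + 400.0 * np.maximum(stimulus, 0.0)   # spikes/s (assumed values)

    # Unitary response: damped sinusoid standing in for the contribution of a
    # single discharge to the scalp potential (shape and constants assumed).
    ur_t = np.arange(0, 0.002, 1 / fs)
    unitary = np.exp(-ur_t / 0.0005) * np.sin(2 * np.pi * 900 * ur_t)

    # Modelled potential = convolution of discharge rate and unitary response.
    abr = np.convolve(rate, unitary)[: len(t)] / fs

    print(f"peak of modelled response: {abr.max():.3e} (arbitrary units)")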

  12. Tinnitus intensity dependent gamma oscillations of the contralateral auditory cortex.

    Directory of Open Access Journals (Sweden)

    Elsa van der Loo

    Full Text Available BACKGROUND: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. METHODS AND FINDINGS: In unilateral tinnitus patients (N = 15; 10 right, 5 left), source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). CONCLUSION: Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception.
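
    The core of the reported analysis is a correlation between per-patient gamma-band power and Visual Analogue Scale loudness ratings. The Python sketch below reproduces that step on simulated data; the 30-45 Hz band, the Welch power estimate and the simulated signals are assumptions for illustration, not the study's source-analysis pipeline.

    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(0)
    fs = 250                                  # EEG sampling rate (Hz), assumed
    n_subjects, n_samples = 15, fs * 60       # 1 min of resting-state data each

    vas = rng.uniform(2, 9, n_subjects)       # simulated VAS loudness scores
    gamma_power = np.empty(n_subjects)

    for i in range(n_subjects):
        # Simulated source waveform whose gamma content scales with VAS.
        t = np.arange(n_samples) / fs
        signal = (rng.normal(0, 1.0, n_samples)
                  + 0.1 * vas[i] * np.sin(2 * np.pi * 38 * t))
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        band = (freqs >= 30) & (freqs <= 45)
        gamma_power[i] = np.trapz(psd[band], freqs[band])

    r = np.corrcoef(gamma_power, vas)[0, 1]
    print(f"Pearson r between gamma power and VAS loudness: {r:.2f}")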

  13. Functional outcome of auditory implants in hearing loss.

    Science.gov (United States)

    Di Girolamo, S; Saccoccio, A; Giacomini, P G; Ottaviani, F

    2007-01-01

    The auditory implant provides a new mechanism for hearing when a hearing aid is not enough. It is the only medical technology able to functionally restore a human sense, i.e., hearing. The auditory implant is very different from a hearing aid. Hearing aids amplify sound. Auditory implants compensate for damaged or non-working parts of the inner ear because they can directly stimulate the acoustic nerve. There are two principal types of auditory implant: the cochlear implant and the auditory brainstem implant. They have common basic characteristics, but different applications. A cochlear implant attempts to replace a function lost by the cochlea, usually due to an absence of functioning hair cells; the auditory brainstem implant (ABI) is a modification of the cochlear implant, in which the electrode array is placed directly into the brain when the acoustic nerve is no longer able to carry the auditory signal. Different types of deaf or severely hearing-impaired patients choose auditory implants. Both children and adults can be candidates for implants. The best age for implantation is still being debated, but most children who receive implants are between 2 and 6 years old. Earlier implantation seems to yield better outcomes, thanks to neural plasticity. The decision to receive an implant should involve a discussion with many medical specialists and an experienced surgeon.

  14. Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith

    Science.gov (United States)

    Bailey, Frank S.; Yocum, Russell G.

    2015-01-01

    The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…

  15. Functional sex differences in human primary auditory cortex

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W. J.; Willemsen, Antoon T. M.

    2007-01-01

    Background We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a bas

  16. Auditory Dysfunction and Its Communicative Impact in the Classroom.

    Science.gov (United States)

    Friedrich, Brad W.

    1982-01-01

    The origins and nature of auditory dysfunction in school age children and the role of the audiologist in the evaluation of the learning disabled child are reviewed. Specific structures and mechanisms responsible for the reception and perception of auditory signals are specified. (Author/SEW)

  17. Auditory perceptual simulation: Simulating speech rates or accents?

    Science.gov (United States)

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

    When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects.

  18. Use of auditory learning to manage listening problems in children.

    Science.gov (United States)

    Moore, David R; Halliday, Lorna F; Amitay, Sygal

    2009-02-12

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers have debated what aspect of training contributed to the improvement and even whether the claimed improvements reflect primarily a retest effect on the skill measures. Key to understanding this research have been more circumscribed studies of the transfer of learning and the use of multiple control groups to examine auditory and non-auditory contributions to the learning. Significant auditory learning can occur during relatively brief periods of training. As children mature, their ability to train improves, but the relation between the duration of training, amount of learning and benefit remains unclear. Individual differences in initial performance and amount of subsequent learning advocate tailoring training to individual learners. The mechanisms of learning remain obscure, especially in children, but it appears that the development of cognitive skills is of at least equal importance to the refinement of sensory processing. Promotion of retention and transfer of learning are major goals for further research.

  19. Auditory Backward Masking Deficits in Children with Reading Disabilities

    Science.gov (United States)

    Montgomery, Christine R.; Morris, Robin D.; Sevcik, Rose A.; Clarkson, Marsha G.

    2005-01-01

    Studies evaluating temporal auditory processing among individuals with reading and other language deficits have yielded inconsistent findings due to methodological problems (Studdert-Kennedy & Mody, 1995) and sample differences. In the current study, seven auditory masking thresholds were measured in fifty-two 7- to 10-year-old children (26…

  20. A Pilot Study of Auditory Integration Training in Autism.

    Science.gov (United States)

    Rimland, Bernard; Edelson, Stephen M.

    1995-01-01

    The effectiveness of Auditory Integration Training (AIT) in 8 autistic individuals (ages 4-21) was evaluated using repeated multiple criteria assessment over a 3-month period. Compared to matched controls, subjects' scores improved on the Aberrant Behavior Checklist and Fisher's Auditory Problems Checklist. AIT did not decrease sound sensitivity.…

  1. Quantification of the auditory startle reflex in children

    NARCIS (Netherlands)

    Bakker, Mirte J.; Boer, Frits; van der Meer, Johan N.; Koelman, Johannes H. T. M.; Boeree, Thijs; Bour, Lo; Tijssen, Marina A. J.

    2009-01-01

    Objective: To find an adequate tool to assess the auditory startle reflex (ASR) in children. Methods: We investigated the effect of stimulus repetition, gender and age on several quantifications of the ASR. ASRs were elicited by eight consecutive auditory stimuli in 27 healthy children. Electromyog

  2. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  3. Perceptual Load Influences Auditory Space Perception in the Ventriloquist Aftereffect

    Science.gov (United States)

    Eramudugolla, Ranmalee; Kamke, Marc. R.; Soto-Faraco, Salvador; Mattingley, Jason B.

    2011-01-01

    A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the "ventriloquist aftereffect", reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their…

  4. Evolution and function of auditory systems in insects

    Science.gov (United States)

    Stumpner, A.; von Helversen, D.

    2001-05-01

    While the sensing of substrate vibrations is common among arthropods, the reception of sound pressure waves is an adaptation restricted to insects, which has arisen independently several times in different orders. Wherever studied, tympanal organs were shown to derive from chordotonal precursors, which were modified such that mechanosensitive scolopidia became attached to thin cuticular membranes backed by air-filled tracheal cavities (except in lacewings). The behavioural context in which hearing has evolved has strongly determined the design and properties of the auditory system. Hearing organs which have evolved in the context of predator avoidance are highly sensitive, preferentially in a broad range of ultrasound frequencies, which release rapid escape manoeuvres. Hearing in the context of communication does not only require recognition and discrimination of highly specific song patterns but also their localisation. Typically, the spectrum of the conspecific signals matches the best sensitivity of the receiver. Directionality is achieved by means of sophisticated peripheral structures and is further enhanced by neuronal processing. Side-specific gain control typically allows the insect to encode the loudest signal on each side. The filtered information is transmitted to the brain, where the final steps of pattern recognition and localisation occur. The outputs of such filter networks, modulated or gated by further processes (subsumed by the term motivation), trigger command neurones for specific behaviours. Altogether, the many improvements opportunistically evolved at any stage of acoustic information-processing ultimately allow insects to come up with astonishing acoustic performances similar to those achieved by vertebrates.

  5. Parcellation of Human and Monkey Core Auditory Cortex with fMRI Pattern Classification and Objective Detection of Tonotopic Gradient Reversals.

    Science.gov (United States)

    Schönwiesner, Marc; Dechent, Peter; Voit, Dirk; Petkov, Christopher I; Krumbholz, Katrin

    2015-10-01

    Auditory cortex (AC) contains several primary-like, or "core," fields, which receive thalamic input and project to non-primary "belt" fields. In humans, the organization and layout of core and belt auditory fields are still poorly understood, and most auditory neuroimaging studies rely on macroanatomical criteria, rather than functional localization of distinct fields. A myeloarchitectonic method has been suggested recently for distinguishing between core and belt fields in humans (Dick F, Tierney AT, Lutti A, Josephs O, Sereno MI, Weiskopf N. 2012. In vivo functional and myeloarchitectonic mapping of human primary auditory areas. J Neurosci. 32:16095-16105). We propose a marker for core AC based directly on functional magnetic resonance imaging (fMRI) data and pattern classification. We show that a portion of AC in Heschl's gyrus classifies sound frequency more accurately than other regions in AC. Using fMRI data from macaques, we validate that the region where frequency classification performance is significantly above chance overlaps core auditory fields, predominantly A1. Within this region, we measure tonotopic gradients and estimate the locations of the human homologues of the core auditory subfields A1 and R. Our results provide a functional rather than anatomical localizer for core AC. We posit that inter-individual variability in the layout of core AC might explain disagreements between results from previous neuroimaging and cytological studies.
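
    A hedged sketch of the proposed functional marker: score how well multivoxel activity patterns in a candidate region classify stimulus frequency, and treat reliably above-chance regions as putative core auditory cortex. The simulated voxel patterns, the linear SVM and the cross-validation scheme below are illustrative assumptions, not the authors' exact pipeline.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials_per_class, n_voxels = 40, 120
    freq_labels = np.repeat([0, 1], n_trials_per_class)   # e.g. low vs high tones

    # Simulated ROI: a weak frequency-dependent pattern embedded in noise.
    signal = np.outer(freq_labels, rng.normal(0, 0.4, n_voxels))
    patterns = signal + rng.normal(0, 1.0, (2 * n_trials_per_class, n_voxels))

    clf = SVC(kernel="linear")
    scores = cross_val_score(clf, patterns, freq_labels, cv=5)
    print(f"cross-validated frequency-classification accuracy: {scores.mean():.2f}")
    # Accuracy reliably above the 0.5 chance level would mark this region as a
    # candidate core field under the proposed marker.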

  6. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA, 1982, 416-425]. When a “classical” gammatone filterbank was applied within this spectrum-based model, the model largely underestimated human performance at high signal frequencies. However, this limitation could be resolved by employing an auditory filterbank with narrower filters. This novel...
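
    The spectral modulation referred to above follows directly from adding a delayed copy of the signal: y(t) = x(t) + a·x(t − τ) has the power response |H(f)|² = 1 + a² + 2a·cos(2πfτ). The short Python sketch below evaluates this ripple for an arbitrary example gain and delay; it is not the study's auditory model.

    import numpy as np

    a, tau = 0.8, 0.002            # reflection gain and delay (2 ms), example values
    f = np.arange(0, 1001, 125)    # a few example frequencies (Hz)

    # y(t) = x(t) + a*x(t - tau)  ->  |H(f)|^2 = 1 + a^2 + 2a*cos(2*pi*f*tau)
    power_gain = 1 + a**2 + 2 * a * np.cos(2 * np.pi * f * tau)

    for fi, g in zip(f, power_gain):
        print(f"{fi:6.0f} Hz : {10 * np.log10(g):6.1f} dB")
    # Peaks recur every 1/tau = 500 Hz; narrower auditory filters resolve this
    # ripple better, consistent with the model refinement mentioned above.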

  7. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  8. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    Full Text Available INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds, and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included measuring electrophysiological and behavioral auditory processing and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance in some of the behavioral auditory processing tests and higher hearing aid benefit in noisy situations (p < 0.05). No changes were noted for the control group (p > 0.05). CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency, improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, figure-to-ground for verbal sounds and greater benefits in reverberant and noisy environments.

  9. JP-8 jet fuel can promote auditory impairment resulting from subsequent noise exposure in rats.

    Science.gov (United States)

    Fechter, Laurence D; Gearhart, Caroline; Fulton, Sherry; Campbell, Jerry; Fisher, Jeffrey; Na, Kwangsam; Cocker, David; Nelson-Miller, Alisa; Moon, Patrick; Pouyatos, Benoit

    2007-08-01

    We report on the transient and persistent effects of JP-8 jet fuel exposure on auditory function in rats. JP-8 has become the standard jet fuel utilized in the United States and North Atlantic Treaty Organization countries for military use and it is closely related to Jet A fuel, which is used in U.S. domestic aviation. Rats received JP-8 fuel (1000 mg/m(3)) by nose-only inhalation for 4 h and half of them were immediately subjected to an octave band of noise ranging between 97 and 105 dB in different experiments. The noise by itself produces a small, but permanent auditory impairment. The current permissible exposure level for JP-8 is 350 mg/m(3). Additionally, a positive control group received only noise exposure, and a fourth group consisted of untreated control subjects. Exposures occurred either on 1 day or repeatedly on 5 successive days. Impairments in auditory function were assessed using distortion product otoacoustic emissions and compound action potential testing. In other rats, tissues were harvested following JP-8 exposure for assessment of hydrocarbon levels or glutathione (GSH) levels. A single JP-8 exposure by itself at 1000 mg/m(3) did not disrupt auditory function. However, exposure to JP-8 and noise produced an additive disruption in outer hair cell function. Repeated 5-day JP-8 exposure at 1000 mg/m(3) for 4 h produced impairment of outer hair cell function that was most evident at the first postexposure assessment time. Partial though not complete recovery was observed over a 4-week postexposure period. The adverse effects of repeated JP-8 exposures on auditory function were inconsistent, but combined treatment with JP-8 + noise yielded greater impairment of auditory function, and hair cell loss than did noise by itself. Qualitative comparison of outer hair cell loss suggests an increase in outer hair cell death among rats treated with JP-8 + noise for 5 days as compared to noise alone. In most instances, hydrocarbon constituents of the fuel

  10. The Effect of Gender on the N1-P2 Auditory Complex while Listening and Speaking with Altered Auditory Feedback

    Science.gov (United States)

    Swink, Shannon; Stuart, Andrew

    2012-01-01

    The effect of gender on the N1-P2 auditory complex was examined while listening and speaking with altered auditory feedback. Fifteen normal hearing adult males and 15 females participated. N1-P2 components were evoked while listening to self-produced nonaltered and frequency shifted /a/ tokens and during production of /a/ tokens during nonaltered…

  11. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...

  12. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  13. Auditory discrimination of force of impact.

    Science.gov (United States)

    Lutfi, Robert A; Liu, Ching-Ju; Stoelinga, Christophe N J

    2011-04-01

    The auditory discrimination of force of impact was measured for three groups of listeners using sounds synthesized according to first-order equations of motion for the homogeneous, isotropic bar [Morse and Ingard (1968). Theoretical Acoustics pp. 175-191]. The three groups were professional percussionists, nonmusicians, and individuals recruited from the general population without regard to musical background. In the two-interval, forced-choice procedure, listeners chose the sound corresponding to the greater force of impact as the length of the bar varied from one presentation to the next. From the equations of motion, a maximum-likelihood test for the task was determined to be of the form Δlog A + αΔlog f > 0, where A and f are the amplitude and frequency of any one partial and α = 0.5. Relative decision weights on Δlog f were obtained from the trial-by-trial responses of listeners and compared to α. Percussionists generally outperformed the other groups; however, the obtained decision weights of all listeners deviated significantly from α and showed variability within groups far in excess of the variability associated with replication. Providing correct feedback after each trial had little effect on the decision weights. The variability in these measures was comparable to that seen in studies involving the auditory discrimination of other source attributes.
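
    To make the decision rule above concrete, here is a brief sketch (illustrative only, not the study's analysis code; the cue spreads, internal-noise level, and the least-squares weight estimate are assumptions chosen for the example) of an ideal observer applying Δlog A + αΔlog f > 0 with α = 0.5, followed by a crude recovery of the relative decision weight on Δlog f from the simulated trial-by-trial responses.

    import numpy as np

    # Hedged sketch, not the study's code: an ideal observer applying the rule
    # d(log A) + alpha * d(log f) > 0 with alpha = 0.5, plus a crude regression
    # estimate of the relative decision weight on d(log f). Cue spreads and
    # internal noise are arbitrary assumptions for illustration.
    rng = np.random.default_rng(0)
    alpha = 0.5                                  # weight implied by the equations of motion
    n_trials = 2000
    d_log_A = rng.normal(0.0, 0.10, n_trials)    # per-trial difference in log amplitude
    d_log_f = rng.normal(0.0, 0.10, n_trials)    # per-trial difference in log frequency
    noise = rng.normal(0.0, 0.05, n_trials)      # internal (decision) noise

    # Observer reports "greater force" when the weighted cue sum exceeds zero
    responses = (d_log_A + alpha * d_log_f + noise > 0).astype(float)

    # Recover relative weights by regressing signed responses on the two cues
    X = np.column_stack([d_log_A, d_log_f])
    y = 2.0 * responses - 1.0                    # map {0, 1} -> {-1, +1}
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated relative weight on d(log f):", round(w[1] / w[0], 2))  # roughly 0.5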

  14. Happiness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R

    2016-08-01

    Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such an effect may not be limited to negative emotions but may reflect a general depletion of attentional resources, by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction.

  15. Intrinsic modulators of auditory thalamocortical transmission.

    Science.gov (United States)

    Lee, Charles C; Sherman, S Murray

    2012-05-01

    Neurons in layer 4 of the primary auditory cortex receive convergent glutamatergic inputs from thalamic and cortical projections that activate different groups of postsynaptic glutamate receptors. Of particular interest in layer 4 neurons are the Group II metabotropic glutamate receptors (mGluRs), which hyperpolarize neurons postsynaptically via the downstream opening of GIRK channels. This pronounced effect on membrane conductance could influence the neuronal processing of synaptic inputs, such as those from the thalamus, essentially modulating information flow through the thalamocortical pathway. To examine how Group II mGluRs affect thalamocortical transmission, we used an in vitro slice preparation of the auditory thalamocortical pathways in the mouse to examine synaptic transmission under conditions where Group II mGluRs were activated. We found that both pre- and post-synaptic Group II mGluRs are involved in the attenuation of thalamocortical EPSP/Cs. Thus, thalamocortical synaptic transmission is suppressed via the presynaptic reduction of thalamocortical neurotransmitter release and the postsynaptic inhibition of the layer 4 thalamorecipient neurons. This could enable the thalamocortical pathway to autoregulate transmission, via either a gating or gain control mechanism, or both.

  16. Auditory evoked potentials in postconcussive syndrome.

    Science.gov (United States)

    Drake, M E; Weate, S J; Newell, S A

    1996-12-01

    The neuropsychiatric sequelae of minor head trauma have been the source of controversy. Most clinical and imaging studies have shown no alteration after concussion, but neuropsychological and neuropathological abnormalities have been reported. Some changes in neurophysiologic diagnostic tests have been described in postconcussive syndrome. We recorded middle latency auditory evoked potentials (MLR) and slow vertex responses (SVR) in 20 individuals with prolonged cognitive difficulties, behavior changes, dizziness, and headache after concussion. MLRs used alternating-polarity clicks presented monaurally at 70 dB SL at 4 per second, with 40 dB contralateral masking. Five hundred responses were recorded and replicated from Cz-A1 and Cz-A2, with a 50 ms analysis time and a 20-1000 Hz filter band pass. SVRs were recorded with the same montage, but used rarefaction clicks, a 0.5 Hz stimulus rate, a 500 ms analysis time, and a 1-50 Hz filter band pass. Na and Pa MLR components were reduced in amplitude in postconcussion patients. Pa latency was significantly longer in patients than in controls. SVR amplitudes were larger in concussed individuals, but differences in latency and amplitude were not significant. These changes may reflect posttraumatic disturbance in presumed subcortical MLR generators, or in frontal or temporal cortical structures that modulate them. Middle and long-latency auditory evoked potentials may be helpful in the evaluation of postconcussive neuropsychiatric symptoms.

  17. Auditory verbal hallucinations: neuroimaging and treatment.

    Science.gov (United States)

    Bohlken, M M; Hugdahl, K; Sommer, I E C

    2017-01-01

    Auditory verbal hallucinations (AVH) are a frequently occurring phenomenon in the general population and are considered a psychotic symptom when presented in the context of a psychiatric disorder. Neuroimaging literature has shown that AVH are subserved by a variety of alterations in brain structure and function, which primarily concentrate around brain regions associated with the processing of auditory verbal stimuli and with executive control functions. However, the direction of association between AVH and brain function remains equivocal in certain research areas and needs to be carefully reviewed and interpreted. When AVH have significant impact on daily functioning, several efficacious treatments can be attempted such as antipsychotic medication, brain stimulation and cognitive-behavioural therapy. Interestingly, the neural correlates of these treatments largely overlap with brain regions involved in AVH. This suggests that the efficacy of treatment corresponds to a normalization of AVH-related brain activity. In this selected review, we give a compact yet comprehensive overview of the structural and functional neuroimaging literature on AVH, with a special focus on the neural correlates of efficacious treatment.

  18. Selective attention in an insect auditory neuron.

    Science.gov (United States)

    Pollack, G S

    1988-07-01

    Previous work (Pollack, 1986) showed that an identified auditory neuron of crickets, the omega neuron, selectively encodes the temporal structure of an ipsilateral sound stimulus when a contralateral stimulus is presented simultaneously, even though the contralateral stimulus is clearly encoded when it is presented alone. The present paper investigates the physiological basis for this selective response. The selectivity for the ipsilateral stimulus is a result of the apparent intensity difference of ipsi- and contralateral stimuli, which is imposed by auditory directionality; when simultaneous presentation of stimuli from the 2 sides is mimicked by presenting low- and high-intensity stimuli simultaneously from the ipsilateral side, the neuron responds selectively to the high-intensity stimulus, even though the low-intensity stimulus is effective when it is presented alone. The selective encoding of the more intense (= ipsilateral) stimulus is due to intensity-dependent inhibition, which is superimposed on the cell's excitatory response to sound. Because of the inhibition, the stimulus with lower intensity (i.e., the contralateral stimulus) is rendered subthreshold, while the stimulus with higher intensity (the ipsilateral stimulus) remains above threshold. Consequently, the temporal structure of the low-intensity stimulus is filtered out of the neuron's spike train. The source of the inhibition is not known. It is not a consequence of activation of the omega neuron. Its characteristics are not consistent with those of known inhibitory inputs to the omega neuron.

  19. Talker-specific auditory imagery during reading

    Science.gov (United States)

    Nygaard, Lynne C.; Duke, Jessica; Kawar, Kathleen; Queen, Jennifer S.

    2004-05-01

    The present experiment was designed to determine if auditory imagery during reading includes talker-specific characteristics such as speaking rate. Following Kosslyn and Matt (1977), participants were familiarized with two talkers during a brief prerecorded conversation. One talker spoke at a fast speaking rate and one spoke at a slow speaking rate. During familiarization, participants were taught to identify each talker by name. At test, participants were asked to read two passages and told that either the slow or fast talker wrote each passage. In one condition, participants were asked to read each passage aloud, and in a second condition, they were asked to read each passage silently. Participants pressed a key when they had completed reading the passage, and reading times were collected. Reading times were significantly slower when participants thought they were reading a passage written by the slow talker than when reading a passage written by the fast talker. However, the effects of speaking rate were only present in the reading-aloud condition. Additional experiments were conducted to investigate the role of attention to talker's voice during familiarization. These results suggest that readers may engage in auditory imagery while reading that preserves perceptual details of an author's voice.

  20. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia

    Directory of Open Access Journals (Sweden)

    Hironori Kuga, M.D.

    2016-10-01

    We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.

  1. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence

    Science.gov (United States)

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria

    2016-01-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence—the coincidence of sound elements in and across time—is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals (“stochastic figure-ground”: SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as “figures” popping out of a stochastic “ground.” Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the “figure” from the randomly varying “ground.” Neural sources underlying this bottom-up driven figure-ground segregation were localized to planum temporale and the intraparietal sulcus, demonstrating that this area, outside the “classic” auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682

  2. Central projection of auditory receptors in the prothoracic ganglion of the bushcricket Psorodonotus illyricus (Tettigoniidae): computer-aided analysis of the end branch pattern.

    Science.gov (United States)

    Ebendt, R; Friedel, J; Kalmring, K

    1994-01-01

    The projection patterns of morphologically and functionally identified auditory and auditory-vibratory receptor cells of the receptor organs (the crista acustica and the intermediate organ) in the foreleg of the tettigoniid Psorodonotus illyricus were investigated with combined recording and staining techniques, followed by histological examination and morphometric measurements. With the application of a computer program (AutoCAD), three-dimensional reconstructions of the axon end branches of receptor cells within the neuropile of the anterior Ring Tract (aRT) were made in order to determine the entire shape of each end branch, the pattern and density of the end branches, and the positions of the target areas within the auditory neuropile. Clear differences were found between the different functional types of receptors.

  3. Bilateral Collicular Interaction: Modulation of Auditory Signal Processing in Amplitude Domain

    Science.gov (United States)

    Fu, Zi-Ying; Wang, Xin; Jen, Philip H.-S.; Chen, Qi-Cai

    2012-01-01

    In the ascending auditory pathway, the inferior colliculus (IC) receives and integrates excitatory and inhibitory inputs from many lower auditory nuclei, intrinsic projections within the IC, contralateral IC through the commissure of the IC and from the auditory cortex. All these connections make the IC a major center for subcortical temporal and spectral integration of auditory information. In this study, we examine bilateral collicular interaction in modulating amplitude-domain signal processing using electrophysiological recording, acoustic and focal electrical stimulation. Focal electrical stimulation of one (ipsilateral) IC produces widespread inhibition (61.6%) and focused facilitation (9.1%) of responses of neurons in the other (contralateral) IC, while 29.3% of the neurons were not affected. Bilateral collicular interaction produces a decrease in the response magnitude and an increase in the response latency of inhibited IC neurons but produces opposite effects on the response of facilitated IC neurons. These two groups of neurons are not separately located and are tonotopically organized within the IC. The modulation effect is most effective at low sound level and is dependent upon the interval between the acoustic and electric stimuli. The focal electrical stimulation of the ipsilateral IC compresses or expands the rate-level functions of contralateral IC neurons. The focal electrical stimulation also produces a shift in the minimum threshold and dynamic range of contralateral IC neurons for as long as 150 minutes. The degree of bilateral collicular interaction is dependent upon the difference in the best frequency between the electrically stimulated IC neurons and modulated IC neurons. These data suggest that bilateral collicular interaction mainly changes the ratio between excitation and inhibition during signal processing so as to sharpen the amplitude sensitivity of IC neurons. Bilateral interaction may be also involved in acoustic

  4. Behavioral detection of intra-cortical microstimulation in the primary and secondary auditory cortex of cats

    Directory of Open Access Journals (Sweden)

    Zhenling eZhao

    2015-04-01

    Full Text Available Although neural responses to sound stimuli have been thoroughly investigated in various areas of the auditory cortex, the results of electrophysiological recordings cannot establish a causal link between neural activation and brain function. Electrical microstimulation, which can selectively perturb neural activity in specific parts of the nervous system, is an important tool for exploring the organization and function of brain circuitry. To date, studies describing the behavioral effects of electrical stimulation have largely been conducted in the primary auditory cortex. In this study, to investigate potential differences in the effects of electrical stimulation on different cortical areas, we measured the behavioral performance of cats in detecting intra-cortical microstimulation (ICMS) delivered in the primary and secondary auditory fields (A1 and A2, respectively). After the cats were trained to perform a Go/No-Go task cued by sounds, we found that they could also learn to perform the task cued by ICMS; furthermore, detection of the ICMS was similarly sensitive in A1 and A2. Presenting wideband noise together with ICMS substantially decreased the performance of cats in detecting ICMS in A1 and A2, consistent with a noise masking effect on the sensation elicited by the ICMS. In contrast, presenting ICMS with pure tones in the spectral receptive field of the electrode-implanted cortical site reduced ICMS detection performance in A1 but not A2. Therefore, activation of A1 and A2 neurons may produce different qualities of sensation. Overall, our study revealed that ICMS-induced neural activity can be easily integrated into an animal’s behavioral decision process, which has implications for the development of cortical auditory prosthetics.

  5. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.

  6. The Study of the Frequency of Self-Care Strategies against Auditory Hallucinations

    Directory of Open Access Journals (Sweden)

    Mahin Nadem

    2012-03-01

    Full Text Available Background: In schizophrenic clients, self-care strategies against auditory hallucinations can decrease the disturbances resulting from hallucination. This study aimed to assess the frequency of self-care strategies against auditory hallucinations in paranoid schizophrenic patients hospitalized in Shafa Hospital. Materials and Method: This was a descriptive study of 201 patients with paranoid schizophrenia hospitalized in a psychiatry unit in Rasht, selected by convenience sampling. The gathered data consisted of two parts: the first covered demographic characteristics, and the second was a self-report questionnaire of 38 items about self-care strategies. Results: There were statistically significant relationships between demographic variables and the knowledge and effect of self-care strategies against auditory hallucinations: sex with the physical domain (p0.07), marital status with the cognitive domain (p>0.07), and living status with the behavioural domain (p>0.01). Command hallucinations made up 53.2% of the reported auditory hallucinations; furthermore, the most effective self-care strategies against auditory hallucinations were from the physical domain, and substance abuse (82.1%) was the most effective strategy in this domain. Conclusion: Clients with paranoid schizophrenia mostly used strategies from the physical domain against auditory hallucinations, and this result highlights their need for appropriate nursing intervention. Instruction and guidance about selecting effective self-care strategies against auditory ha

  7. Translation and adaptation of functional auditory performance indicators (FAPI)

    Directory of Open Access Journals (Sweden)

    Karina Ferreira

    2011-12-01

    Full Text Available Work with deaf children has gained new attention since the expectation and goal of therapy have expanded to language development and subsequent language learning. Many clinical tests were developed for evaluation of speech sound perception in young children in response to the need for accurate assessment of the hearing skills developed through the use of individual hearing aids or cochlear implants. These tests also allow evaluation of the rehabilitation program. However, few of these tests are available in Portuguese. Evaluation with the Functional Auditory Performance Indicators (FAPI) generates a child's functional auditory skills profile, which lists auditory skills in an integrated and hierarchical order. It has seven hierarchical categories, including sound awareness, meaningful sound, auditory feedback, sound source localizing, auditory discrimination, short-term auditory memory, and linguistic auditory processing. FAPI evaluation allows the therapist to map the child's hearing profile performance, determine the target for increasing the hearing abilities, and develop an effective therapeutic plan. Objective: Since the FAPI is an American test, the inventory was adapted for application in the Brazilian population. Material and Methods: The translation was done following the steps of translation and back translation, and reproducibility was evaluated. Four translated versions (two originals and two back-translated) were compared, and revisions were done to ensure language adaptation and grammatical and idiomatic equivalence. Results: The inventory was duly translated and adapted. Conclusion: Further studies about the application of the translated FAPI are necessary to make the test practicable in Brazilian clinical use.

  8. Auditory-perceptual learning improves speech motor adaptation in children.

    Science.gov (United States)

    Shiller, Douglas M; Rochon, Marie-Lyne

    2014-08-01

    Auditory feedback plays an important role in children's speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5- to 7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children's ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation.

  9. Missing a trick: Auditory load modulates conscious awareness in audition.

    Science.gov (United States)

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing.

  10. A corollary discharge maintains auditory sensitivity during sound production.

    Science.gov (United States)

    Poulet, James F A; Hedwig, Berthold

    2002-08-22

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  11. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  12. Cochlear Responses and Auditory Brainstem Response Functions in Adults with Auditory Neuropathy/ Dys-Synchrony and Individuals with Normal Hearing

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2007-06-01

    Full Text Available Background and Aim: Physiologic measures of cochlear and auditory nerve function may be of assistance in distinguishing hearing disorders due primarily to auditory nerve impairment from those due primarily to cochlear hair cell dysfunction. The goal of the present study was to measure cochlear responses (otoacoustic emissions and cochlear microphonics) and auditory brainstem responses in adults with auditory neuropathy/dys-synchrony and in subjects with normal hearing. Materials and Methods: Patients were 16 adults (32 ears) aged 14-30 years with auditory neuropathy/dys-synchrony and 16 individuals aged 16-30 years, from both sexes. The results of transient otoacoustic emission, cochlear microphonic and auditory brainstem response measures were compared in both groups, and the effects of age, sex, ear and degree of hearing loss were studied. Results: The pure-tone average was 48.1 dB HL in the auditory neuropathy/dys-synchrony group, and low-tone loss and flat audiograms were more frequent than other audiogram shapes. Transient otoacoustic emissions were present in all auditory neuropathy/dys-synchrony subjects except two cases, and their average was similar in both studied groups. The latency and amplitude of the largest reversed cochlear microphonic response were significantly higher in auditory neuropathy/dys-synchrony patients than in control subjects. The correlation between cochlear microphonic amplitude and degree of hearing loss was not significant, and age had a significant effect on some cochlear microphonic measures. Auditory brainstem responses were absent in auditory neuropathy/dys-synchrony patients even at low stimulus rates. Conclusion: In adults whose speech understanding is worse than predicted from the degree of hearing loss and who are suspected of auditory neuropathy/dys-synchrony, low-tone loss and flat audiograms are more frequent. Usually the auditory brainstem response is absent in

  13. Diffusion tensor imaging and MR morphometry of the central auditory pathway and auditory cortex in aging.

    Science.gov (United States)

    Profant, O; Škoch, A; Balogová, Z; Tintěra, J; Hlinka, J; Syka, J

    2014-02-28

    Age-related hearing loss (presbycusis) is caused mainly by the hypofunction of the inner ear, but recent findings point also toward a central component of presbycusis. We used MR morphometry and diffusion tensor imaging (DTI) with a 3T MR system with the aim of studying the state of the central auditory system in a group of elderly subjects (>65 years) with mild presbycusis, in a group of elderly subjects with expressed presbycusis and in young controls. Cortical reconstruction, volumetric segmentation and auditory pathway tractography were performed. Three parameters were evaluated by morphometry: the volume of the gray matter, the surface area of the gyrus and the thickness of the cortex. In all experimental groups the surface area and gray matter volume were larger on the left side in Heschl's gyrus and planum temporale and slightly larger in the gyrus frontalis superior, whereas they were larger on the right side in the primary visual cortex. Almost all of the measured parameters were significantly smaller in the elderly subjects in Heschl's gyrus, planum temporale and gyrus frontalis superior. Aging did not change the side asymmetry (laterality) of the gyri. In the central part of the auditory pathway above the inferior colliculus, a trend toward an effect of aging was present in the axial vector of the diffusion (L1) variable of DTI, with increased values observed in elderly subjects. A trend toward a decrease of L1 on the left side, which was more pronounced in the elderly groups, was observed. The effect of hearing loss was present in subjects with expressed presbycusis as a trend toward an increase of the radial vectors (L2, L3) in the white matter under Heschl's gyrus. These results suggest that in addition to peripheral changes, changes in the central part of the auditory system in elderly subjects are also present; however, the extent of hearing loss does not play a significant role in the central changes.

  14. Air pollution is associated with brainstem auditory nuclei pathology and delayed brainstem auditory evoked potentials

    OpenAIRE

    Calderón-Garcidueñas, Lilian; D’Angiulli, Amedeo; Kulesza, Randy J.; Torres-Jardón, Ricardo; Osnaya, Norma; Romero, Lina; Keefe, Sheyla; Herritt, Lou; Brooks, Diane M.; Avila-Ramirez, Jose; Delgado-Chávez, Ricardo; Medina-Cortina, Humberto; González-González, Luis Oscar

    2011-01-01

    We assessed brainstem inflammation in children exposed to air pollutants by comparing brainstem auditory evoked potentials (BAEPs) and blood inflammatory markers in children aged 96.3 ± 8.5 months from a highly polluted city (n=34) versus a low-pollution city (n=17). The brainstems of nine children with accidental deaths were also examined. Children from the highly polluted environment had significant delays in wave III (t(50)=17.038; p

  15. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  16. Continuity of visual and auditory rhythms influences sensorimotor coordination.

    Directory of Open Access Journals (Sweden)

    Manuel Varlet

    Full Text Available People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performance when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms, and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences in performance between the visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.

  17. Tactile stimulation and hemispheric asymmetries modulate auditory perception and neural responses in primary auditory cortex.

    Science.gov (United States)

    Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T

    2013-10-01

    Although multisensory integration has been an important area of recent research, most studies have focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior just as effectively, which we studied here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay, which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity.

  18. Tiapride for the treatment of auditory hallucinations in schizophrenia

    Directory of Open Access Journals (Sweden)

    Sagar Karia

    2013-01-01

    Full Text Available Hallucinations are considered core symptoms of psychosis by both the International Classification of Diseases-10 (ICD-10) and the Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revision (DSM-IV-TR). The most common type of hallucination in patients with schizophrenia is auditory, followed by visual hallucinations. A few patients with schizophrenia have persistent auditory hallucinations even though all other features of schizophrenia have improved. Here, we report two cases in which tiapride was useful as an add-on drug for treating persistent auditory hallucinations.

  19. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    DEFF Research Database (Denmark)

    Gil Carvajal, Juan Camilo; Cubick, Jens; Santurette, Sébastien;

    2016-01-01

    whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings...... decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between...

  20. A loudspeaker-based room auralization system for auditory research

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel

    to systematically study the signal processing of realistic sounds by normal-hearing and hearing-impaired listeners, a flexible, reproducible and fully controllable auditory environment is needed. A loudspeaker-based room auralization (LoRA) system was developed in this thesis to provide virtual auditory...... environments (VAEs) with an array of loudspeakers. The LoRA system combines state-of-the-art acoustic room models with sound-field reproduction techniques. Limitations of these two techniques were taken into consideration together with the limitations of the human auditory system to localize sounds...

  1. Frequency tuning of individual auditory receptors in female mosquitoes (Diptera, Culicidae).

    Science.gov (United States)

    Lapshin, D N; Vorontsov, D D

    2013-08-01

    The acoustic sensory organs in mosquitoes (Johnston organs) have been thoroughly studied; yet, to date, no data are available on the individual tuning properties of the numerous receptors that convert sound-induced vibrations into electrical signals. All previous measurements of frequency tuning in mosquitoes have been based on the acoustically evoked field potentials recorded from the entire Johnston organ. Here, we present evidence that individual receptors have various frequency tunings and that differently tuned receptors are unequally represented within the Johnston organ. We devised a positive feedback stimulation paradigm as a new and effective approach to test individual receptor properties. Alongside the glass microelectrode technique, the positive feedback stimulation paradigm has allowed us to obtain data on receptor tuning in females from three mosquito species: Anopheles messeae, Aedes excrucians and Culex pipiens pipiens. The existence of individually tuned auditory receptors implies that frequency analysis in mosquitoes may be possible.

  2. Predictive uncertainty in auditory sequence processing

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine...... the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using...... in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis...

  3. A computer model of auditory stream segregation.

    Science.gov (United States)

    Beauvois, M W; Meddis, R

    1991-08-01

    A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
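
    A minimal sketch of the mechanism described above (an illustrative reimplementation, not the published model; the channel count, time constant, noise level, and toy input are assumptions): each channel's excitation is leaky-integrated so that it accumulates and dissipates over time, and a simple attentional stage selects the channel with the greatest activity at every step.

    import numpy as np

    # Hedged sketch of a peripheral streaming mechanism: "noisy" channel outputs,
    # leaky integration of excitation, and attention to the most active channel.
    # Parameter values are arbitrary choices for illustration.
    def leaky_streaming(drive, dt=0.001, tau=0.1, noise_sd=0.05, seed=0):
        """drive: array (n_channels, n_samples) of band-pass channel activity."""
        rng = np.random.default_rng(seed)
        n_ch, n_t = drive.shape
        excitation = np.zeros(n_ch)
        attended = np.empty(n_t, dtype=int)
        for t in range(n_t):
            noisy = drive[:, t] + rng.normal(0.0, noise_sd, n_ch)
            excitation += dt * (-excitation / tau + noisy)   # accumulate and dissipate
            attended[t] = int(np.argmax(excitation))         # attend to the peak channel
        return attended

    # Toy input: an alternating high/low tone sequence driving two channels
    n_t = 600
    drive = np.zeros((2, n_t))
    drive[0, 0::4] = 1.0      # frequent "A" tones in channel 0
    drive[1, 2::8] = 1.0      # rarer "B" tones in channel 1
    print(np.bincount(leaky_streaming(drive)))  # samples spent attending each channel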

  4. Hearing Restoration with Auditory Brainstem Implant

    Science.gov (United States)

    NAKATOMI, Hirofumi; MIYAWAKI, Satoru; KIN, Taichi; SAITO, Nobuhito

    2016-01-01

    Auditory brainstem implant (ABI) technology attempts to restore hearing in deaf patients caused by bilateral cochlear nerve injury through the direct stimulation of the brainstem, but many aspects of the related mechanisms remain unknown. The unresolved issues can be grouped into three topics: which patients are the best candidates; which type of electrode should be used; and how to improve restored hearing. We evaluated our experience with 11 cases of ABI placement. We found that if at least seven of eleven electrodes of the MED-EL ABI are effectively placed in a patient with no deformation of the fourth ventricle, open set sentence recognition of approximately 20% and closed set word recognition of approximately 65% can be achieved only with the ABI. Appropriate selection of patients for ABI placement can lead to good outcomes. Further investigation is required regarding patient selection criteria and methods of surgery for effective ABI placement. PMID:27464470

  5. Changes of brainstem auditory and somatosensory evoked

    Institute of Scientific and Technical Information of China (English)

    Yang Jian

    2000-01-01

    Objective: To investigate the characteristics and clinical value of evoked potentials in the late infantile form of metachromatic leukodystrophy. Methods: Brainstem auditory and somatosensory evoked potentials were recorded in 6 patients and compared with the results of CT scanning. Results: All 6 patients had abnormal BAEP and MNSEP results. The main abnormal parameters in the BAEP were latency prolongation of wave I and inter-peak latency prolongation of Ⅰ-Ⅲ and Ⅰ-Ⅴ. The abnormal features of the MNSEP were low amplitude and absence of wave N9 and inter-peak latency prolongation of N9-N13 and N13-N20, but no significant change of N20 amplitude. The results also revealed that abnormal changes in the BAEP and MNSEP appeared earlier than those in CT. Conclusion: Recording of the BAEP and MNSEP in the late infantile form of metachromatic leukodystrophy may reveal abnormalities of conductive function in the nervous system early and may be a useful diagnostic method.

  6. Discrimination of auditory stimuli during isoflurane anesthesia.

    Science.gov (United States)

    Rojas, Manuel J; Navas, Jinna A; Greene, Stephen A; Rector, David M

    2008-10-01

    Deep isoflurane anesthesia initiates a burst suppression pattern in which high-amplitude bursts are preceded by periods of nearly silent electroencephalogram. The burst suppression ratio (BSR) is the percentage of suppression (silent electroencephalogram) during the burst suppression pattern and is one parameter used to assess anesthesia depth. We investigated cortical burst activity in rats in response to different auditory stimuli presented during the burst suppression state. We noted a rapid appearance of bursts and a significant decrease in the BSR during stimulation. The BSR changes were distinctive for the different stimuli applied, and the BSR decreased significantly more when stimulated with a voice familiar to the rat as compared with an unfamiliar voice. These results show that the cortex can show differential sensory responses during deep isoflurane anesthesia.
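
    To make the BSR definition above concrete, the following sketch computes it from a synthetic trace as the percentage of time the signal stays below an amplitude threshold for a minimum duration; the threshold, minimum suppression duration, and the toy signal are assumptions for illustration, not parameters from the study.

    import numpy as np

    # Hedged sketch (assumed threshold-based definition): the burst suppression
    # ratio (BSR) as the percentage of samples in which the EEG stays below an
    # amplitude threshold for at least a minimum duration.
    def burst_suppression_ratio(eeg, fs, threshold_uv=5.0, min_suppression_s=0.5):
        below = np.abs(eeg) < threshold_uv
        min_len = int(min_suppression_s * fs)
        suppressed = np.zeros_like(below)
        run_start = None
        # mark only runs of sub-threshold samples that are long enough
        for i, b in enumerate(np.append(below, False)):
            if b and run_start is None:
                run_start = i
            elif not b and run_start is not None:
                if i - run_start >= min_len:
                    suppressed[run_start:i] = True
                run_start = None
        return 100.0 * suppressed.mean()

    # Toy example: 10 s at 250 Hz alternating 1-s bursts and 1-s suppression
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    eeg = np.where((t % 2) < 1, 50 * np.sin(2 * np.pi * 10 * t), 1.0)
    print(round(burst_suppression_ratio(eeg, fs), 1))  # about 50% for this toy signal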

  7. Low power adder based auditory filter architecture.

    Science.gov (United States)

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlear devices are powered by batteries and should have a long working life so that the device does not need to be replaced every few years. Hence, devices with low power consumption are required. In cochlear devices there are numerous filters, each responsible for a different band of frequencies, which helps in identifying speech signals across the audible range. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to a TSMC 65 nm technology node. The standard ASIC design methodology was adapted to carry out the power analysis. The proposed FIR filter architecture reduced leakage power by 15% and increased performance by 2.76%.
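
    As a rough illustration of the multiplierless, LUT-based idea (a distributed-arithmetic-style sketch in Python rather than the paper's Verilog RTL; the tap values, bit width, and function names are assumptions, and the exact architecture in the paper may differ), each output sample below is assembled from LUT lookups indexed by bit-slices of the input history and combined using shifts and adds only.

    # Hedged sketch of one common multiplierless LUT scheme (distributed
    # arithmetic) for an FIR filter; valid here for non-negative integer samples.
    def build_lut(coeffs):
        """LUT entry k holds the sum of coefficients selected by the bits of k."""
        n = len(coeffs)
        return [sum(c for i, c in enumerate(coeffs) if (k >> i) & 1)
                for k in range(1 << n)]

    def da_fir(x, coeffs, n_bits=8):
        lut = build_lut(coeffs)
        n = len(coeffs)
        history = [0] * n                        # most recent n input samples
        out = []
        for sample in x:
            history = [int(sample)] + history[:-1]
            acc = 0
            for b in range(n_bits):              # one LUT access per input bit
                addr = sum(((history[i] >> b) & 1) << i for i in range(n))
                acc += lut[addr] << b            # shift-and-add, no multipliers
            out.append(acc)
        return out

    coeffs = [1, 2, 3, 2, 1]                     # toy integer filter taps
    x = [5, 0, 0, 0, 0, 0]                       # scaled impulse input
    print(da_fir(x, coeffs))                     # impulse response scaled by 5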

  8. Resting Heart Rate and Auditory Evoked Potential

    Directory of Open Access Journals (Sweden)

    Simone Fiuza Regaçone

    2015-01-01

    Full Text Available The objective of this study was to evaluate the association between resting heart rate (HR) and the components of auditory event-related potentials (ERPs) at rest in women. We investigated 21 healthy female university students between 18 and 24 years old. We performed a complete audiological evaluation, measured heart rate for 10 minutes at rest (Polar RS800CX heart rate monitor), and performed ERP analysis (discrepancy in frequency and duration). There was a moderate negative correlation of the N1 and P3a components with resting HR and a strong positive correlation of the P2 and N2 components with resting HR. Larger ERP components were associated with higher resting HR.

  9. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models were developed to explore the noise propagation mechanisms associated with noise attenuation and the transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data, which provide explicit external ear, ear canal, middle ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as prioritizing noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  10. Genetics of auditory mechano-electrical transduction.

    Science.gov (United States)

    Michalski, Nicolas; Petit, Christine

    2015-01-01

    The hair bundles of cochlear hair cells play a central role in the auditory mechano-electrical transduction (MET) process. The identification of MET components and of associated molecular complexes by biochemical approaches is impeded by the very small number of hair cells within the cochlea. In contrast, human and mouse genetics have proven to be particularly powerful. The study of inherited forms of deafness led to the discovery of several essential proteins of the MET machinery, which are currently used as entry points to decipher the associated molecular networks. Notably, MET relies not only on the MET machinery but also on several elements ensuring the proper sound-induced oscillation of the hair bundle or the ionic environment necessary to drive the MET current. Here, we review the most significant advances in the molecular bases of the MET process that emerged from the genetics of hearing.

  11. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
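
    A toy illustration of predictive uncertainty as Shannon entropy: given a context, the entropy of the estimated next-event distribution is high when many continuations are roughly equally likely and low when one continuation dominates. The bigram counts, smoothing, and alphabet below are made up for illustration and stand in for the unsupervised variable-order Markov model used in the study (Python):

      import math

      def predictive_entropy(context, bigram_counts, alphabet):
          """Shannon entropy (bits) of the next-event distribution given a context:
          H = -sum_x p(x | context) * log2 p(x | context).
          Add-one smoothing keeps the distribution well defined; this is a toy
          stand-in for the study's unsupervised variable-order Markov model."""
          counts = {x: bigram_counts.get((context, x), 0) + 1 for x in alphabet}
          total = sum(counts.values())
          return -sum((c / total) * math.log2(c / total) for c in counts.values())

      # toy alphabet of scale degrees and a few observed bigram counts
      alphabet = range(1, 8)
      bigrams = {(5, 1): 10, (5, 4): 3, (5, 6): 2}
      print(predictive_entropy(5, bigrams, alphabet))   # larger value = more uncertain context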

  12. Effects of pitch on auditory number comparisons.

    Science.gov (United States)

    Campbell, Jamie I D; Scheepers, Florence

    2015-05-01

    Three experiments investigated interactions between auditory pitch and the numerical quantities represented by spoken English number words. In Experiment 1, participants heard a pair of sequential auditory numbers in the range zero to ten. They pressed a left-side or right-side key to indicate if the second number was lower or higher in numerical value. The vocal pitches of the two numbers either ascended or descended so that pitch change was congruent or incongruent with number change. The error rate was higher when pitch and number were incongruent relative to congruent trials. The distance effect on RT (i.e., slower responses for numerically near than far number pairs) occurred with pitch ascending but not descending. In Experiment 2, to determine if these effects depended on the left/right spatial mapping of responses, participants responded "yes" if the second number was higher and "no" if it was lower. Again, participants made more number comparison errors when number and pitch were incongruent, but there was no distance × pitch order effect. To pursue the latter, in Experiment 3, participants were tested with response buttons assigned left-smaller and right-larger ("normal" spatial mapping) or the reverse mapping. Participants who received normal mapping first presented a distance effect with pitch ascending but not descending as in Experiment 1, whereas participants who received reverse mapping first presented a distance effect with pitch descending but not ascending. We propose that the number and pitch dimensions of stimuli both activated spatial representations and that strategy shifts from quantity comparison to order processing were induced by spatial incongruities.

  13. Damage of the auditory system associated with acute blast trauma.

    Science.gov (United States)

    Roberto, M; Hamernik, R P; Turrentine, G A

    1989-05-01

    This paper reviews the results of several studies on the effects of blast wave exposure on the auditory system of the chinchilla, the pig, and the sheep. The chinchillas were exposed at peak sound pressure levels of approximately 160 dB under well-controlled laboratory conditions. A modified shock tube was used to generate the blast waves. The pigs and sheep were exposed under field conditions in an instrumented hard-walled enclosure. Blast trauma was induced by the impact of a single explosive projectile. The peak sound pressure levels varied between 178 and 209 dB. All animals were killed immediately following exposure, and their temporal bones were removed for fixation and histologic analysis using light microscopy and scanning electron microscopy. Middle ears were examined visually for damage to the conductive system. There were well-defined differences in susceptibility to acoustic trauma among species. However, common findings in each species were the acute mechanical fracture and separation of the organ of Corti from the basilar membrane, and tympanic membrane and ossicular failure.

  14. The TLC: a novel auditory nucleus of the mammalian brain.

    Science.gov (United States)

    Saldaña, Enrique; Viñuela, Antonio; Marshall, Allen F; Fitzpatrick, Douglas C; Aparicio, M-Auxiliadora

    2007-11-28

    We have identified a novel nucleus of the mammalian brain and termed it the tectal longitudinal column (TLC). Basic histologic stains, tract-tracing techniques and three-dimensional reconstructions reveal that the rat TLC is a narrow, elongated structure spanning the midbrain tectum longitudinally. This paired nucleus is located close to the midline, immediately dorsal to the periaqueductal gray matter. It occupies what has traditionally been considered the most medial region of the deep superior colliculus and the most medial region of the inferior colliculus. The TLC differs from the neighboring nuclei of the superior and inferior colliculi and the periaqueductal gray by its distinct connections and cytoarchitecture. Extracellular electrophysiological recordings show that TLC neurons respond to auditory stimuli with physiologic properties that differ from those of neurons in the inferior or superior colliculi. We have identified the TLC in rodents, lagomorphs, carnivores, nonhuman primates, and humans, which indicates that the nucleus is conserved across mammals. The discovery of the TLC reveals an unexpected level of longitudinal organization in the mammalian tectum and raises questions as to the participation of this mesencephalic region in essential, yet completely unexplored, aspects of multisensory and/or sensorimotor integration.

  15. Reorganization of auditory cortex in early-deaf people: functional connectivity and relationship to hearing aid use.

    Science.gov (United States)

    Shiell, Martha M; Champoux, François; Zatorre, Robert J

    2015-01-01

    Cross-modal reorganization after sensory deprivation is a model for understanding brain plasticity. Although it is a well-documented phenomenon, we still know little of the mechanisms underlying it or the factors that constrain and promote it. Using fMRI, we identified visual motion-related activity in 17 early-deaf and 17 hearing adults. We found that, in the deaf, the posterior superior temporal gyrus (STG) was responsive to visual motion. We compared functional connectivity of this reorganized cortex between groups to identify differences in functional networks associated with reorganization. In the deaf more than the hearing, the STG displayed increased functional connectivity with a region in the calcarine fissure. We also explored the role of hearing aid use, a factor that may contribute to variability in cross-modal reorganization. We found that both the cross-modal activity in STG and the functional connectivity between STG and calcarine cortex correlated with duration of hearing aid use, supporting the hypothesis that residual hearing affects cross-modal reorganization. We conclude that early auditory deprivation alters not only the organization of auditory regions but also the interactions between auditory and primary visual cortex and that auditory input, as indexed by hearing aid use, may inhibit cross-modal reorganization in early-deaf people.

  16. Subdivisions of the auditory midbrain (n. mesencephalicus lateralis, pars dorsalis) in zebra finches using calcium-binding protein immunocytochemistry.

    Directory of Open Access Journals (Sweden)

    Priscilla Logerot

    Full Text Available The midbrain nucleus mesencephalicus lateralis pars dorsalis (MLd) is thought to be the avian homologue of the central nucleus of the mammalian inferior colliculus. As such, it is a major relay in the ascending auditory pathway of all birds and in songbirds mediates the auditory feedback necessary for the learning and maintenance of song. To clarify the organization of MLd, we applied three calcium binding protein antibodies to tissue sections from the brains of adult male and female zebra finches. The staining patterns resulting from the application of parvalbumin, calbindin and calretinin antibodies differed from each other and in different parts of the nucleus. Parvalbumin-like immunoreactivity was distributed throughout the whole nucleus, as defined by the totality of the terminations of brainstem auditory afferents; in other words, parvalbumin-like immunoreactivity defines the boundaries of MLd. The staining patterns of parvalbumin, calbindin and calretinin defined two regions of MLd: inner (MLd.I) and outer (MLd.O). MLd.O largely surrounds MLd.I and is distinct from the surrounding intercollicular nucleus. Unlike the case in some non-songbirds, however, the two MLd regions do not correspond to the terminal zones of the projections of the brainstem auditory nuclei angularis and laminaris, which have been found to overlap substantially throughout the nucleus in zebra finches.

  17. Potassium currents in auditory hair cells of the frog basilar papilla.

    Science.gov (United States)

    Smotherman, M S; Narins, P M

    1999-06-01

    The whole-cell patch-clamp technique was used to identify and characterize ionic currents in isolated hair cells of the leopard frog basilar papilla (BP). This end organ is responsible for encoding the upper limits of a frog's spectral sensitivity (1.25-2.0 kHz in the leopard frog). Isolated BP hair cells are the smallest hair cells in the frog auditory system, with spherical cell bodies typically less than 20 microm in diameter and exhibiting whole-cell capacitances of 4-7 pF. Hair cell zero-current resting potentials (Vz) varied around a mean of -65 mV. All hair cells possessed a non-inactivating, voltage-dependent calcium current (I(Ca)) that activates above a threshold of -55 mV. Similarly all hair cells possessed a rapidly activating, outward, calcium-dependent potassium current (I(K)(Ca)). Most hair cells also possessed a slowly activating, outward, voltage-dependent potassium current (I(K)), which is approximately 80% inactive at the hair cell Vz, and a fast-activating, inward-rectifying potassium current (I(K1)) which actively contributes to setting Vz. In a small subset of cells I(K) was replaced by a fast-inactivating, voltage-dependent potassium current (I(A)), which strongly resembled the A-current observed in hair cells of the frog sacculus and amphibian papilla. Most cells have very similar ionic currents, suggesting that the BP consists largely of one homogeneous population of hair cells. The kinetic properties of the ionic currents present (in particular the very slow I(K)) argue against electrical tuning, a specialized spectral filtering mechanism reported in the hair cells of birds, reptiles, and amphibians, as a contributor to frequency selectivity of this organ. Instead BP hair cells reflect a generalized strategy for the encoding of high-frequency auditory information in a primitive, mechanically tuned, terrestrial vertebrate auditory organ.

  18. HAIR CELL-LIKE CELL GENERATION INDUCED BY NATURE CULTURE OF ADULT RAT AUDITORY EPITHELIUM

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Hair cells are the mechanosensory cells that convert sound and motion signals into electrical impulses in the cochlear and vestibular end organs of the inner ear. Although mature mammals normally do not generate new hair cells, recent in vivo and in vitro studies have demonstrated mitotic activity and immature-looking hair cells in mammalian vestibular epithelia after exposure to ototoxic drugs [1-3], suggesting that vestibular hair cell regeneration in mammals may be inducible. However, the possibility of auditory hair ce...

  19. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearing-impaired listener, but can only partially reveal the mechanisms...... that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing-impaired perception for sound textures – temporally...... homogeneous sounds such as rain, birds, or fire. It has been suggested that sound texture perception is mediated by time-averaged statistics measured from early auditory representations (McDermott et al., 2013). Changes to early auditory processing, such as broader “peripheral” filters or reduced compression...

  20. Oscillatory Cortical Network Involved in Auditory Verbal Hallucinations in Schizophrenia

    NARCIS (Netherlands)

    van Lutterveld, Remko; Hillebrand, Arjan; Diederen, Kelly M. J.; Daalman, Kirstin; Kahn, Rene S.; Stam, Cornelis J.; Sommer, Iris E. C.

    2012-01-01

    Background: Auditory verbal hallucinations (AVH), a prominent symptom of schizophrenia, are often highly distressing for patients. Better understanding of the pathogenesis of hallucinations could increase therapeutic options. Magnetoencephalography (MEG) provides direct measures of neuronal activity

  1. Ion channel noise can explain firing correlation in auditory nerves.

    Science.gov (United States)

    Moezzi, Bahar; Iannella, Nicolangelo; McDonnell, Mark D

    2016-10-01

    Neural spike trains are commonly characterized as a Poisson point process. However, the Poisson assumption is a poor model for spiking in auditory nerve fibres because it is known that interspike intervals display positive correlation over long time scales and negative correlation over shorter time scales. We have therefore developed a biophysical model based on the well-known Meddis model of the peripheral auditory system, to produce simulated auditory nerve fibre spiking statistics that more closely match the firing correlations observed in empirical data. We achieve this by introducing biophysically realistic ion channel noise to an inner hair cell membrane potential model that includes fractal fast potassium channels and deterministic slow potassium channels. We succeed in producing simulated spike train statistics that match empirically observed firing correlations. Our model thus replicates macro-scale stochastic spiking statistics in the auditory nerve fibres due to modeling stochasticity at the micro-scale of potassium channels.
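
    The sketch below illustrates the general idea of ion channel noise: with a finite population of stochastically gating potassium channels, the number of open channels fluctuates and injects noise into a simple membrane potential model. All parameter values, rates and the overall structure are illustrative assumptions and do not reproduce the Meddis-based model or its fractal channel kinetics (Python):

      import numpy as np

      def channel_noise_trace(n_channels=200, dt=1e-5, t_end=0.1,
                              alpha=50.0, beta=20.0, g_single=1e-9, e_k=-0.075,
                              g_leak=5e-9, e_leak=-0.06, c_m=1e-11, seed=0):
          """Membrane potential of a toy cell whose potassium conductance comes from a
          finite pool of two-state (open/closed) channels gated by rates alpha/beta.
          Finite channel numbers make the open count, and hence the voltage, fluctuate.
          All values are illustrative, not taken from the Meddis-based model."""
          rng = np.random.default_rng(seed)
          steps = int(t_end / dt)
          v, n_open = e_leak, 0
          trace = np.empty(steps)
          for i in range(steps):
              opened = rng.binomial(n_channels - n_open, alpha * dt)   # closed -> open
              closed = rng.binomial(n_open, beta * dt)                 # open -> closed
              n_open += opened - closed
              i_k = n_open * g_single * (v - e_k)                      # fluctuating K+ current
              i_leak = g_leak * (v - e_leak)
              v += -dt * (i_k + i_leak) / c_m                          # forward-Euler update
              trace[i] = v
          return trace

      print(channel_noise_trace()[-5:])   # last few samples of the noisy voltage trace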

  2. Auditory hallucinations in childhood : associations with adversity and delusional ideation

    NARCIS (Netherlands)

    Bartels-Velthuis, A. A.; van de Willige, G.; Jenner, J. A.; Wiersma, D.; van Os, J.

    2012-01-01

    Background. Previous work suggests that exposure to childhood adversity is associated with the combination of delusions and hallucinations. In the present study, associations between (severity of) auditory vocal hallucinations (AVH) and (i) social adversity [traumatic experiences (TE) and stressful

  3. Modality specific neural correlates of auditory and somatic hallucinations

    Science.gov (United States)

    Shergill, S; Cameron, L; Brammer, M; Williams, S; Murray, R; McGuire, P

    2001-01-01

    Somatic hallucinations occur in schizophrenia and other psychotic disorders, although auditory hallucinations are more common. Although the neural correlates of auditory hallucinations have been described in several neuroimaging studies, little is known of the pathophysiology of somatic hallucinations. Functional magnetic resonance imaging (fMRI) was used to compare the distribution of brain activity during somatic and auditory verbal hallucinations, occurring at different times in a 36 year old man with schizophrenia. Somatic hallucinations were associated with activation in the primary somatosensory and posterior parietal cortex, areas that normally mediate tactile perception. Auditory hallucinations were associated with activation in the middle and superior temporal cortex, areas involved in processing external speech. Hallucinations in a given modality seem to involve areas that normally process sensory information in that modality.

 PMID:11606687

  4. Use of transcranial direct current stimulation for the treatment of auditory hallucinations of schizophrenia – a systematic review

    Directory of Open Access Journals (Sweden)

    Pondé PH

    2017-02-01

    Full Text Available Pedro H Pondé,1 Eduardo P de Sena,2 Joan A Camprodon,3 Arão Nogueira de Araújo,2 Mário F Neto,4 Melany DiBiasi,5 Abrahão Fontes Baptista,6,7 Lidia MVR Moura,8 Camila Cosmo2,3,6,9,10 1Dynamics of Neuromusculoskeletal System Laboratory, Bahiana School of Medicine and Public Health, 2Postgraduate Program in Interactive Process of Organs and Systems, Federal University of Bahia, Salvador, Bahia, Brazil; 3Laboratory for Neuropsychiatry and Neuromodulation and Transcranial Magnetic Stimulation Clinical Service, Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; 4Scientific Training Center Department, School of Medicine of Bahia, Federal University of Bahia, Salvador, Bahia, Brazil; 5Neuromodulation Center, Spaulding Rehabilitation Hospital, Harvard Medical School, Boston, MA, USA; 6Functional Electrostimulation Laboratory, Biomorphology Department, 7Postgraduate Program on Medicine and Human Health, School of Medicine, Federal University of Bahia, Salvador, Bahia, Brazil; 8Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; 9Center for Technological Innovation in Rehabilitation, Federal University of Bahia, 10Bahia State Health Department (SESAB, Salvador, Bahia, Brazil Introduction: Auditory hallucinations are defined as experiences of auditory perceptions in the absence of a provoking external stimulus. They are the most prevalent symptoms of schizophrenia with high capacity for chronicity and refractoriness during the course of disease. The transcranial direct current stimulation (tDCS – a safe, portable, and inexpensive neuromodulation technique – has emerged as a promising treatment for the management of auditory hallucinations. Objective: The aim of this study is to analyze the level of evidence in the literature available for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods: A systematic review was performed

  5. Auditory short-term memory activation during score reading.

    Directory of Open Access Journals (Sweden)

    Veerle L Simoens

    Full Text Available Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  6. Contextual modulation of primary visual cortex by auditory signals

    Science.gov (United States)

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  7. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between over-lapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
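
    A minimal sketch of the penalty-score idea described above: candidate schedules of overlapping messages are scored, with larger penalties for overlaps between perceptually similar sounds, and the lowest-scoring schedule is kept. The penalty weights, the brute-force search and the use of pitch class as a similarity cue are illustrative assumptions, not the dissertation's heuristics (Python):

      from itertools import permutations

      def overlap(a, b):
          """Temporal overlap (s) between messages given as (start, duration, pitch_class)."""
          (s1, d1, _), (s2, d2, _) = a, b
          return max(0.0, min(s1 + d1, s2 + d2) - max(s1, s2))

      def penalty(schedule):
          """Heuristic penalty: overlaps are penalized, more so when the overlapping
          messages share a pitch class (a stand-in for perceptual-similarity cues)."""
          score = 0.0
          for i in range(len(schedule)):
              for j in range(i + 1, len(schedule)):
                  ov = overlap(schedule[i], schedule[j])
                  if ov > 0:
                      score += ov * (3.0 if schedule[i][2] == schedule[j][2] else 1.0)
          return score

      def best_schedule(messages, slot=0.25):
          """Brute-force search over message orderings on a fixed start-time grid,
          keeping the schedule with the lowest penalty score."""
          best, best_score = None, float("inf")
          for order in permutations(messages):
              sched = [(k * slot, dur, pitch) for k, (dur, pitch) in enumerate(order)]
              score = penalty(sched)
              if score < best_score:
                  best, best_score = sched, score
          return best, best_score

      print(best_schedule([(0.6, "A"), (0.4, "A"), (0.5, "B")]))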

  8. Visual change detection recruits auditory cortices in early deafness.

    Science.gov (United States)

    Bottari, Davide; Heimler, Benedetta; Caclin, Anne; Dalmolin, Anna; Giard, Marie-Hélène; Pavani, Francesco

    2014-07-01

    Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent the auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in the early deaf was paired with a reduction of the response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing the visual information and in comparing on-line the upcoming visual events, indicating that cross-modally recruited auditory cortices can reach this level of computation.

  9. Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2

    OpenAIRE

    2016-01-01

    Introduction “Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action” (American Diabetes Association). Previous literature has reported a connection between diabetes mellitus and hearing impairment. There is a dearth of literature on auditory temporal resolution ability in individuals with diabetes mellitus type 2. Objective The main objective of the present study was to assess auditory temporal resolution a...

  10. Auditory stream formation affects comodulation masking release retroactively

    DEFF Research Database (Denmark)

    Dau, Torsten; Ewert, Stephan; Oxenham, A. J.

    2009-01-01

    in terms of the sequence of "postcursor" flanking bands forming a perceptual stream with the original flanking bands, resulting in perceptual segregation of the flanking bands from the masker. The results are consistent with the idea that modulation analysis occurs within, not across, auditory objects......, and that across-frequency CMR only occurs if the on-frequency and flanking bands fall within the same auditory object or stream....

  11. Auditory neuropathy spectrum disorder in a child with albinism

    Directory of Open Access Journals (Sweden)

    Mayur Bhat

    2016-01-01

    Full Text Available Albinism is a congenital disorder characterized by complete or partial absence of pigment in the skin, eyes, and hair due to absent or defective melanin production. As a result, disruption is also seen in the auditory pathways, along with other areas. Therefore, the aim of the present study is to highlight the underlying auditory neural deficits seen in albinism and to discuss the role of the audiologist in these cases.

  12. The plastic ear and perceptual relearning in auditory spatial perception.

    Science.gov (United States)

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days) performance recovers to some extent and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question as to the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prosthesis.

  13. Auditory hypersensitivity in children and teenagers with autistic spectrum disorder

    OpenAIRE

    2004-01-01

    OBJECTIVE: To verify if the clinical behavior of auditory hypersensitivity, reported in interviews with parents/caregivers and therapists/teachers of 46 children and teenagers suffering from autistic spectrum disorder, correspond to audiological findings. METHOD: The clinical diagnosis for auditory hypersensitivity was investigated by means of an interview. Subsequently, a test of the acoustic stapedial reflex was conducted, and responses to intense acoustic stimulus in open field were observ...

  14. Auditory stream segregation in children with Asperger syndrome

    OpenAIRE

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E. (Eira); Nieminen-von Wendt, T.; Kujala, T. (Tiia)

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically deve...

  15. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'.

  16. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through the use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration corresponding to training) with all feedback removed. Visual cues yielded a mean percentage error of 11.5% (SD ± 7.0%); auditory cues a mean percentage error of 12.9% (SD ± 11.8%). Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, the mean accuracy of subjects approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.

  17. The plastic ear and perceptual relearning in auditory spatial perception.

    Directory of Open Access Journals (Sweden)

    Simon eCarlile

    2014-08-01

    Full Text Available The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days) performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question as to the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prosthesis.

  18. A unique cellular scaling rule in the avian auditory system.

    Science.gov (United States)

    Corfield, Jeremy R; Long, Brendan; Krilow, Justin M; Wylie, Douglas R; Iwaniuk, Andrew N

    2016-06-01

    Although it is clear that neural structures scale with body size, the mechanisms of this relationship are not well understood. Several recent studies have shown that the relationship between neuron numbers and brain (or brain region) size are not only different across mammalian orders, but also across auditory and visual regions within the same brains. Among birds, similar cellular scaling rules have not been examined in any detail. Here, we examine the scaling of auditory structures in birds and show that the scaling rules that have been established in the mammalian auditory pathway do not necessarily apply to birds. In galliforms, neuronal densities decrease with increasing brain size, suggesting that auditory brainstem structures increase in size faster than neurons are added; smaller brains have relatively more neurons than larger brains. The cellular scaling rules that apply to auditory brainstem structures in galliforms are, therefore, different to that found in primate auditory pathway. It is likely that the factors driving this difference are associated with the anatomical specializations required for sound perception in birds, although there is a decoupling of neuron numbers in brain structures and hair cell numbers in the basilar papilla. This study provides significant insight into the allometric scaling of neural structures in birds and improves our understanding of the rules that govern neural scaling across vertebrates.

  19. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki eIto

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  20. [Analysis of auditory information in the brain of the cetacean].

    Science.gov (United States)

    Popov, V V; Supin, A Ia

    2006-01-01

    A distinctive feature of the cetacean brain is the exceptional development of the auditory neural centres. The location of the projection sensory areas, including the auditory area, in the cetacean cerebral cortex differs essentially from that in other mammals. The EP characteristics indicated the presence of several functional divisions in the auditory cortex. Physiological studies of the cetacean auditory centres were mainly performed using the EP technique. Of the several types of EPs, the short-latency auditory EP was most thoroughly studied. In cetaceans, it is characterised by exceptionally high temporal resolution, with an integration time of about 0.3 ms corresponding to a cut-off frequency of 1700 Hz. This far exceeds the temporal resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans was measured using a number of variants of the masking technique. The acuity of frequency selectivity in cetaceans exceeds that of most terrestrial mammals (except bats). This acute frequency selectivity allows differentiation among the finest spectral patterns of auditory signals.

  1. Auditory Neuropathy: Findings of Behavioral, Physiological and Neurophysiological Tests

    Directory of Open Access Journals (Sweden)

    Mohammad Farhadi

    2006-12-01

    Full Text Available Background and Aim: Auditory neuropathy (AN) can be diagnosed by an abnormal auditory brainstem response (ABR) in the presence of normal cochlear microphonics (CM) and otoacoustic emissions (OAEs). The aim of this study was to investigate the ABR and other electrodiagnostic test results of 6 patients suspected of AN with problems in speech recognition. Materials and Methods: This cross-sectional study was conducted on 6 AN patients of different ages evaluated by pure tone audiometry, speech discrimination score (SDS), immittance audiometry, electrocochleography, ABR, middle latency response (MLR), late latency response (LLR), and OAEs. Results: Behavioral pure tone audiometric tests showed moderate to profound hearing loss. SDSs were disproportionately poor relative to the pure tone thresholds. All patients had normal tympanograms but absent acoustic reflexes. CMs and OAEs were within normal limits. There was no contralateral suppression of OAEs. None of the cases had a normal ABR or MLR, although an LLR was recorded in 4. Conclusion: All patients in this study are typical cases of auditory neuropathy. Despite abnormal input, the LLR remained normal, indicating differences among auditory evoked potentials in the neural synchrony they require. These findings suggest that the auditory cortex may play a role in regulating the presentation of deficient signals along the auditory pathways at the primary stages.

  2. Auditory function in individuals within Leber's hereditary optic neuropathy pedigrees.

    Science.gov (United States)

    Rance, Gary; Kearns, Lisa S; Tan, Johanna; Gravina, Anthony; Rosenfeld, Lisa; Henley, Lauren; Carew, Peter; Graydon, Kelley; O'Hare, Fleur; Mackey, David A

    2012-03-01

    The aims of this study are to investigate whether auditory dysfunction is part of the spectrum of neurological abnormalities associated with Leber's hereditary optic neuropathy (LHON) and to determine the perceptual consequences of auditory neuropathy (AN) in affected listeners. Forty-eight subjects confirmed by genetic testing as having one of four mitochondrial mutations associated with LHON (mt11778, mtDNA14484, mtDNA14482 and mtDNA3460) participated. Thirty-two of these had lost vision, and 16 were asymptomatic at the point of data collection. While the majority of individuals showed normal sound detection, >25% (of both symptomatic and asymptomatic participants) showed electrophysiological evidence of AN with either absent or severely delayed auditory brainstem potentials. Abnormalities were observed for each of the mutations, but subjects with the mtDNA11778 type were the most affected. Auditory perception was also abnormal in both symptomatic and asymptomatic subjects, with >20% of cases showing impaired detection of auditory temporal (timing) cues and >30% showing abnormal speech perception both in quiet and in the presence of background noise. The findings of this study indicate that a relatively high proportion of individuals with the LHON genetic profile may suffer functional hearing difficulties due to neural abnormality in the central auditory pathways.

  3. An anatomical and functional topography of human auditory cortical areas.

    Science.gov (United States)

    Moerel, Michelle; De Martino, Federico; Formisano, Elia

    2014-01-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that-whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis-the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  4. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Abstract Background In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG. Results We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  5. An anatomical and functional topography of human auditory cortical areas

    Directory of Open Access Journals (Sweden)

    Michelle eMoerel

    2014-07-01

    Full Text Available While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that - whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis - the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e. myelination) as well as of functional properties (e.g. broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  6. Influence of Auditory and Haptic Stimulation in Visual Perception

    Directory of Open Access Journals (Sweden)

    Shunichi Kawabata

    2011-10-01

    Full Text Available While many studies have shown that visual information affects perception in the other modalities, little is known about how auditory and haptic information affect visual perception. In this study, we investigated how auditory, haptic, or combined auditory and haptic stimulation affects visual perception. We used a behavioral task based on the subjects observing the phenomenon of two identical visual objects moving toward each other, overlapping and then continuing their original motion. Subjects may perceive the objects as either streaming through each other or bouncing and reversing their direction of motion. With only the visual motion stimulus, subjects usually report the objects as streaming, whereas if a sound or flash is played when the objects touch each other, subjects report the objects as bouncing (the Bounce-Inducing Effect). In this study, “auditory stimulation”, “haptic stimulation” or “haptic and auditory stimulation” were presented at various times relative to the visual overlap of the objects. Our results show that the bouncing rate was highest when haptic and auditory stimulation were presented together. This result suggests that the Bounce-Inducing Effect is enhanced by simultaneous multimodal presentation alongside the visual motion. In the future, a neuroscience approach (e.g., TMS, fMRI) may be required to elucidate the brain mechanisms underlying this effect.

  7. Verrucous Carcinoma in External Auditory Canal – A Rare Case

    Directory of Open Access Journals (Sweden)

    Md Zillur Rahman

    2013-05-01

    Full Text Available Verrucous carcinoma is a variant of squamous cell carcinoma. It is of low-grade malignancy and rarely presents with distant metastasis. The oral cavity is the commonest site of this tumour; other sites are the larynx, oesophagus and genitalia. Verrucous carcinoma in the external auditory canal is extremely rare. We present a 45-year-old woman who came to the ENT & Head Neck Surgery department of Delta Medical College, Dhaka, Bangladesh, with a discharging left ear and impaired hearing on the same side for 7 years. Otoscopic examination showed a mass occupying almost the whole of the external auditory canal; the overlying skin was thickened, papillary and blackish. Cytology of scrapings from the external auditory canal showed hyperkeratosis and parakeratosis. The bone of the external auditory canal was found to be eroded in some parts. Excision of the mass was done under the microscope, and split-thickness skin grafting was performed in the external auditory canal. The mass was diagnosed as verrucous carcinoma on histopathological examination. Afterwards she was given radiotherapy. Six months of follow-up showed no recurrence and healthy epithelialization of the external auditory canal.

  8. Selective increase of auditory cortico-striatal coherence during auditory-cued Go/NoGo discrimination learning.

    Directory of Open Access Journals (Sweden)

    Andreas L. Schulz

    2016-01-01

    Full Text Available Goal-directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task-relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still poorly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttle box to discriminate between falling and rising frequency-modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.
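
    Field-field coherence of the kind analyzed here can be estimated with standard spectral tools; the toy example below computes the coherence between two synthetic local field potentials that share an oscillatory component. The sampling rate, frequencies and signal construction are illustrative and unrelated to the gerbil recordings (Python):

      import numpy as np
      from scipy.signal import coherence

      rng = np.random.default_rng(0)
      fs = 1000.0                                    # assumed sampling rate (Hz)
      t = np.arange(0, 10, 1 / fs)

      # two toy local field potentials sharing an 8-Hz component plus independent noise,
      # standing in for auditory-cortex and ventral-striatum recordings
      shared = np.sin(2 * np.pi * 8 * t)
      lfp_cortex = shared + rng.normal(0, 1, t.size)
      lfp_striatum = 0.8 * shared + rng.normal(0, 1, t.size)

      f, cxy = coherence(lfp_cortex, lfp_striatum, fs=fs, nperseg=1024)
      print(f[np.argmax(cxy)], cxy.max())            # peak coherence near 8 Hz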

  9. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia.

    Science.gov (United States)

    Kuga, Hironori; Onitsuka, Toshiaki; Hirano, Yoji; Nakamura, Itta; Oribe, Naoya; Mizuhara, Hiroaki; Kanai, Ryota; Kanba, Shigenobu; Ueno, Takefumi

    2016-10-01

    Recent MRI studies have shown that schizophrenia is characterized by reductions in brain gray matter, which progress in the acute state of the disease. Cortical circuitry abnormalities in gamma oscillations, such as deficits in the auditory steady state response (ASSR) to gamma-frequency (>30-Hz) stimulation, have also been reported in schizophrenia patients. In the current study, we investigated neural responses to click stimulation using BOLD signals. We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute-state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.
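
    A minimal sketch of a general-linear-model analysis of the kind mentioned above: the BOLD time course is regressed onto stimulus regressors by ordinary least squares and the resulting beta weights index response amplitude. The toy boxcar regressor and noise levels are illustrative assumptions, not the study's preprocessing or design (Python):

      import numpy as np

      def glm_betas(bold, regressors):
          """Fit bold = X @ beta + error by ordinary least squares and return beta.
          Regressors are assumed to be stimulus time courses (ideally convolved with a
          haemodynamic response function); this is not the study's actual pipeline."""
          X = np.column_stack([np.ones(len(bold))] + list(regressors))   # intercept + conditions
          beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
          return beta

      # toy single-voxel example with one 80-Hz click-train block regressor
      t = np.arange(200)
      reg_80hz = ((t // 20) % 2).astype(float)       # alternating stimulation blocks
      bold = 0.5 * reg_80hz + np.random.default_rng(1).normal(0, 0.2, t.size)
      print(glm_betas(bold, [reg_80hz]))             # beta for the regressor should be near 0.5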

  10. Using Facebook to Reach People Who Experience Auditory Hallucinations

    Science.gov (United States)

    Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout, including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience

  11. Auditory place theory and frequency difference limen

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jialu

    2006-01-01

    Since the place theory of hearing was proposed in the 19th century, it has faced the objection that the place code is far too coarse a mechanism to account for the finest frequency difference limens. A place-correlation model, which takes the energy distribution of a pure tone in neighboring auditory-filter bands into full account, is presented in this paper. The model, based on the place theory and on experimental results from psychophysical tuning curves of hearing, can readily explain the finest difference limen for frequency (about 0.02 or 0.3% at 1000 Hz). Using a standard 1/3-octave filter bank, the relationship between △f, the offset of an input pure tone's frequency from the centre frequency of the K-th filter band, and △E, the output intensity difference between the K-th and (K+1)-th filters, was established in order to show the fine frequency-detection ability of the filter bank. This model can also be used to extract the fundamental frequency of speech and to measure the frequency of a pure tone precisely.
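
    The ΔE-Δf relation at the heart of this model can be illustrated numerically. The sketch below passes a pure tone through two adjacent 1/3-octave bands, here approximated by Butterworth band-pass filters (an assumption; the paper's exact filter implementation is not specified in this record), and reports the output-level difference as the tone moves away from the K-th centre frequency.

```python
# Rough numerical illustration of the place-correlation idea: the output
# energy difference (dE) between adjacent 1/3-octave bands varies smoothly
# with the offset (df) of a pure tone from the K-th centre frequency, so a
# small frequency shift maps onto a measurable level difference.
# Butterworth band-pass filters stand in for the standard filter bank.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000.0
fc_k = 1000.0                          # centre frequency of band K (Hz)
fc_k1 = fc_k * 2 ** (1 / 3)            # centre frequency of band K+1

def third_octave_sos(fc, fs, order=4):
    lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)
    return butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")

sos_k = third_octave_sos(fc_k, fs)
sos_k1 = third_octave_sos(fc_k1, fs)
t = np.arange(0, 0.5, 1 / fs)

for df in (0.0, 3.0, 10.0, 30.0):      # tone offset df from fc_k (Hz)
    tone = np.sin(2 * np.pi * (fc_k + df) * t)
    e_k = np.sum(sosfilt(sos_k, tone) ** 2)
    e_k1 = np.sum(sosfilt(sos_k1, tone) ** 2)
    delta_e_db = 10 * np.log10(e_k / e_k1)
    print(f"df = {df:5.1f} Hz -> dE = {delta_e_db:6.2f} dB")
```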

  12. Theory of Auditory Thresholds in Primates

    Science.gov (United States)

    Harrison, Michael J.

    2001-03-01

    The influence of thermal pressure fluctuations at the tympanic membrane has previously been investigated as a possible determinant of the threshold of hearing in humans (L.J. Sivian and S.D. White, J. Acoust. Soc. Am. 4, 288 (1933)). More recent work has focussed more precisely on the relation between statistical mechanics and sensory signal processing by biological means in creatures' brains (W. Bialek, in "Physics of Biological Systems: from molecules to species", H. Flyvberg et al. (Eds), p. 252; Springer 1997). Clinical data on the frequency dependence of hearing thresholds in humans and other primates (W.C. Stebbins, "The Acoustic Sense of Animals", Harvard 1983) have long been available. I have derived an expression for the frequency dependence of hearing thresholds in primates, including humans, by first calculating the frequency dependence of thermal pressure fluctuations at the eardrum from damped normal modes excited in model ear canals of given simple geometry. I then show that most features of the clinical data are directly related to the frequency dependence of the ratio of thermal noise pressure arising from outside to that arising from within the masking bandwidth which signals must dominate in order to be sensed. The higher intensity of threshold signals in primates smaller than humans, which is clinically observed over much but not all of the human auditory spectrum, is shown to arise from their smaller meatus dimensions.
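
    The derivation itself is not reproduced in this record. As a toy illustration of the "damped normal modes in model ear canals of simple geometry" step, the sketch below lists quarter-wave resonance frequencies of a one-end-closed tube; the canal lengths are assumed values chosen only to show why a smaller meatus pushes its resonances to higher frequencies.

```python
# Toy illustration only: resonance frequencies of a one-end-closed tube,
# the simplest "model ear canal of given simple geometry". Canal lengths
# are assumed values, not measurements from the paper.
c = 343.0                                        # speed of sound in air (m/s)
for label, length_m in [("longer canal, ~25 mm", 0.025),
                        ("shorter canal, ~15 mm", 0.015)]:
    modes = [(2 * n - 1) * c / (4 * length_m) for n in (1, 2, 3)]
    freqs = ", ".join(f"{f / 1000:.1f} kHz" for f in modes)
    print(f"{label}: first modes ~ {freqs}")
```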

  13. Elastic modulus of cetacean auditory ossicles.

    Science.gov (United States)

    Tubelli, Andrew A; Zosuls, Aleks; Ketten, Darlene R; Mountain, David C

    2014-05-01

    In order to model the hearing capabilities of marine mammals (cetaceans), it is necessary to understand the mechanical properties, such as elastic modulus, of the middle ear bones in these species. Biologically realistic models can be used to investigate the biomechanics of hearing in cetaceans, much of which is currently unknown. In the present study, the elastic moduli of the auditory ossicles (malleus, incus, and stapes) of eight species of cetacean, two baleen whales (mysticete) and six toothed whales (odontocete), were measured using nanoindentation. The two groups of mysticete ossicles overall had lower average elastic moduli (35.2 ± 13.3 GPa and 31.6 ± 6.5 GPa) than the groups of odontocete ossicles (53.3 ± 7.2 GPa to 62.3 ± 4.7 GPa). Interior bone generally had a higher modulus than cortical bone by up to 36%. The effects of freezing and formalin-fixation on elastic modulus were also investigated, although samples were few and no clear trend could be discerned. The high elastic modulus of the ossicles and the differences in the elastic moduli between mysticetes and odontocetes are likely specializations in the bone for underwater hearing.
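
    Nanoindentation moduli are conventionally obtained from the unloading stiffness via Oliver-Pharr-style relations; a minimal sketch of that arithmetic is given below. The stiffness, contact area, and Poisson's ratios are placeholder assumptions, not values taken from the cetacean ossicle data.

```python
# Minimal Oliver-Pharr style calculation: reduced modulus E_r from the
# unloading contact stiffness S and projected contact area A, then the
# sample modulus E_s after removing the diamond indenter contribution.
# All numeric inputs are placeholders, not data from the ossicle study.
import math

S = 6.4e4                   # unloading contact stiffness (N/m), placeholder
A = 1.0e-12                 # projected contact area (m^2), placeholder
beta = 1.034                # geometry factor for a Berkovich tip
E_i, nu_i = 1141e9, 0.07    # diamond indenter modulus and Poisson ratio
nu_s = 0.3                  # assumed Poisson ratio of the bone sample

E_r = math.sqrt(math.pi) / (2 * beta) * S / math.sqrt(A)    # reduced modulus
E_s = (1 - nu_s ** 2) / (1 / E_r - (1 - nu_i ** 2) / E_i)   # sample modulus
print(f"E_r ~ {E_r / 1e9:.1f} GPa, E_s ~ {E_s / 1e9:.1f} GPa")
```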

  14. Structured Counseling for Auditory Dynamic Range Expansion.

    Science.gov (United States)

    Gold, Susan L; Formby, Craig

    2017-02-01

    A structured counseling protocol is described that, when combined with low-level broadband sound therapy from bilateral sound generators, offers audiologists a new tool for facilitating the expansion of the auditory dynamic range (DR) for loudness. The protocol and its content are specifically designed to address and treat problems that impact hearing-impaired persons who, due to their reduced DRs, may be limited in the use and benefit of amplified sound from hearing aids. The reduced DRs may result from elevated audiometric thresholds and/or reduced sound tolerance as documented by lower-than-normal loudness discomfort levels (LDLs). Accordingly, the counseling protocol is appropriate for challenging and difficult-to-fit persons with sensorineural hearing losses who experience loudness recruitment or hyperacusis. Positive treatment outcomes for individuals with the former and latter conditions are highlighted in this issue by incremental shifts (improvements) in LDL and/or categorical loudness judgments, associated reduced complaints of sound intolerance, and functional improvements in daily communication, speech understanding, and quality of life leading to improved hearing aid benefit, satisfaction, and aided sound quality, posttreatment.

  15. Cholesteatoma invasion into the internal auditory canal.

    Science.gov (United States)

    Migirov, Lela; Bendet, Erez; Kronenberg, Jona

    2009-05-01

    Cholesteatoma invasion into the internal auditory canal (IAC) is rare and usually results in irreversible, complete hearing loss and facial paralysis on the affected side. This retrospective study examines the clinical characteristics of seven patients with cholesteatoma invading the IAC, analyzes possible routes of the cholesteatoma's extension, and describes the surgical approaches used and patient outcomes. Extension to the IAC was via the supralabyrinthine route in most patients. A subtotal petrosectomy, a translabyrinthine approach, or a middle cranial fossa approach combined with radical mastoidectomy was required for the complete removal of the cholesteatoma. All seven patients presented with some preoperative facial nerve palsy. The facial nerve was decompressed in four patients and facial nerve repair was performed in three others, two by hypoglossal-facial anastomosis and one by a greater auricular nerve interposition graft. All patients ended up with total deafness in the operated ear. At 1 year following surgery, facial nerve function was House-Brackmann grade III in six cases and grade II in one. In conclusion, cholesteatoma invading the IAC is a separate entity with characteristic clinical presentations; it requires a unique surgical approach and results in significant morbidity, such as total deafness in the operated ear and impaired facial movement.

  16. Expectation and attention in hierarchical auditory prediction.

    Science.gov (United States)

    Chennu, Srivas; Noreika, Valdas; Gueorguiev, David; Blenkmann, Alejandro; Kochen, Silvia; Ibáñez, Agustín; Owen, Adrian M; Bekinschtein, Tristan A

    2013-07-03

    Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding.

  17. Theta oscillations accompanying concurrent auditory stream segregation.

    Science.gov (United States)

    Tóth, Brigitta; Kocsis, Zsuzsanna; Urbán, Gábor; Winkler, István

    2016-08-01

    The ability to isolate a single sound source among concurrent sources is crucial for veridical auditory perception. The present study investigated the event-related oscillations evoked by complex tones that could be perceived as a single sound, and by tonal complexes with cues promoting the perception of two concurrent sounds through inharmonicity, onset asynchrony, and/or a perceived source-location difference of the component tones. In separate task conditions, participants performed a visual change detection task (visual control), watched a silent movie (passive listening), or reported for each tone whether they perceived one or two concurrent sounds (active listening). In two time windows, the amplitude of theta oscillation was modulated by the presence vs. absence of the cues: 60-350 ms / 6-8 Hz (early) and 350-450 ms / 4-8 Hz (late). The early response appeared in both the passive and the active listening conditions; it did not closely match the task performance; and it had a fronto-central scalp distribution. The late response was only elicited in the active listening condition; it closely matched the task performance; and it had a centro-parietal scalp distribution. The neural processes reflected by these responses are probably involved in the processing of concurrent sound segregation cues, in sound categorization, and in response preparation and monitoring. The current results are compatible with the notion that theta oscillations mediate some of the processes involved in concurrent sound segregation.

  18. Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Koji Inui

    Full Text Available Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflects inhibition via interneurons, using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak prepulse (a 5 dB increase for 1 ms). The time course of the inhibition, evaluated by prepulses presented 10-800 ms before the test stimulus, showed at least two temporally distinct inhibitions, peaking at approximately 20-60 and 600 ms, that presumably reflect IPSPs produced by fast-spiking, parvalbumin-positive cells and by somatostatin-positive Martinotti cells, respectively. In another experiment, we confirmed that the degree of the inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and the prepulse-evoked inhibition reflect activation in two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.

  19. Information processing in auditory cortex

    Institute of Scientific and Technical Information of China (English)

    王晓勤

    2009-01-01

    In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex include two types of important transformations: the non-isomorphic transformation of acoustic features and the transformation from acoustical to perceptual dimensions. Neural representations in auditory cortex are also modulated by auditory feedback and vocal control signals during speaking or vocalization. The challenges facing auditory neuroscientists and biomedical engineers are to understand the neural coding mechanisms in the brain underlying such transformations. I will use recent findings from my laboratory to illustrate how acoustic information is processed in the primate auditory cortex and discuss its implications for neural processing of speech and music in the brain as well as for the design of neural prosthetic devices such as cochlear implants. We have used a combination of neurophysiological techniques and quantitative engineering tools to investigate these problems.

  20. Compression of auditory space during forward self-motion.

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    Full Text Available BACKGROUND: Spatial inputs from the auditory periphery can change with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and react appropriately to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information about the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward or backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased as the acceleration increased. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial

  1. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.

  2. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Science.gov (United States)

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
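
    A minimal sketch of the general peak-picking idea (keep only a fixed number of the largest time-frequency bins per second) is shown below. It uses a plain Fourier spectrogram; the auditory-spectrogram variant and the resynthesis step used in the study are not reproduced, and the FFT settings are assumptions.

```python
# Minimal sketch of an "acoustic sketch": compute a Fourier spectrogram,
# keep only the N largest time-frequency bins per second, zero the rest.
# The study's auditory-spectrogram variant and resynthesis step are not
# reproduced; peaks-per-second and FFT settings are assumptions.
import numpy as np
from scipy.signal import spectrogram

def sparse_sketch(x, fs, peaks_per_second=10):
    f, t, s = spectrogram(x, fs=fs, nperseg=512, noverlap=256)
    n_keep = max(1, int(peaks_per_second * x.size / fs))
    thresh = np.sort(s.ravel())[-n_keep]        # n_keep-th largest bin
    return f, t, np.where(s >= thresh, s, 0.0)

fs = 16000
x = np.random.randn(fs * 2)                     # 2 s placeholder signal
f, t, sketch = sparse_sketch(x, fs)
print(f"non-zero features kept: {np.count_nonzero(sketch)}")
```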

  3. CT findings of the osteoma of the external auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ha Young; Song, Chang Joon; Yoon, Chung Dae; Park, Mi Hyun; Shin, Byung Seok [Chungnam National University, School of Medicine, Daejeon (Korea, Republic of)

    2006-07-15

    We wanted to report the CT imaging findings of osteoma of the external auditory canal. Temporal bone CT scanning was performed on eight patients (4 males and 4 females, aged between 8 and 41 years) with pathologically proven osteoma of the external auditory canal after operation, and the CT findings were retrospectively reviewed. We analyzed not only the size, shape, distribution and location of the osteomas, but also the relationship between the lesion and the tympanosquamous or tympanomastoid suture line, as well as the changes seen on the CT images for the patients who were able to undergo follow-up. All of the osteomas of the external auditory canal were unilateral, solitary, pedunculated bony masses. In five patients the osteoma occurred on the left side, and in the other three patients on the right side. The average size of the osteomas was 0.6 cm, with the smallest being 0.5 cm and the largest 1.2 cm. Each lesion was located at the osteochondral junction in the terminal part of the osseous external ear canal. The stalk of the osteoma was found to arise from the anteroinferior wall in five cases (63%), from the anterosuperior wall (the tympanosquamous suture line) in two cases (25%), and from the anterior wall in one case. The osteoma was of compact form in five cases and of cancellous form in three cases. One case of the cancellous form changed into a compact form 35 months later due to advanced ossification. Osteoma of the external auditory canal developed in a unilateral and solitary fashion. The characteristic imaging finding is that it is attached to the external auditory canal by a stalk. Contrary to common knowledge about its site of occurrence, the osteomas mostly arose from the tympanic wall, regardless of the tympanosquamous or tympanomastoid suture line.

  4. Auditory perception of self-similarity in water sounds.

    Directory of Open Access Journals (Sweden)

    Maria Neimark Geffen

    2011-05-01

    Full Text Available Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale invariance across the overall spectro-temporal structure of a sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale invariance in perception by using an artificial sound which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for it to be perceived as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and to identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation, which may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measuring the response properties of neurons in the auditory pathway, allowing quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.
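
    A simple, hedged way to probe scale-invariant spectral statistics of a recording is to fit the power spectrum with a power law in log-log coordinates; the sketch below does this for a synthetic 1/f^2-like signal. The fit range and the use of a Welch spectrum are illustrative assumptions and do not reproduce the Q/r chirp analysis of the study.

```python
# Illustrative check for power-law (scale-invariant) spectral statistics:
# fit a straight line to the power spectrum in log-log coordinates. A
# placeholder random-walk signal (approximately 1/f^2) is used here; the
# fit range and Welch settings are arbitrary illustrative choices.
import numpy as np
from scipy.signal import welch

fs = 16000
x = np.cumsum(np.random.randn(fs * 5))          # placeholder 1/f^2-like signal
f, pxx = welch(x, fs=fs, nperseg=4096)
keep = (f > 10) & (f < 4000)                    # avoid DC and the band edge
slope, _ = np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)
print(f"estimated spectral exponent: {slope:.2f}")  # near -2 for this signal
```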

  5. The Effect of Neonatal Hyperbilirubinemia on the Auditory System

    Directory of Open Access Journals (Sweden)

    Dr. Zahra Jafari

    2008-12-01

    Full Text Available Background and Aim: Hyperbilirubinemia during the neonatal period is known to be an important risk factor for neonatal auditory impairment, and may result in permanent brain damage if no proper therapeutic intervention is undertaken. In the present study, electroacoustic and electrophysiologic tests were used to evaluate the function of the auditory system in a group of children with severe neonatal jaundice. Materials and Methods: Forty-five children with a mean age of 16.1 ± 14.81 months and bilirubin levels of 17 mg/dl and higher were studied, and transient evoked otoacoustic emission, acoustic reflex, auditory brainstem response and auditory steady-state response tests were performed. Results: The mean bilirubin level was 29.37 ± 8.95 mg/dl. It was lower than 20 mg/dl in 22.2%, between 20-30 mg/dl in 24.4%, and more than 30 mg/dl in 48.0% of the children. No therapeutic intervention was reported in 26.7% of the children, phototherapy in 44.4%, and blood exchange transfusion in 28.9%. A history of hypoxia was found in 48.9% and of preterm birth in 26.6%. TEOAEs were recordable in 71.1% of cases. Normal results on the acoustic reflex, ABR and ASSR tests were found in only 11.1% of cases. Clinical signs of auditory neuropathy were revealed in 57.7% of the children. Conclusion: Conducting auditory tests that are sensitive to the site of injury in hyperbilirubinemia is necessary to establish the functional effect and severity of the disorder. Because auditory neuropathy/dys-synchrony is common in neonates with hyperbilirubinemia, OAEs and ABR are the minimum essential tests to identify this disorder.

  6. Simultaneously-evoked auditory potentials (SEAP): A new method for concurrent measurement of cortical and subcortical auditory-evoked activity.

    Science.gov (United States)

    Slugocki, Christopher; Bosnyak, Daniel; Trainor, Laurel J

    2017-03-01

    Recent electrophysiological work has evinced a capacity for plasticity in subcortical auditory nuclei in human listeners. Similar plastic effects have been measured in cortically-generated auditory potentials but it is unclear how the two interact. Here we present Simultaneously-Evoked Auditory Potentials (SEAP), a method designed to concurrently elicit electrophysiological brain potentials from inferior colliculus, thalamus, and primary and secondary auditory cortices. Twenty-six normal-hearing adult subjects (mean 19.26 years, 9 male) were exposed to 2400 monaural (right-ear) presentations of a specially-designed stimulus which consisted of a pure-tone carrier (500 or 600 Hz) that had been amplitude-modulated at the sum of 37 and 81 Hz (depth 100%). Presentation followed an oddball paradigm wherein the pure-tone carrier was set to 500 Hz for 85% of presentations and pseudo-randomly changed to 600 Hz for the remaining 15% of presentations. Single-channel electroencephalographic data were recorded from each subject using a vertical montage referenced to the right earlobe. We show that SEAP elicits a 500 Hz frequency-following response (FFR; generated in inferior colliculus), 80 (subcortical) and 40 (primary auditory cortex) Hz auditory steady-state responses (ASSRs), mismatch negativity (MMN) and P3a (when there is an occasional change in carrier frequency; secondary auditory cortex) in addition to the obligatory N1-P2 complex (secondary auditory cortex). Analyses showed that subcortical and cortical processes are linked as (i) the latency of the FFR predicts the phase delay of the 40 Hz steady-state response, (ii) the phase delays of the 40 and 80 Hz steady-state responses are correlated, and (iii) the fidelity of the FFR predicts the latency of the N1 component. The SEAP method offers a new approach for measuring the dynamic encoding of acoustic features at multiple levels of the auditory pathway. As such, SEAP is a promising tool with which to study how
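
    For illustration, the sketch below builds a SEAP-like stimulus as described: a pure-tone carrier (500 Hz standard, 600 Hz for roughly 15% of trials) whose envelope is the sum of 37 Hz and 81 Hz modulators at approximately full depth. Trial duration, ramps, and sequence length are assumptions; this is not the authors' stimulus-generation code.

```python
# Sketch of a SEAP-like stimulus: a pure-tone carrier (500 Hz standard,
# 600 Hz deviant on ~15% of trials) amplitude-modulated by the sum of a
# 37 Hz and an 81 Hz sinusoid at approximately 100% depth. Durations,
# ramps, and calibration are assumptions, not the authors' exact code.
import numpy as np

fs = 44100
dur = 1.0                                        # seconds per trial, assumed
t = np.arange(int(fs * dur)) / fs

def seap_tone(carrier_hz):
    mod = 0.5 * (1 + 0.5 * (np.sin(2 * np.pi * 37 * t)
                            + np.sin(2 * np.pi * 81 * t)))
    return mod * np.sin(2 * np.pi * carrier_hz * t)

rng = np.random.default_rng(0)
carriers = rng.choice([500, 600], size=20, p=[0.85, 0.15])   # oddball sequence
trials = np.stack([seap_tone(c) for c in carriers])
print(trials.shape)                              # (n_trials, n_samples)
```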

  7. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking

    Directory of Open Access Journals (Sweden)

    An-Chieh Chang

    2016-09-01

    Full Text Available Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.

  8. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking.

    Science.gov (United States)

    Chang, An-Chieh; Lutfi, Robert; Lee, Jungmee; Heo, Inseok

    2016-09-18

    Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.
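
    The analysis hinges on DKL, the information divergence between the parameter distributions of the tone sequences. For Gaussian-distributed parameters this divergence has a closed form, sketched below; the Gaussian assumption and the example numbers are illustrative, not taken from the article.

```python
# Closed-form KL divergence between two univariate Gaussians, standing in
# for the information divergence D_KL between the parameter distributions
# of two tone sequences. The Gaussian form and the example numbers are
# assumptions made purely for illustration.
import math

def kl_gauss(mu_p, sd_p, mu_q, sd_q):
    return (math.log(sd_q / sd_p)
            + (sd_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sd_q ** 2)
            - 0.5)

# e.g. target-sequence level 60 dB (SD 2) vs. masker-sequence level 62 dB (SD 2)
print(f"D_KL = {kl_gauss(60, 2, 62, 2):.3f} nats")
```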

  9. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals.

    Science.gov (United States)

    Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W

    2014-02-01

    The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development.

  10. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    Science.gov (United States)

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  11. Role of somatostatin receptor-2 in gentamicin-induced auditory hair cell loss in the Mammalian inner ear.

    Directory of Open Access Journals (Sweden)

    Yves Brand

    Full Text Available Hair cells and spiral ganglion neurons of the mammalian auditory system do not regenerate, and their loss leads to irreversible hearing loss. Aminoglycosides induce auditory hair cell death in vitro, and evidence suggests that phosphatidylinositol-3-kinase/Akt signaling opposes gentamicin toxicity via its downstream target, the protein kinase Akt. We previously demonstrated that somatostatin (a peptide with hormone/neurotransmitter properties) can protect hair cells from gentamicin-induced hair cell death in vitro, and that somatostatin receptors are expressed in the mammalian inner ear. However, it remains unknown how this protective effect is mediated. In the present study, we show a highly significant protective effect of octreotide (a drug that mimics somatostatin and is more potent) on gentamicin-induced hair cell death, and increased Akt phosphorylation in octreotide-treated organ of Corti explants in vitro. Moreover, we demonstrate that somatostatin receptor-1 knockout mice overexpress somatostatin receptor-2 in the organ of Corti, and are less susceptible to gentamicin-induced hair cell loss than wild-type or somatostatin receptor-1/receptor-2 double-knockout mice. Finally, we show that octreotide affects auditory hair cells, enhances spiral ganglion neurite number, and decreases spiral ganglion neurite length.

  12. Subcortical neural coding mechanisms for auditory temporal processing.

    Science.gov (United States)

    Frisina, R D

    2001-08-01

    Biologically relevant sounds such as speech, animal vocalizations and music have distinguishing temporal features that are utilized for effective auditory perception. Common temporal features include sound envelope fluctuations, often modeled in the laboratory by amplitude modulation (AM), and starts and stops in ongoing sounds, which are frequently approximated by hearing researchers as gaps between two sounds or are investigated in forward masking experiments. The auditory system has evolved many neural processing mechanisms for encoding important temporal features of sound. Due to rapid progress made in the field of auditory neuroscience in the past three decades, it is not possible to review all progress in this field in a single article. The goal of the present report is to focus on single-unit mechanisms in the mammalian brainstem auditory system for encoding AM and gaps as illustrative examples of how the system encodes key temporal features of sound. This report, following a systems analysis approach, starts with findings in the auditory nerve and proceeds centrally through the cochlear nucleus, superior olivary complex and inferior colliculus. Some general principles can be seen when reviewing this entire field. For example, as one ascends the central auditory system, a neural encoding shift occurs. An emphasis on synchronous responses for temporal coding exists in the auditory periphery, and more reliance on rate coding occurs as one moves centrally. In addition, for AM, modulation transfer functions become more bandpass as the sound level of the signal is raised, but become more lowpass in shape as background noise is added. In many cases, AM coding can actually increase in the presence of background noise. For gap processing or forward masking, coding for gaps changes from a decrease in spike firing rate for neurons of the peripheral auditory system that have sustained response patterns, to an increase in firing rate for more central neurons with

  13. Rapid cortical dynamics associated with auditory spatial attention gradients.

    Science.gov (United States)

    Mock, Jeffrey R; Seay, Michael J; Charney, Danielle R; Holmes, John L; Golob, Edward J

    2015-01-01

    Behavioral and EEG studies suggest spatial attention is allocated as a gradient in which processing benefits decrease away from an attended location. Yet the spatiotemporal dynamics of cortical processes that contribute to attentional gradients are unclear. We measured EEG while participants (n = 35) performed an auditory spatial attention task that required a button press to sounds at one target location on either the left or right. Distractor sounds were randomly presented at four non-target locations evenly spaced up to 180° from the target location. Attentional gradients were quantified by regressing ERP amplitudes elicited by distractors against their spatial location relative to the target. Independent component analysis was applied to each subject's scalp channel data, allowing isolation of distinct cortical sources. Results from scalp ERPs showed a tri-phasic response with gradient slope peaks at ~300 ms (frontal, positive), ~430 ms (posterior, negative), and a plateau starting at ~550 ms (frontal, positive). Corresponding to the first slope peak, a positive gradient was found within a central component when attending to both target locations and for two lateral frontal components when contralateral to the target location. Similarly, a central posterior component had a negative gradient that corresponded to the second slope peak regardless of target location. A right posterior component had both an ipsilateral followed by a contralateral gradient. Lateral posterior clusters also had decreases in α and β oscillatory power with a negative slope and contralateral tuning. Only the left posterior component (120-200 ms) corresponded to absolute sound location. The findings indicate a rapid, temporally-organized sequence of gradients thought to reflect interplay between frontal and parietal regions. We conclude these gradients support a target-based saliency map exhibiting aspects of both right-hemisphere dominance and opponent process models.
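
    The gradient measure itself is simply the slope of distractor-evoked ERP amplitude regressed on distance from the attended location. A minimal sketch of that quantification, with made-up amplitudes, is shown below.

```python
# Minimal sketch of the gradient quantification: regress distractor-evoked
# ERP amplitude on angular distance from the attended location; the slope
# is the attentional gradient. Amplitudes are made-up placeholders.
import numpy as np

distance_deg = np.array([45, 90, 135, 180])      # distractor offsets (deg)
amplitude_uv = np.array([2.1, 1.6, 1.2, 0.9])    # placeholder ERP amplitudes
slope, intercept = np.polyfit(distance_deg, amplitude_uv, 1)
print(f"gradient slope: {slope * 100:.2f} microvolts per 100 deg")
```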

  14. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    Science.gov (United States)

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activity. Neurofeedback (NFB) using an auditory stimulus as a reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases in the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups showed signs consistent with EEG maturation, but only the Auditory group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved cognitive abilities more than the visual reinforcer.

  15. The use of visual stimuli during auditory assessment.

    Science.gov (United States)

    Pearlman, R C; Cunningham, D R; Williamson, D G; Amerman, J D

    1975-01-01

    Two groups of male subjects over 50 years of age were given audiometric tasks with and without visual stimulation to determine whether visual stimuli changed auditory perception. The first group consisted of 10 subjects with normal auditory acuity; the second, of 10 with sensorineural hearing losses greater than 30 decibels. The rate of presentation of the visual stimuli, consisting of photographic slides of various subjects, was determined in Experiment I of the study. The subjects took an auditory speech discrimination test while viewing the slides at their own rate; they were advised to change the slides at a speed which they felt facilitated attention while performing the auditory task. The mean rate of slide-changing behavior was used as the "optimum" visual stimulation rate in Experiment II, which was designed to explore the interaction of the bisensory presentation of stimuli. Bekesy tracings and Rush Hughes recordings were administered without and with visual stimuli, the latter presented at the mean rate of slide changes found in Experiment I. Analysis of the data indicated that (1) no statistically significant difference exists between visual and nonvisual conditions during speech discrimination and Bekesy testing; and (2) subjects did not believe that visual stimuli as presented in this study helped them to listen more effectively. The experimenter concluded that the various auditory stimuli encountered in the auditory test situation may actually be a deterrent to boredom because of the variety of tasks required in a testing situation.

  16. Coding of melodic gestalt in human auditory cortex.

    Science.gov (United States)

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them-the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independent of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.

  17. Head Tracking of Auditory, Visual, and Audio-Visual Targets.

    Science.gov (United States)

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2015-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored, even though it is a common behavioral response. Previous studies of auditory motion perception have focused on conditions in which the subjects were passive. The current study examined head-tracking behavior toward an auditory target moving along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high-fidelity virtual auditory space with high-speed visual presentation, we compared tracking responses to auditory targets against visual-only and audio-visual "bisensory" stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.

  18. Brainstem auditory evoked potentials in children with lead exposure

    Directory of Open Access Journals (Sweden)

    Katia de Freitas Alvarenga

    2015-02-01

    Full Text Available Introduction: Earlier studies have demonstrated an auditory effect of lead exposure in children, but the effects of low chronic exposures need to be further elucidated. Objective: To investigate the effect of low chronic exposure on the auditory system of children with a history of low blood lead levels, using an auditory electrophysiological test. Methods: Contemporary cross-sectional cohort. Study participants underwent tympanometry, pure tone and speech audiometry, transient evoked otoacoustic emissions, and brainstem auditory evoked potentials, with blood lead monitoring over a period of 35.5 months. The study included 130 children, with ages ranging from 18 months to 14 years, 5 months (mean age 6 years, 8 months ± 3 years, 2 months). Results: The mean time-integrated cumulative blood lead index was 12 µg/dL (SD ± 5.7, range 2.4-33). All participants had hearing thresholds equal to or below 20 dBHL and normal amplitudes of transient evoked otoacoustic emissions. No association was found between the absolute latencies of waves I, III, and V, the interpeak latencies I-III, III-V, and I-V, and the cumulative lead values. Conclusion: No evidence of toxic effects from chronic low lead exposure was observed on the auditory function of children living in a lead-contaminated area.

  19. Prevalence of auditory changes in newborns in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Guimarães, Valeriana de Castro

    2012-01-01

    Full Text Available Introduction: Early diagnosis of and intervention in deafness are of fundamental importance for child development, and hearing loss is more prevalent than other disorders found at birth. Objective: To estimate the prevalence of auditory alterations in newborns in a teaching hospital. Method: Prospective cross-sectional study that evaluated 226 newborns delivered in a public hospital between May 2008 and May 2009. Results: Of the 226 screened, 46 (20.4%) showed absent emissions and were referred for a second emission test. Of the 26 (56.5%) children who returned for the retest, 8 (30.8%) still showed absent emissions and were referred to the otolaryngologist. Five (55.5%) attended and were examined by the physician. Of these, 3 (75.0%) presented normal otoscopy and were referred for evaluation of the brainstem auditory evoked potential (PEATE). Of the total number of children studied, 198 (87.6%) had emissions present in one of the tests, and 2 (0.9%) received a diagnosis of deafness. Conclusion: The prevalence of auditory alterations in the studied population was 0.9%. The study offers relevant epidemiological data and presents the first report on the subject, supplying preliminary results for the future implementation and development of a neonatal hearing screening program.

  20. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia eConde

    2012-05-01

    Full Text Available An efficient and fast auditory-motor network is a basic resource for trained musicians, owing to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration between both modalities plays an essential role in the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, due to their extensive auditory-motor practice routine during musical training, would show superior performance and learning when receiving auditory feedback during the SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback is capable of reinforcing SRTT performance but not learning, a finding that might provide further insight into auditory-motor integration in musicians at the behavioral level.

  1. The auditory attention status in Iranian bilingual and monolingual people

    Directory of Open Access Journals (Sweden)

    Nayiere Mansoori

    2013-05-01

    Full Text Available Background and Aim: Bilingualism, one of the much-discussed issues in psychology and linguistics, can influence speech processing. Among the several tests for assessing auditory processing, the dichotic digit test has been designed to study divided auditory attention. Our study was performed to compare auditory attention between Iranian bilingual and monolingual young adults. Methods: This cross-sectional study was conducted on 60 students, including 30 Turkish-Persian bilinguals and 30 Persian monolinguals, aged between 18 and 30 years and of both genders. The dichotic digit test was performed on young individuals with normal peripheral hearing and right-hand preference. Results: No significant difference was found between the dichotic digit test results of monolinguals and bilinguals (p=0.195), nor between the right- and left-ear results within the monolingual (p=0.460) and bilingual (p=0.054) groups. The mean score of women was significantly higher than that of men (p=0.031). Conclusion: There was no significant difference between bilinguals and monolinguals in divided auditory attention, and it seems that acquisition of a second language at an early age has no noticeable effect on this type of auditory attention.

  2. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    Science.gov (United States)

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
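
    As a sketch of the GAM idea (smooth terms for each stimulus dimension plus an interaction term), the example below fits a generalized additive model to synthetic two-dimensional data using the pyGAM package; the package choice, the synthetic response, and the use of a Gaussian (rather than Poisson spike-count) model are assumptions for illustration.

```python
# Sketch of the GAM idea with the pyGAM package (an assumption; the study's
# actual implementation is not specified in this record): smooth terms for
# two stimulus dimensions plus a tensor-product interaction, fit to a
# synthetic continuous response. Spike counts would call for PoissonGAM.
import numpy as np
from pygam import LinearGAM, s, te

rng = np.random.default_rng(1)
n = 500
X = rng.uniform(0, 1, size=(n, 2))       # two normalized stimulus dimensions
y = (np.sin(3 * X[:, 0]) + X[:, 1] ** 2
     + 1.5 * X[:, 0] * X[:, 1]           # built-in interaction between dims
     + 0.3 * rng.standard_normal(n))

gam = LinearGAM(s(0) + s(1) + te(0, 1)).fit(X, y)
gam.summary()                            # term-wise significance of the fit
```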

  3. Modulating human auditory processing by transcranial electrical stimulation

    Directory of Open Access Journals (Sweden)

    Kai eHeimrath

    2016-03-01

    Full Text Available Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS) has emerged. However, while a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focussing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders.

  4. Modeling of Auditory Neuron Response Thresholds with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Frederic Venail

    2015-01-01

    Full Text Available The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position into the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (nucleus device). The electrical response, measured using auto-NRT (neural responses telemetry) algorithm, has been analyzed using multivariate regression with cubic splines in order to take into account the variations of insertion depth of electrodes amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = −0.11 ± 0.02, P<0.01), the scalar placement of the electrodes (β = −8.50 ± 1.97, P<0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). Distribution of NRT residues according to CNF could provide a proxy of auditory neurons functioning in implanted cochleas.

  5. Modeling of Auditory Neuron Response Thresholds with Cochlear Implants.

    Science.gov (United States)

    Venail, Frederic; Mura, Thibault; Akkari, Mohamed; Mathiolon, Caroline; Menjot de Champfleur, Sophie; Piron, Jean Pierre; Sicard, Marielle; Sterkers-Artieres, Françoise; Mondain, Michel; Uziel, Alain

    2015-01-01

    The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position into the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (nucleus device). The electrical response, measured using auto-NRT (neural responses telemetry) algorithm, has been analyzed using multivariate regression with cubic splines in order to take into account the variations of insertion depth of electrodes amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = -0.11 ± 0.02, P < 0.01), the scalar placement of the electrodes (β = -8.50 ± 1.97, P < 0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). Distribution of NRT residues according to CNF could provide a proxy of auditory neurons functioning in implanted cochleas.
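
    A minimal sketch of a multivariate spline regression of the kind described above, assuming a per-electrode table with hypothetical column names; it uses statsmodels with patsy's natural cubic spline basis cr(), not the authors' exact model or data.

```python
# Hypothetical example: NRT threshold ~ spline(insertion depth as CNF) + impedance + scala.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "nrt": rng.normal(190, 10, n),         # NRT threshold (arbitrary units, synthetic)
    "imp_sq": rng.uniform(20, 120, n),     # electrode squared impedance (synthetic)
    "scala_tymp": rng.integers(0, 2, n),   # 1 = scala tympani placement (synthetic)
    "cnf_khz": rng.uniform(0.3, 8, n),     # characteristic frequency at the electrode
})

# Cubic-spline term for insertion depth (expressed as characteristic frequency),
# plus linear terms for impedance and scalar placement, as in the abstract.
model = smf.ols("nrt ~ cr(np.log2(cnf_khz), df=4) + imp_sq + scala_tymp", data=df).fit()
print(model.summary().tables[1])

# Residuals (observed minus predicted threshold) against CNF would then serve as
# the proposed proxy of local auditory-neuron function along the cochlea.
df["nrt_residual"] = model.resid
```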

  6. Role of the auditory system in speech production.

    Science.gov (United States)

    Guenther, Frank H; Hickok, Gregory

    2015-01-01

    This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This in turn is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.

  7. Head Tracking of Auditory, Visual and Audio-Visual Targets

    Directory of Open Access Journals (Sweden)

    Johahn eLeung

    2016-01-01

    Full Text Available The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20°/s to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual bisensory stimuli. Three metrics were measured – onset, RMS and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.
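
    A minimal sketch of how the three tracking metrics named above might be computed from sampled target and head azimuth traces; thresholds, sampling, and variable names are illustrative, not the study's pipeline.

```python
import numpy as np

def tracking_metrics(t, target_az, head_az, onset_threshold_deg=2.0):
    """Onset latency, RMS error, and gain of a head-tracking response (hypothetical)."""
    # Onset: first time the head has moved beyond a small threshold from its start position
    moved = np.abs(head_az - head_az[0]) > onset_threshold_deg
    onset = t[np.argmax(moved)] if moved.any() else np.nan

    # RMS error between head and target trajectories
    rms_error = np.sqrt(np.mean((head_az - target_az) ** 2))

    # Gain: least-squares ratio of head velocity to target velocity
    head_vel = np.gradient(head_az, t)
    target_vel = np.gradient(target_az, t)
    gain = np.dot(head_vel, target_vel) / np.dot(target_vel, target_vel)
    return onset, rms_error, gain

# Example: a 60 deg/s target and a lagging, slightly under-gained head response
t = np.linspace(0, 1.5, 300)
target = np.clip(60 * t - 50, -50, 50)
head = 0.9 * np.clip(60 * (t - 0.15) - 50, -50, 50)
print(tracking_metrics(t, target, head))
```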

  8. Training-induced plasticity of auditory localization in adult mammals.

    Directory of Open Access Journals (Sweden)

    Oliver Kacelnik

    2006-04-01

    Full Text Available Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired.

  9. Speech identification and cortical potentials in individuals with auditory neuropathy

    Directory of Open Access Journals (Sweden)

    Vanaja CS

    2008-03-01

    Full Text Available Abstract Background The present study investigated the relationship between speech identification scores in quiet and parameters of cortical potentials (latency of P1, N1, and P2; and amplitude of N1/P2) in individuals with auditory neuropathy. Methods Ten individuals with auditory neuropathy (five males and five females) and ten individuals with normal hearing in the age range of 12 to 39 yr participated in the study. Speech identification ability was assessed for bi-syllabic words and cortical potentials were recorded for click stimuli. Results Results revealed that in individuals with auditory neuropathy, speech identification scores were significantly poorer than those of individuals with normal hearing. Individuals with auditory neuropathy were further classified into two groups, Good Performers and Poor Performers, based on their speech identification scores. It was observed that the mean amplitude of N1/P2 of Poor Performers was significantly lower than that of Good Performers and those with normal hearing. There was no significant effect of group on the latency of the peaks. Speech identification scores showed a good correlation with the amplitude of cortical potentials (N1/P2 complex) but did not show a significant correlation with the latency of cortical potentials. Conclusion Results of the present study suggest that measuring the cortical potentials may offer a means for predicting perceptual skills in individuals with auditory neuropathy.

  10. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    Science.gov (United States)

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  11. Effect of background music on auditory-verbal memory performance

    Directory of Open Access Journals (Sweden)

    Sona Matloubi

    2014-12-01

    Full Text Available Background and Aim: Music exists in all cultures; many scientists are seeking to understand how music affects cognitive development such as comprehension, memory, and reading skills. More recently, a considerable number of neuroscience studies on music have been developed. This study aimed to investigate the effects of null and positive background music, in comparison with silence, on auditory-verbal memory performance. Methods: Forty young adults (male and female) with normal hearing, aged between 18 and 26, participated in this comparative-analysis study. An auditory and speech evaluation was conducted in order to investigate the effects of background music on working memory. Subsequently, the Rey auditory-verbal learning test was performed for three conditions: silence, positive, and null music. Results: The mean score of the Rey auditory-verbal learning test in the silence condition was higher than in the positive music condition (p=0.003) and the null music condition (p=0.01). The test results did not reveal any gender differences. Conclusion: It seems that the presence of competing music (positive and null music) and the orientation of auditory attention have negative effects on the performance of verbal working memory, possibly owing to the interference of music with verbal information processing in the brain.

  12. Asymmetric transfer of auditory perceptual learning

    Directory of Open Access Journals (Sweden)

    Sygal eAmitay

    2012-11-01

    Full Text Available Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioural thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers upon which computational modeling of phase locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioral FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window, which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.

  13. Auditory hair cell innervational patterns in lizards.

    Science.gov (United States)

    Miller, M R; Beck, J

    1988-05-22

    The pattern of afferent and efferent innervation of two to four unidirectional (UHC) and two to nine bidirectional (BHC) hair cells of five different types of lizard auditory papillae was determined by reconstruction of serial TEM sections. The species studied were Crotaphytus wislizeni (iguanid), Podarcis (Lacerta) sicula and P. muralis (lacertids), Ameiva ameiva (teiid), Coleonyx variegatus (gekkonid), and Mabuya multifasciata (scincid). The main objective was to determine in which species and in which hair cell types the nerve fibers were innervating only one (exclusive innervation), or two or more hair cells (nonexclusive innervation); how many nerve fibers were supplying each hair cell; how many synapses were made by the innervating fibers; and the total number of synapses on each hair cell. In the species studied, efferent innervation was limited to the UHC, and except for the iguanid, C. wislizeni, it was nonexclusive, each fiber supplying two or more hair cells. Afferent innervation varied both with the species and the hair cell types. In Crotaphytus, both the UHC and the BHC were exclusively innervated. In Podarcis and Ameiva, the UHC were innervated exclusively by some fibers but nonexclusively by others (mixed pattern). In Coleonyx, the UHC were exclusively innervated but the BHC were nonexclusively innervated. In Mabuya, both the UHC and BHC were nonexclusively innervated. The number of afferent nerve fibers and the number of afferent synapses were always larger in the UHC than in the BHC. In Ameiva, Podarcis, and Mabuya, groups of bidirectionally oriented hair cells occur in regions of cytologically distinct UHC, and in Ameiva, unidirectionally oriented hair cells occur in cytologically distinct BHC regions.

  14. State-dependent changes in auditory sensory gating in different cortical areas in rats.

    Directory of Open Access Journals (Sweden)

    Renli Qi

    Full Text Available Sensory gating is a process in which the brain's response to a repetitive stimulus is attenuated; it is thought to contribute to information processing by enabling organisms to filter extraneous sensory inputs from the environment. To date, sensory gating has typically been used to determine whether brain function is impaired, such as in individuals with schizophrenia or addiction. In healthy subjects, sensory gating is sensitive to a subject's behavioral state, such as acute stress and attention. The cortical response to sensory stimulation significantly decreases during sleep; however, information processing continues throughout sleep, and an auditory evoked potential (AEP) can be elicited by sound. It is not known whether sensory gating changes during sleep. Sleep is a non-uniform process across the whole brain, with regional differences in neural activities. Thus, another question arises concerning whether changes in sensory gating from waking to sleep are uniform across different brain areas. To address these questions, we used the sound stimuli of a conditioning–testing paradigm to examine sensory gating during waking, rapid eye movement (REM) sleep, and non-REM (NREM) sleep in different cortical areas in rats. We demonstrated the following: 1. Auditory sensory gating was affected by vigilance state in the frontal and parietal areas but not in the occipital areas. 2. Auditory sensory gating decreased from waking to NREM sleep, but not to REM sleep, in the frontal and parietal areas. 3. The decreased sensory gating in the frontal and parietal areas during NREM sleep was the result of a significant increase in the amplitude of the response to the test sound.
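
    A minimal sketch of the test/conditioning (T/C) amplitude ratio typically derived from a conditioning–testing paradigm like the one above, assuming epoched AEPs for the first (S1) and second (S2) clicks of each pair; window bounds and data are illustrative.

```python
import numpy as np

def gating_ratio(s1_epochs, s2_epochs, times, window=(0.02, 0.08)):
    """T/C ratio: peak-to-peak AEP amplitude to the test stimulus divided by that
    to the conditioning stimulus (lower ratio = stronger gating)."""
    sel = (times >= window[0]) & (times <= window[1])

    def peak_to_peak(epochs):
        avg = epochs.mean(axis=0)       # average across trials
        seg = avg[sel]
        return seg.max() - seg.min()

    return peak_to_peak(s2_epochs) / peak_to_peak(s1_epochs)

# Synthetic example: the S2 response is attenuated to 40% of the S1 response
rng = np.random.default_rng(2)
times = np.linspace(0, 0.2, 400)
erp = np.exp(-((times - 0.04) / 0.01) ** 2)          # stylized evoked component
s1 = erp + rng.normal(0, 0.2, (100, times.size))
s2 = 0.4 * erp + rng.normal(0, 0.2, (100, times.size))
print(round(gating_ratio(s1, s2, times), 2))
```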

  15. Characteristics of echolocating bats' auditory stereocilia length, compared with other mammals

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The stereocilia of the Organ of Corti in 4 different echolocating bats, Myotis adversus, Murina leucogaster, Nyctalus plancyi (Nyctalus velutinus), and Rhinolophus ferrumequinum were observed by using scanning electron microscopy (SEM). Stereocilia lengths were estimated for comparison with those of non-echolocating mammals. The specialized lengths of outer hair cells (OHC) stereocilia in echolocating bats were shorter than those of non-echolocating mammals. The specialized lengths of inner hair cells (IHC) stereocilia were longer than those of outer hair cells stereocilia in the Organ of Corti of echolocating bats. These characteristics of the auditory stereocilia length of echolocating bats represent the fine architecture of the electromotility process, helping to adapt to high frequency sound and echolocation.

  16. Characteristics of echolocating bats' auditory stereocilia length, compared with other mammals

    Institute of Scientific and Technical Information of China (English)

    YAO Qian; ZENG JinYao; ZHENG YongMei; Julia LATHAM; LIANG Bing; JIANG Lei; ZHANG ShuYi

    2007-01-01

    The stereocilia of the Organ of Corti in 4 different echolocating bats, Myotis adversus, Murina leucogaster, Nyctalus plancyi (Nyctalus velutinus), and Rhinolophus ferrumequinum were observed by using scanning electron microscopy (SEM). Stereocilia lengths were estimated for comparison with those of non-echolocating mammals. The specialized lengths of outer hair cells (OHC) stereocilia in echolocating bats were shorter than those of non-echolocating mammals. The specialized lengths of inner hair cells (IHC) stereocilia were longer than those of outer hair cells stereocilia in the Organ of Corti of echolocating bats. These characteristics of the auditory stereocilia length of echolocating bats represent the fine architecture of the electromotility process, helping to adapt to high frequency sound and echolocation.

  17. Preliminary Studies on Differential Expression of Auditory Functional Genes in the Brain After Repeated Blast Exposures

    Science.gov (United States)

    2012-01-01

    Army Medical Research and Materiel Command, Fort Detrick, MD Abstract—The mechanisms of central auditory processing involved in auditory/vestibular ... transducers in auditory neurons [22–23,45–48]. The frontal cortex and midbrain of blast-exposed mice showed significant increase in the expression of ... auditory neurons [26]. Other types of molecules involved in calcium regulation, such as calreticulin and calmodulin-dependent protein kinase expression

  18. Tuning Shifts of the Auditory System By Corticocortical and Corticofugal Projections and Conditioning

    OpenAIRE

    Suga, Nobuo

    2011-01-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and non-lemniscal auditory nuclei are different from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending...

  19. Auditory Memory deficit in Elderly People with Hearing Loss

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2013-06-01

    Full Text Available Introduction: Hearing loss is one of the most common problems in elderly people. The functional consequences of hearing loss are varied. Because hearing loss is such a common impairment in elderly people, the importance of its possible effects on auditory memory is undeniable. This study focuses on the effects of hearing loss on auditory memory.   Materials and Methods: The Dichotic Auditory Memory Test (DVMT) was performed on 47 elderly people, aged 60 to 80 years, of both genders, who were divided into two groups: the first group consisted of 24 elderly people with normal hearing, and the second consisted of 23 elderly people with bilateral, symmetrical, mild to moderate high-frequency sensorineural hearing loss due to aging.   Results: A significant difference was observed in DVMT between elderly people with normal hearing and those with hearing loss (P

  20. Multisensory Interactions between Auditory and Haptic Object Recognition

    DEFF Research Database (Denmark)

    Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte;

    2013-01-01

    Object manipulation produces characteristic sounds and causes specific haptic sensations that facilitate the recognition of the manipulated object. To identify the neural correlates of audio-haptic binding of object features, healthy volunteers underwent functional magnetic resonance imaging while they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG...

  1. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The computational model is based on psychological and physiological knowledge and digital signal processing methods. For each stage of the hearing perception system, there is a corresponding computational model to simulate its function. Based on this model, speech features are extracted. At each stage, features at different levels are extracted. A further processing step for the primary auditory spectrum, based on lateral inhibition, is proposed to extract more robust speech features. All these features can be regarded as internal representations of the speech stimulus in the hearing system. Robust speech recognition experiments were conducted to test the robustness of the features. Results show that the representations based on the proposed computational auditory model are robust representations of speech signals.
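
    A minimal sketch in the spirit of the model described above, assuming a simple bandpass filterbank standing in for the cochlear stage and lateral inhibition modeled as subtraction of neighboring channel envelopes; this follows the general idea in the abstract, not its exact formulation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def auditory_features(x, fs, n_channels=20, fmin=100.0, fmax=6000.0):
    """Primary 'auditory spectrum' (channel envelopes) sharpened by lateral inhibition."""
    centers = np.geomspace(fmin, fmax, n_channels)
    envelopes = []
    for fc in centers:
        lo, hi = fc / 2 ** 0.25, fc * 2 ** 0.25          # half-octave band around fc
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        envelopes.append(np.abs(hilbert(band)))
    E = np.array(envelopes)

    # Lateral inhibition: each channel suppressed by its spectral neighbors,
    # sharpening peaks (edge channels wrap, acceptable for a sketch).
    inhibited = E - 0.5 * (np.roll(E, 1, axis=0) + np.roll(E, -1, axis=0))
    return centers, np.maximum(inhibited, 0.0)

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
signal = np.sin(2 * np.pi * 500 * t) + 0.3 * np.random.default_rng(3).normal(size=t.size)
centers, features = auditory_features(signal, fs)
print(features.shape)
```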

  2. Temporal resolution in the hearing system and auditory evoked potentials

    DEFF Research Database (Denmark)

    Miller, Lee; Beedholm, Kristian

    2008-01-01

    3pAB5. Temporal resolution in the hearing system and auditory evoked potentials. Kristian Beedholm, Institute of Biology, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark, beedholm@mail.dk; Lee A. Miller, Institute of Biology, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark, lee@biology.sdu.dk. A popular type of investigation with auditory evoked potentials (AEP) consists of mapping the dependency of the envelope following response on the AM frequency. This results in what is called the modulation rate transfer function (MRTF). The physiological interpretation of the MRTF is not straightforward, but it is often used as a measure of the ability of the auditory system to encode temporal changes. It is, however, shown here that the MRTF must depend on the waveform of the click-evoked AEP (ceAEP), which does not relate directly to temporal resolution. The theoretical...
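
    A stylized illustration of the argument above, not the authors' model: if the envelope-following response is approximated as the click-evoked AEP convolved with a periodic train of envelope peaks, the predicted MRTF is set by the ceAEP waveform rather than by temporal resolution per se; all waveforms and numbers are invented.

```python
import numpy as np

fs = 8000.0                                    # Hz, simulation rate
t = np.arange(0, 0.05, 1 / fs)
ce_aep = np.exp(-t / 0.004) * np.sin(2 * np.pi * 90 * t)   # damped ~90 Hz wavelet (stylized ceAEP)

def predicted_efr_amplitude(am_rate, duration=1.0):
    """Spectral amplitude at the modulation rate of (ceAEP * periodic peak train)."""
    n = int(duration * fs)
    train = np.zeros(n)
    train[::int(round(fs / am_rate))] = 1.0     # one "envelope peak" per AM cycle
    response = np.convolve(train, ce_aep)[:n]
    spectrum = np.abs(np.fft.rfft(response)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return 2 * spectrum[np.argmin(np.abs(freqs - am_rate))]

rates = [20, 40, 80, 160, 320]                  # chosen so each divides fs evenly
print({r: round(predicted_efr_amplitude(r), 4) for r in rates})
```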

  3. Stimulator with arbitrary waveform for auditory evoked potentials

    Energy Technology Data Exchange (ETDEWEB)

    Martins, H R; Romao, M; Placido, D; Provenzano, F; Tierra-Criollo, C J [Universidade Federal de Minas Gerais (UFMG), Departamento de Engenharia Eletrica (DEE), Nucleo de Estudos e Pesquisa em Engenharia Biomedica NEPEB, Av. Ant. Carlos, 6627, sala 2206, Pampulha, Belo Horizonte, MG, 31.270-901 (Brazil)

    2007-11-15

    Technological improvements help many medical areas. Audiometric exams involving auditory evoked potentials allow better diagnosis of auditory disorders. This paper proposes the development of a stimulator based on a digital signal processor. This stimulator is the first step of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). The stimulator can generate arbitrary waveforms such as sine waves, amplitude-modulated tones, pulses, bursts, and pips. The waveforms are generated through a graphical interface programmed in C++ in which the user can define the parameters of the waveform. Furthermore, the user can set the exam parameters such as the number of stimuli, time with stimulation (Time ON), and time without stimulus (Time OFF). In future work, other parts of the system will be implemented, including electroencephalogram acquisition and signal processing to estimate and analyze the evoked potentials.
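
    A minimal sketch of the waveform types and Time ON/OFF scheduling described above, written as plain NumPy functions; parameter names and values are illustrative and not taken from the device's software.

```python
import numpy as np

def am_tone(fs, dur, fc, fm, depth=1.0):
    """Amplitude-modulated tone: carrier fc modulated at rate fm."""
    t = np.arange(int(fs * dur)) / fs
    envelope = (1 + depth * np.sin(2 * np.pi * fm * t)) / (1 + depth)
    return envelope * np.sin(2 * np.pi * fc * t)

def tone_pip(fs, dur, fc, ramp=0.001):
    """Short tone pip with linear onset/offset ramps."""
    t = np.arange(int(fs * dur)) / fs
    pip = np.sin(2 * np.pi * fc * t)
    n_ramp = int(fs * ramp)
    window = np.ones(t.size)
    window[:n_ramp] = np.linspace(0, 1, n_ramp)
    window[-n_ramp:] = np.linspace(1, 0, n_ramp)
    return pip * window

def stimulus_train(fs, waveform, n_stimuli, time_on, time_off):
    """Repeat a waveform with silent gaps: simple Time ON / Time OFF scheduling."""
    on = np.zeros(int(fs * time_on))
    on[:min(waveform.size, on.size)] = waveform[:on.size]
    silence = np.zeros(int(fs * time_off))
    return np.concatenate([np.concatenate([on, silence]) for _ in range(n_stimuli)])

fs = 48000
train = stimulus_train(fs, am_tone(fs, 0.5, 1000, 40), n_stimuli=10,
                       time_on=0.5, time_off=0.3)
print(train.shape)
```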

  4. Auditory aura in frontal opercular epilepsy: sounds from afar.

    Science.gov (United States)

    Thompson, Stephen A; Alexopoulos, Andreas; Bingaman, William; Gonzalez-Martinez, Jorge; Bulacio, Juan; Nair, Dileep; So, Norman K

    2015-06-01

    Auditory auras are typically considered to localize to the temporal neocortex. Herein, we present two cases of frontal operculum/perisylvian epilepsy with auditory auras. Following a non-invasive evaluation, including ictal SPECT and magnetoencephalography, implicating the frontal operculum, these cases were evaluated with invasive monitoring, using stereoelectroencephalography and subdural (plus depth) electrodes, respectively. Spontaneous and electrically-induced seizures showed an ictal onset involving the frontal operculum in both cases. A typical auditory aura was triggered by stimulation of the frontal operculum in one. Resection of the frontal operculum and subjacent insula rendered one case seizure- (and aura-) free. From a hodological (network) perspective, we discuss these findings with consideration of the perisylvian and insular network(s) interconnecting the frontal and temporal lobes, and revisit the non-invasive data, specifically that of ictal SPECT.

  5. Spontaneous synchronized tapping to an auditory rhythm in a chimpanzee.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2013-01-01

    Humans actively use behavioral synchrony such as dancing and singing when they intend to form affiliative relationships. Such advanced synchronous movement occurs even unconsciously when we hear rhythmically complex music. A foundation for this tendency may be an evolutionary adaptation for group living, but the evolutionary origins of human synchronous activity are unclear. Here we show the first evidence that a member of our closest living relatives, a chimpanzee, spontaneously synchronizes her movement with an auditory rhythm: after training to tap illuminated keys on an electric keyboard, one chimpanzee spontaneously aligned her tapping with the sound when she heard an isochronous distractor sound. This result indicates that a sensitivity to, and a tendency toward, synchronous movement with an auditory rhythm exist in chimpanzees, although humans may have expanded it into unique forms of auditory and visual communication during the course of human evolution.

  6. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2012-09-01

    Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002), while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task both in perceived time and in precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
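
    A minimal sketch of the standard Bayesian (maximum-likelihood) cue-combination prediction referred to above, assuming unimodal thresholds give the single-cue noise sigmas; the numbers in the example are invented.

```python
import numpy as np

def mle_prediction(sigma_a, sigma_v, est_a, est_v):
    """Optimal audio-visual combination: reliability weights, combined estimate, combined sigma."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    w_v = 1.0 - w_a
    est_av = w_a * est_a + w_v * est_v
    sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return est_av, sigma_av

# In a spatial task vision is typically more precise, so it dominates the
# predicted bimodal estimate; in a temporal task the roles reverse.
print(mle_prediction(sigma_a=8.0, sigma_v=2.0, est_a=5.0, est_v=0.0))    # vision-weighted (space)
print(mle_prediction(sigma_a=0.05, sigma_v=0.20, est_a=0.0, est_v=0.1))  # audition-weighted (time)
```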

  7. Validation of the Emotiv EPOC® EEG gaming system for measuring research quality auditory ERPs

    OpenAIRE

    Badcock, Nicholas A.; Petroula Mousikou; Yatin Mahajan; Peter de Lissa; Johnson Thie; Genevieve McArthur

    2013-01-01

    Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants – particularly children. Recently, a commercial gaming electroencephalography (EEG) system ha...

  8. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion.

    Science.gov (United States)

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information.

  9. Relationship between Selected Auditory and Visual Receptive Skills and Academic Achievement.

    Science.gov (United States)

    Bryant, Lynda Carol

    To observe the relationship of auditory and visual receptive skills to achievement in reading, 80 eight-year-old children were given a diagnostic test battery which examined three receptive skills--attention to stimuli, discrimination, and memory--within three sensory modalities--auditory, visual, and auditory-visual. The control group consisted…

  10. Effects of Multimodal Presentation and Stimulus Familiarity on Auditory and Visual Processing

    Science.gov (United States)

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2010-01-01

    Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…

  11. Plasticity in tinnitus patients : a role for the efferent auditory system?

    NARCIS (Netherlands)

    Geven, Leontien I.; Koeppl, Christine; de Kleine, Emile; van Dijk, Pim

    2014-01-01

    Hypothesis: The role of the corticofugal efferent auditory system in the origin or maintenance of tinnitus is currently mostly overlooked. Changes in the balance between excitation and inhibition after an auditory trauma are likely to play a role in the origin of tinnitus. The efferent auditory syst

  12. Bilateral Mandibular Condylar Fractures with Associated External Auditory Canal Fractures and Otorrhagia

    OpenAIRE

    Dang, David

    2016-01-01

    A rare case of bilateral mandibular condylar fractures associated with bilateral external auditory canal fractures and otorrhagia is reported. The more severe external auditory canal fracture was present on the side of high condylar fracture, and the less severe external auditory canal fracture was ipsilateral to the condylar neck fracture. A mechanism of injury is proposed to account for such findings.

  13. Bilateral Mandibular Condylar Fractures with Associated External Auditory Canal Fractures and Otorrhagia.

    Science.gov (United States)

    Dang, David

    2007-01-01

    A rare case of bilateral mandibular condylar fractures associated with bilateral external auditory canal fractures and otorrhagia is reported. The more severe external auditory canal fracture was present on the side of high condylar fracture, and the less severe external auditory canal fracture was ipsilateral to the condylar neck fracture. A mechanism of injury is proposed to account for such findings.

  14. Older adults' recognition of bodily and auditory expressions of emotion.

    Science.gov (United States)

    Ruffman, Ted; Sullivan, Susan; Dittrich, Winand

    2009-09-01

    This study compared young and older adults' ability to recognize bodily and auditory expressions of emotion and to match bodily and facial expressions to vocal expressions. Using emotion discrimination and matching techniques, participants assessed emotion in voices (Experiment 1), point-light displays (Experiment 2), and still photos of bodies with faces digitally erased (Experiment 3). Older adults were worse, at least some of the time, in recognition of anger, sadness, fear, and happiness in bodily expressions and of anger in vocal expressions. Compared with young adults, older adults also found it more difficult to match auditory expressions to facial expressions (5 of 6 emotions) and bodily expressions (3 of 6 emotions).

  15. Inversion of Auditory Spectrograms, Traditional Spectrograms, and Other Envelope Representations

    DEFF Research Database (Denmark)

    Decorsière, Remi Julien Blaise; Søndergaard, Peter Lempel; MacDonald, Ewen

    2015-01-01

    implementations of this framework are presented for auditory spectrograms, where the filterbank is based on the behavior of the basilar membrane and envelope extraction is modeled on the response of inner hair cells. One implementation is direct while the other is a two-stage approach that is computationally simpler. While both can accurately invert an auditory spectrogram, the two-stage approach performs better on time-domain metrics. The same framework is applied to traditional spectrograms based on the magnitude of the short-time Fourier transform. Inspired by human perception of loudness, a modification...
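
    A minimal sketch of iterative envelope-representation inversion, using a plain magnitude STFT spectrogram and iterative phase re-estimation; this stands in for the general idea, not the paper's auditory-spectrogram framework, and uses only scipy.signal.

```python
import numpy as np
from scipy.signal import stft, istft

def invert_magnitude_spectrogram(mag, fs, nperseg=512, n_iter=100, seed=0):
    """Iteratively re-estimate phase so a signal's spectrogram matches `mag`."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        _, x = istft(mag * phase, fs=fs, nperseg=nperseg)    # candidate signal
        _, _, spec = stft(x, fs=fs, nperseg=nperseg)         # its spectrogram
        cols = min(spec.shape[1], mag.shape[1])
        phase = np.ones(mag.shape, dtype=complex)
        phase[:, :cols] = np.exp(1j * np.angle(spec[:, :cols]))  # keep phase, discard magnitude
    _, x = istft(mag * phase, fs=fs, nperseg=nperseg)
    return x

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
target = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)
_, _, spec = stft(target, fs=fs, nperseg=512)
recovered = invert_magnitude_spectrogram(np.abs(spec), fs)
print(recovered.shape)
```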

  16. Auditory streaming of tones of uncertain frequency, level, and duration.

    Science.gov (United States)

    Chang, An-Chieh; Lutfi, Robert A; Lee, Jungmee

    2015-12-01

    Stimulus uncertainty is known to critically affect auditory masking, but its influence on auditory streaming has been largely ignored. Standard ABA-ABA tone sequences were made increasingly uncertain by increasing the sigma of normal distributions from which the frequency, level, or duration of tones were randomly drawn. Consistent with predictions based on a model of masking by Lutfi, Gilbertson, Chang, and Stamas [J. Acoust. Soc. Am. 134, 2160-2170 (2013)], the frequency difference for which A and B tones formed separate streams increased as a linear function of sigma in tone frequency but was much less affected by sigma in tone level or duration.
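
    A minimal sketch of how ABA_ triplet sequences with controlled stimulus uncertainty might be generated, here jittering tone frequency by a normal distribution whose sigma sets the uncertainty; all parameter values are illustrative, not the study's.

```python
import numpy as np

def aba_sequence(fs, n_triplets, f_a=1000.0, delta_semitones=6.0,
                 sigma_semitones=0.0, tone_dur=0.05, gap=0.02, seed=0):
    """Concatenated ABA_ triplets; sigma_semitones controls per-tone frequency jitter."""
    rng = np.random.default_rng(seed)
    f_b = f_a * 2 ** (delta_semitones / 12)
    t = np.arange(int(fs * tone_dur)) / fs
    silence = np.zeros(int(fs * gap))

    def tone(f_nominal):
        # draw the presented frequency from a normal distribution (in semitones)
        f = f_nominal * 2 ** (rng.normal(0.0, sigma_semitones) / 12)
        return np.sin(2 * np.pi * f * t) * np.hanning(t.size)

    def triplet():
        return np.concatenate([tone(f_a), silence, tone(f_b), silence,
                               tone(f_a), silence, np.zeros(t.size), silence])  # "_" slot

    return np.concatenate([triplet() for _ in range(n_triplets)])

seq = aba_sequence(fs=44100, n_triplets=20, sigma_semitones=2.0)
print(seq.shape)
```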

  17. Formant compensation for auditory feedback with English vowels

    DEFF Research Database (Denmark)

    Mitsuya, Takashi; MacDonald, Ewen N; Munhall, Kevin G;

    2015-01-01

    Past studies have shown that speakers spontaneously adjust their speech acoustics in response to their auditory feedback perturbed in real time. In the case of formant perturbation, the majority of studies have examined speaker's compensatory production using the English vowel /ɛ/ as in the word ... to differences in the degree of lingual contact or jaw openness. This may in turn influence the ways in which speakers compensate for auditory feedback. The aim of the current study was to examine speakers' compensatory behavior with six English monophthongs. Specifically, the current study tested to see

  18. Designing auditory cues for Parkinson's disease gait rehabilitation.

    Science.gov (United States)

    Cancela, Jorge; Moreno, Eugenio M; Arredondo, Maria T; Bonato, Paolo

    2014-01-01

    Recent work has shown that Parkinson's disease (PD) patients can benefit greatly from performing rehabilitation exercises based on audio cueing and music therapy. In particular, gait can benefit from repetitive sessions of exercises using auditory cues. Nevertheless, all of these experiments have been based on the use of a metronome as the auditory stimulus. In this work, human-computer interaction methodologies were used to design new cues that could improve the long-term engagement of PD patients in these repetitive routines. The study was also extended to commercial music and musical pieces by analyzing features and characteristics that could improve the engagement of PD patients in rehabilitation tasks.

  19. Auditory brainstem responses predict auditory nerve fiber thresholds and frequency selectivity in hearing impaired chinchillas.

    Science.gov (United States)

    Henry, Kenneth S; Kale, Sushrut; Scheidt, Ryan E; Heinz, Michael G

    2011-10-01

    Noninvasive auditory brainstem responses (ABRs) are commonly used to assess cochlear pathology in both clinical and research environments. In the current study, we evaluated the relationship between ABR characteristics and more direct measures of cochlear function. We recorded ABRs and auditory nerve (AN) single-unit responses in seven chinchillas with noise-induced hearing loss. ABRs were recorded for 1-8 kHz tone burst stimuli both before and several weeks after 4 h of exposure to a 115 dB SPL, 50 Hz band of noise with a center frequency of 2 kHz. Shifts in ABR characteristics (threshold, wave I amplitude, and wave I latency) following hearing loss were compared to AN-fiber tuning curve properties (threshold and frequency selectivity) in the same animals. As expected, noise exposure generally resulted in an increase in ABR threshold and decrease in wave I amplitude at equal SPL. Wave I amplitude at equal sensation level (SL), however, was similar before and after noise exposure. In addition, noise exposure resulted in decreases in ABR wave I latency at equal SL and, to a lesser extent, at equal SPL. The shifts in ABR characteristics were significantly related to AN-fiber tuning curve properties in the same animal at the same frequency. Larger shifts in ABR thresholds and ABR wave I amplitude at equal SPL were associated with greater AN threshold elevation. Larger reductions in ABR wave I latency at equal SL, on the other hand, were associated with greater loss of AN frequency selectivity. This result is consistent with linear systems theory, which predicts shorter time delays for broader peripheral frequency tuning. Taken together with other studies, our results affirm that ABR thresholds and wave I amplitude provide useful estimates of cochlear sensitivity. Furthermore, comparisons of ABR wave I latency to normative data at the same SL may prove useful for detecting and characterizing loss of cochlear frequency selectivity.
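
    A small numerical illustration of the linear-systems argument invoked above, assuming a gammatone-like channel stands in for peripheral tuning: broadening the filter bandwidth shortens the latency of its impulse-response envelope peak, consistent with the shorter ABR wave I latency at equal SL after loss of frequency selectivity. Bandwidth values are illustrative only.

```python
import numpy as np

def gammatone_envelope_peak_latency(bandwidth_hz, order=4, fs=100000):
    """Latency (ms) of the envelope peak of a gammatone-like impulse response."""
    t = np.arange(1, int(0.05 * fs)) / fs
    envelope = t ** (order - 1) * np.exp(-2 * np.pi * bandwidth_hz * t)
    return 1000 * t[np.argmax(envelope)]

# Broader peripheral tuning (larger bandwidth) -> shorter filter delay
print(gammatone_envelope_peak_latency(bandwidth_hz=240))   # sharper tuning, ~2 ms
print(gammatone_envelope_peak_latency(bandwidth_hz=720))   # broader tuning, ~0.7 ms
```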

  20. Auditory Rehabilitation in Rhesus Macaque Monkeys (Macaca mulatta) with Auditory Brainstem Implants

    Institute of Scientific and Technical Information of China (English)

    Zhen-Min Wang; Zhi-Jun Yang; Fu Zhao; Bo Wang; Xing-Chao Wang; Pei-Ran Qu; Pi-Nan Liu

    2015-01-01

    Background: Auditory brainstem implants (ABIs) have been used to treat deafness in patients with neurofibromatosis Type 2 and in nontumor patients. The lack of an appropriate animal model has limited the study of improving hearing rehabilitation with the device. This study aimed to establish an animal model of ABI in the adult rhesus macaque monkey (Macaca mulatta). Methods: Six adult rhesus macaque monkeys (M. mulatta) were included. Under general anesthesia, a multichannel ABI was implanted into the lateral recess of the fourth ventricle through a modified suboccipital-retrosigmoid (RS) approach. Electrical auditory brainstem response (EABR) waves were tested to ensure the optimal implant site. After the operation, the EABR and computed tomography (CT) were used to test and verify the effectiveness via electrophysiology and anatomy, respectively. The subjects underwent behavioral observation for 6 months, and the postoperative EABR was tested every two weeks from the 1st month after implant surgery. Results: The implant surgery lasted an average of 5.2 h, and no monkey died or was sacrificed. The averaged latencies of ABR peaks I, II, and IV were 1.27, 2.34, and 3.98 ms, respectively. A one-peak EABR wave was elicited during the operation, and one- or two-peak waves were elicited during the postoperative period. The EABR wave latencies appeared to be constant under different stimulus intensities; however, the amplitudes increased with stimulus intensity within a certain range. Conclusions: It is feasible and safe to implant ABIs in rhesus macaque monkeys (M. mulatta) through a modified suboccipital RS approach, and EABR and CT are valid tools for establishing the animal model. In addition, this should be an appropriate animal model for the electrophysiological and behavioral study of rhesus macaque monkeys with ABIs.

  1. Participação do cerebelo no processamento auditivo Participation of the cerebellum in auditory processing

    Directory of Open Access Journals (Sweden)

    Patrícia Maria Sens

    2007-04-01

    Full Text Available The cerebellum, traditionally conceived as an organ that coordinates motor activity, is today considered an important integration center for sensory input and for the coordination of the various phases of the cognitive process. AIM: To gather and systematize the information in the literature on the cerebellum's role in auditory perception. METHODS: Animal studies on the physiology and anatomy of the auditory pathways of the cerebellum were selected from the literature, as well as human studies on several functions of the cerebellum in auditory perception. The findings of the literature were discussed, and there is evidence that the cerebellum participates in the following cognitive functions related to hearing: speech generation; auditory processing; auditory attention; auditory memory; abstract reasoning; timing; problem solving; sensory discrimination; sensory information; language processing; linguistic operations. CONCLUSION: The information on the structures, functions, and auditory pathways of the cerebellum was found to be incomplete.

  2. Graded and discontinuous EphA-ephrinB expression patterns in the developing auditory brainstem.

    Science.gov (United States)

    Wallace, Matthew M; Harris, J Aaron; Brubaker, Donald Q; Klotz, Caitlyn A; Gabriele, Mark L

    2016-05-01

    Eph-ephrin interactions guide topographic mapping and pattern formation in a variety of systems. In contrast to other sensory pathways, their precise role in the assembly of central auditory circuits remains poorly understood. The auditory midbrain, or inferior colliculus (IC) is an intriguing structure for exploring guidance of patterned projections as adjacent subdivisions exhibit distinct organizational features. The central nucleus of the IC (CNIC) and deep aspects of its neighboring lateral cortex (LCIC, Layer 3) are tonotopically-organized and receive layered inputs from primarily downstream auditory sources. While less is known about more superficial aspects of the LCIC, its inputs are multimodal, lack a clear tonotopic order, and appear discontinuous, terminating in modular, patch/matrix-like distributions. Here we utilize X-Gal staining approaches in lacZ mutant mice (ephrin-B2, -B3, and EphA4) to reveal EphA-ephrinB expression patterns in the nascent IC during the period of projection shaping that precedes hearing onset. We also report early postnatal protein expression in the cochlear nuclei, the superior olivary complex, the nuclei of the lateral lemniscus, and relevant midline structures. Continuous ephrin-B2 and EphA4 expression gradients exist along frequency axes of the CNIC and LCIC Layer 3. In contrast, more superficial LCIC localization is not graded, but confined to a series of discrete ephrin-B2 and EphA4-positive Layer 2 modules. While heavily expressed in the midline, much of the auditory brainstem is devoid of ephrin-B3, including the CNIC, LCIC Layer 2 modular fields, the dorsal nucleus of the lateral lemniscus (DNLL), as well as much of the superior olivary complex and cochlear nuclei. Ephrin-B3 LCIC expression appears complementary to that of ephrin-B2 and EphA4, with protein most concentrated in presumptive extramodular zones. Described tonotopic gradients and seemingly complementary modular/extramodular patterns suggest Eph

  3. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.

  4. Divergent roles for thyroid hormone receptor β isoforms in the endocrine axis and auditory system

    Science.gov (United States)

    Abel, E. Dale; Boers, Mary-Ellen; Pazos-Moura, Carmen; Moura, Egberto; Kaulbach, Helen; Zakaria, Marjorie; Lowell, Bradford; Radovick, Sally; Liberman, M. Charles; Wondisford, Fredric

    1999-01-01

    Thyroid hormone receptors (TRs) modulate various physiological functions in many organ systems. The TRα and TRβ isoforms are products of 2 distinct genes, and the β1 and β2 isoforms are splice variants of the same gene. Whereas TRα1 and TRβ1 are widely expressed, expression of the TRβ2 isoform is mainly limited to the pituitary, triiodothyronine-responsive TRH neurons, the developing inner ear, and the retina. Mice with targeted disruption of the entire TRβ locus (TRβ-null) exhibit elevated thyroid hormone levels as a result of abnormal central regulation of thyrotropin, and also develop profound hearing loss. To clarify the contribution of the TRβ2 isoform to the function of the endocrine and auditory systems in vivo, we have generated mice with targeted disruption of the TRβ2 isoform. TRβ2-null mice have preserved expression of the TRα and TRβ1 isoforms. They develop a similar degree of central resistance to thyroid hormone as TRβ-null mice, indicating the important role of TRβ2 in the regulation of the hypothalamic-pituitary-thyroid axis. Growth hormone gene expression is marginally reduced. In contrast, TRβ2-null mice exhibit no evidence of hearing impairment, indicating that TRβ1 and TRβ2 subserve divergent roles in the regulation of auditory function. PMID:10430610

  5. Educational evaluation. The first step toward understanding and remediation of central auditory disorders.

    Science.gov (United States)

    Knapp, R M

    1985-05-01

    Of all the problems experienced by children with learning disabilities, a language disorder may be the most detrimental to school performance. Because the problems of a child with a language disorder are frequently not recognized until he begins school, it is important that the educational clinician, teacher, related professionals, and parents understand what a central auditory disorder is, that it may manifest itself as a language disorder, and the way it can academically and emotionally affect a child. Evaluation and identification of a child with a central auditory disorder is vital at an early stage of development; however, testing, while it appears simple, is an extremely complex process and is not always exact. Therefore, the educational clinician must be skilled and understand the frailties which exist in the test instrument and the testing situation. It must be remembered, also, that testing is only part of the diagnostic procedure. Organized, perceptive classroom observations are essential. These must be followed by multidisciplinary meetings that generate remedial procedures and directions to be taken by parents and teachers. Finally, parents must be accepted by professionals as reasonable, concerned, and able to offer knowledgeable insight into their child's learning problems. If a language disorder is suspected, professional help should be sought immediately. Truth is better than fiction or fantasy in helping a child become a happy, adjusted, productive human being.

  6. Dynamic Range Adaptation to Spectral Stimulus Statistics in Human Auditory Cortex

    Science.gov (United States)

    Schlichting, Nadine; Obleser, Jonas

    2014-01-01

    Classically, neural adaptation refers to a reduction in response magnitude by sustained stimulation. In human electroencephalography (EEG), neural adaptation has been measured, for example, as frequency-specific response decrease by previous stimulation. Only recently and mainly based on animal studies, it has been suggested that statistical properties in the stimulation lead to adjustments of neural sensitivity and affect neural response adaptation. However, it is thus far unresolved which statistical parameters in the acoustic stimulation spectrum affect frequency-specific neural adaptation, and on which time scales the effects take place. The present human EEG study investigated the potential influence of the overall spectral range as well as the spectral spacing of the acoustic stimulation spectrum on frequency-specific neural adaptation. Tones randomly varying in frequency were presented passively and computational modeling of frequency-specific neural adaptation was used. Frequency-specific adaptation was observed for all presentation conditions. Critically, however, the spread of adaptation (i.e., degree of coadaptation) in tonotopically organized regions of auditory cortex changed with the spectral range of the acoustic stimulation. In contrast, spectral spacing did not affect the spread of frequency-specific adaptation. Therefore, changes in neural sensitivity in auditory cortex are directly coupled to the overall spectral range of the acoustic stimulation, which suggests that neural adjustments to spectral stimulus statistics occur over a time scale of multiple seconds. PMID:24381293

  7. IMPAIRED PROCESSING IN THE PRIMARY AUDITORY CORTEX OF AN ANIMAL MODEL OF AUTISM

    Directory of Open Access Journals (Sweden)

    Renata eAnomal

    2015-11-01

    Full Text Available Autism is a neurodevelopmental disorder clinically characterized by deficits in communication, lack of social interaction, and repetitive behaviors with restricted interests. A number of studies have reported that sensory perception abnormalities are common in autistic individuals and might contribute to the complex behavioral symptoms of the disorder. In this context, hearing incongruence is particularly prevalent. Considering that some of this abnormal processing might stem from an imbalance of inhibitory and excitatory drives in brain circuitries, we used an animal model of autism induced by valproic acid (VPA) during pregnancy in order to investigate the tonotopic organization of the primary auditory cortex (AI) and its local inhibitory circuitry. Our results show that VPA rats have distorted primary auditory maps with over-representation of high frequencies, broadly tuned receptive fields, and higher sound intensity thresholds as compared to controls. However, we did not detect differences in the number of parvalbumin-positive interneurons in AI of VPA and control rats. Altogether, our findings show that neurophysiological impairments of hearing perception in this autism model occur independently of alterations in the number of parvalbumin-expressing interneurons. These data support the notion that fine circuit alterations, rather than gross cellular modification, could lead to neurophysiological changes in the autistic brain.

  8. Changes in Electroencephalogram Approximate Entropy Reflect Auditory Processing and Functional Complexity in Frogs

    Institute of Scientific and Technical Information of China (English)

    Yansu LIU; Yanzhu FAN; Fei XUE; Xizi YUE; Steven E BRAUTH; Yezhong TANG; Guangzhan FANG

    2016-01-01

    Brain systems engage in what are generally considered to be among the most complex forms of information processing. In the present study, we investigated the functional complexity of anuran auditory processing using the approximate entropy (ApEn) protocol for electroencephalogram (EEG) recordings from the forebrain and midbrain while male and female music frogs (Babina daunchina) listened to acoustic stimuli whose biological significance varied. The stimuli used were synthesized white noise (reflecting a novel signal), conspecific male advertisement calls with either high or low sexual attractiveness (reflecting sexual selection) and silence (reflecting a baseline). The results showed that 1) ApEn evoked by conspecific calls exceeded ApEn evoked by synthesized white noise in the left mesencephalon, indicating this structure plays a critical role in processing acoustic signals with biological significance; 2) ApEn in the mesencephalon was significantly higher than for the telencephalon, consistent with the fact that the anuran midbrain contains a large well-organized auditory nucleus (torus semicircularis) while the forebrain does not; 3) for females ApEn in the mesencephalon was significantly different than that of males, suggesting that males and females process biological stimuli related to mate choice differently.
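
    Approximate entropy quantifies the irregularity of a time series: it compares how often patterns of length m recur within a tolerance r against how often the extended patterns of length m + 1 do. The sketch below illustrates the standard ApEn computation on a one-dimensional trace; the parameter choices (m = 2, r = 0.2 SD) follow common convention and are not taken from the study.

        # Illustrative approximate entropy (ApEn) for a 1-D signal such as one EEG channel.
        import numpy as np

        def approximate_entropy(x, m=2, r_factor=0.2):
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()          # tolerance, conventionally 0.2 * SD
            N = len(x)

            def phi(m):
                # Overlapping embedding vectors of length m.
                emb = np.array([x[i:i + m] for i in range(N - m + 1)])
                # Chebyshev distance between every pair of vectors.
                dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
                # Fraction of vectors within tolerance (self-matches included).
                c = (dist <= r).mean(axis=1)
                return np.mean(np.log(c))

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(0)
        print(approximate_entropy(rng.standard_normal(500)))                 # irregular: higher ApEn
        print(approximate_entropy(np.sin(np.linspace(0, 20 * np.pi, 500))))  # regular: lower ApEn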

  9. Pairing tone trains with vagus nerve stimulation induces temporal plasticity in auditory cortex.

    Science.gov (United States)

    Shetake, Jai A; Engineer, Navzer D; Vrana, Will A; Wolf, Jordan T; Kilgard, Michael P

    2012-01-01

    The selectivity of neurons in sensory cortex can be modified by pairing neuromodulator release with sensory stimulation. Repeated pairing of electrical stimulation of the cholinergic nucleus basalis, for example, induces input specific plasticity in primary auditory cortex (A1). Pairing nucleus basalis stimulation (NBS) with a tone increases the number of A1 neurons that respond to the paired tone frequency. Pairing NBS with fast or slow tone trains can respectively increase or decrease the ability of A1 neurons to respond to rapidly presented tones. Pairing vagus nerve stimulation (VNS) with a single tone alters spectral tuning in the same way as NBS-tone pairing without the need for brain surgery. In this study, we tested whether pairing VNS with tone trains can change the temporal response properties of A1 neurons. In naïve rats, A1 neurons respond strongly to tones repeated at rates up to 10 pulses per second (pps). Repeatedly pairing VNS with 15 pps tone trains increased the temporal following capacity of A1 neurons and repeatedly pairing VNS with 5 pps tone trains decreased the temporal following capacity of A1 neurons. Pairing VNS with tone trains did not alter the frequency selectivity or tonotopic organization of auditory cortex neurons. Since VNS is well tolerated by patients, VNS-tone train pairing represents a viable method to direct temporal plasticity in a variety of human conditions associated with temporal processing deficits.

  10. A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors.

    Science.gov (United States)

    Vanarse, Anup; Osseiran, Adam; Rassau, Alexander

    2016-01-01

    Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and as a result tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways that are similar to neural signals. This allows for much lower power consumption due to an ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in the understanding of biological sensing and in advanced electronics have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state-of-the-art in neuromorphic implementation of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing together these key contributions, we suggest a future research direction for further development of the neuromorphic sensing field.
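
    A key idea behind such sensors is event-driven (delta) encoding: an output event is emitted only when the input changes by more than a threshold, so sparse spikes replace dense samples. The toy sketch below illustrates the principle in software; it is not a model of any particular neuromorphic chip, and the threshold value is an arbitrary assumption.

        # Toy delta (threshold-crossing) event encoder: emit ON/OFF events only when the
        # signal changes by more than a fixed threshold, instead of sampling every frame.
        import numpy as np

        def delta_events(signal, threshold=0.1):
            events = []          # (sample index, +1 for ON or -1 for OFF)
            ref = signal[0]      # last value that triggered an event
            for i, s in enumerate(signal[1:], start=1):
                while s - ref >= threshold:
                    ref += threshold
                    events.append((i, +1))
                while ref - s >= threshold:
                    ref -= threshold
                    events.append((i, -1))
            return events

        t = np.linspace(0, 1, 1000)
        ev = delta_events(np.sin(2 * np.pi * 3 * t))
        print(f"{len(ev)} events instead of 1000 samples")   # sparse, change-driven output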

  11. Automaticity and primacy of auditory streaming: Concurrent subjective and objective measures.

    Science.gov (United States)

    Billig, Alexander J; Carlyon, Robert P

    2016-03-01

    Two experiments used subjective and objective measures to study the automaticity and primacy of auditory streaming. Listeners heard sequences of "ABA-" triplets, where "A" and "B" were tones of different frequencies and "-" was a silent gap. Segregation was more frequently reported, and rhythmically deviant triplets less well detected, for a greater between-tone frequency separation and later in the sequence. In Experiment 1, performing a competing auditory task for the first part of the sequence led to a reduction in subsequent streaming compared to when the tones were attended throughout. This is consistent with focused attention promoting streaming, and/or with attention switches resetting it. However, the proportion of segregated reports increased more rapidly following a switch than at the start of a sequence, indicating that some streaming occurred automatically. Modeling ruled out a simple "covert attention" account of this finding. Experiment 2 required listeners to perform subjective and objective tasks concurrently. It revealed superior performance during integrated compared to segregated reports, beyond that explained by the codependence of the two measures on stimulus parameters. We argue that listeners have limited access to low-level stimulus representations once perceptual organization has occurred, and that subjective and objective streaming measures partly index the same processes.

  12. Microfluidic devices for imaging neurological response of Drosophila melanogaster larva to auditory stimulus.

    Science.gov (United States)

    Ghaemi, Reza; Rezai, Pouya; Iyengar, Balaji G; Selvaganapathy, Ponnambalam Ravi

    2015-02-21

    Two microfluidic devices (pneumatic chip and FlexiChip) have been developed for immobilization and live-intact fluorescence functional imaging of the Drosophila larva's Central Nervous System (CNS) in response to controlled acoustic stimulation. The pneumatic chip is suited for automated loading/unloading and potentially allows high-throughput operation for studies with a large number of larvae, while the FlexiChip provides a simple and quick manual option for animal loading and is suited for smaller studies. Both chips were capable of significantly reducing the endogenous CNS movement while still allowing the study of sound-stimulated CNS activities of Drosophila 3rd instar larvae using the genetically encoded calcium indicator GCaMP5. Temporal effects of sound frequency (50-5000 Hz) and intensity (95-115 dB) on CNS activities were investigated and a peak neuronal response at 200 Hz was identified. Our lab-on-chip devices can not only aid further studies of the Drosophila larva's auditory responses but can also be adopted for functional imaging of CNS activities in response to other sensory cues. Auditory stimuli and the corresponding response of the CNS can potentially be used as a tool to study the effect of chemicals on the neurophysiology of this model organism.

  13. Behavioral modulation of neural encoding of click-trains in the primary and nonprimary auditory cortex of cats.

    Science.gov (United States)

    Dong, Chao; Qin, Ling; Zhao, Zhenling; Zhong, Renjia; Sato, Yu

    2013-08-07

    Neural representation of acoustic stimuli in the mammal auditory cortex (AC) has been extensively studied using anesthetized or awake nonbehaving animals. Recently, several studies have shown that active engagement in an auditory behavioral task can substantially change the neuron response properties compared with when animals were passively listening to the same sounds; however, these studies mainly investigated the effect of behavioral state on the primary auditory cortex and the reported effects were inconsistent. Here, we examined the single-unit spike activities in both the primary and nonprimary areas along the dorsal-to-ventral direction of the cat's AC, when the cat was actively discriminating click-trains at different repetition rates and when it was passively listening to the same stimuli. We found that the changes due to task engagement were heterogeneous in the primary AC; some neurons showed significant increases in driven firing rate, others showed decreases. But in the nonprimary AC, task engagement predominantly enhanced the neural responses, resulting in a substantial improvement of the neural discriminability of click-trains. Additionally, our results revealed that neural responses synchronizing to click-trains gradually decreased along the dorsal-to-ventral direction of cat AC, while nonsynchronizing responses remained less changed. The present study provides new insights into the hierarchical organization of AC along the dorsal-to-ventral direction and highlights the importance of using behavioral animals to investigate the later stages of cortical processing.

  14. Does the whistling thorn acacia (Acacia drepanolobium) use auditory aposematism to deter mammalian herbivores?

    Science.gov (United States)

    Lev-Yadun, Simcha

    2016-08-02

    Auditory signaling, including aposematism, characterizes many terrestrial animals. Auditory aposematism, in which certain animals use auditory aposematic signals to fend off enemies, is well known, for instance, in rattlesnakes. Auditory signaling by plants toward animals and other plants is an emerging area of plant biology that still suffers from a limited amount of solid data. Here I propose that auditory aposematism operates in the African whistling thorn acacia (Acacia drepanolobium = Vachellia drepanolobium). In this tree, the large, hollow thorn bases whistle when the wind blows. This type of aposematism complements the well-known conspicuous-thorn- and mutualistic-ant-based aposematism during the day and may operate at night when the conspicuous thorns are invisible.

  15. Neuronal representations of distance in human auditory cortex.

    Science.gov (United States)

    Kopčo, Norbert; Huang, Samantha; Belliveau, John W; Raij, Tommi; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-07-03

    Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15-100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects' discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.

  16. Can Children with (Central) Auditory Processing Disorders Ignore Irrelevant Sounds?

    Science.gov (United States)

    Elliott, Emily M.; Bhagat, Shaum P.; Lynn, Sharon D.

    2007-01-01

    This study investigated the effects of irrelevant sounds on the serial recall performance of visually presented digits in a sample of children diagnosed with (central) auditory processing disorders [(C)APD] and age- and span-matched control groups. The irrelevant sounds used were samples of tones and speech. Memory performance was significantly…

  17. Effect of stimulus hemifield on free-field auditory saltation.

    Science.gov (United States)

    Ishigami, Yoko; Phillips, Dennis P

    2008-07-01

    Auditory saltation is the orderly misperception of the spatial location of repetitive click stimuli emitted from two successive locations when the inter-click intervals (ICIs) are sufficiently short. The clicks are perceived as originating not only from the actual source locations, but also from locations between them. In two tasks, the present experiment compared free-field auditory saltation for 90 degrees excursions centered in the frontal, rear, left and right acoustic hemifields, by measuring the ICI at which subjects report 50% illusion strength (subjective task) and the ICI at which subjects could not distinguish real motion from saltation (objective task). A comparison of the saltation illusion for excursions spanning the midline (i.e. for frontal or rear hemifields) with that for stimuli in the lateral hemifields (left or right) revealed that the illusion was weaker for the midline-straddling conditions (i.e. the illusion was restricted to shorter ICIs). This may reflect the contribution of two perceptual channels to the task in the midline conditions (as opposed to one in the lateral hemifield conditions), or the fact that the temporal dynamics of localization differ between the midline and lateral hemifield conditions. A subsidiary comparison of saltation supported in the left and right auditory hemifields, and therefore by the right and left auditory forebrains, revealed no difference.

  18. Increased Auditory Startle Reflex in Children with Functional Abdominal Pain

    NARCIS (Netherlands)

    Bakker, Mirte J.; Boer, Frits; Benninga, Marc A.; Koelman, Johannes H. T. M.; Tijssen, Marina A. J.

    2010-01-01

    Objective To test the hypothesis that children with abdominal pain-related functional gastrointestinal disorders have a general hypersensitivity for sensory stimuli. Study design Auditory startle reflexes were assessed in 20 children classified according to Rome III classifications of abdominal pain

  19. Speech Compensation for Time-Scale-Modified Auditory Feedback

    Science.gov (United States)

    Ogane, Rintaro; Honda, Masaaki

    2014-01-01

    Purpose: The purpose of this study was to examine speech compensation in response to time-scale-modified auditory feedback during the transition of the semivowel for a target utterance of /ija/. Method: Each utterance session consisted of 10 control trials in the normal feedback condition followed by 20 perturbed trials in the modified auditory…

  20. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    Science.gov (United States)

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.
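
    Because each vowel's harmonics are integer multiples of its own F0, two concurrent vowels with different F0s produce interleaved harmonic series that a rate-place (tonotopic) representation can in principle separate. A brief worked example with hypothetical F0 values:

        # Worked example (illustrative F0s, not the study's stimuli): harmonics of two
        # concurrent vowels fall at integer multiples of their respective fundamentals.
        f0_a, f0_i = 100.0, 125.0                       # hypothetical F0s of /a/ and /i/ in Hz
        harmonics_a = [n * f0_a for n in range(1, 11)]  # 100, 200, 300, ...
        harmonics_i = [n * f0_i for n in range(1, 11)]  # 125, 250, 375, ...
        print(harmonics_a)
        print(harmonics_i)
        # The lower harmonics occupy distinct frequencies, so a rate-place code across
        # the tonotopic axis can resolve both series and support segregation by pitch.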

  1. Exploring Auditory Saltation Using the "Reduced-Rabbit" Paradigm

    Science.gov (United States)

    Getzmann, Stephan

    2009-01-01

    Sensory saltation is a spatiotemporal illusion in which the judged positions of stimuli are shifted toward subsequent stimuli that follow closely in time. So far, studies on saltation in the auditory domain have usually employed subjective rating techniques, making it difficult to exactly quantify the extent of saltation. In this study, temporal…

  2. Unilateral Auditory Neuropathy Caused by Cochlear Nerve Deficiency

    Directory of Open Access Journals (Sweden)

    Cheng Liu

    2012-01-01

    Full Text Available Objective. To explore the possible correlation between cochlear nerve deficiency (CND) and unilateral auditory neuropathy (AN). Methods. From a database of 85 patients with unilateral profound sensorineural hearing loss, eight who presented with evoked otoacoustic emissions (EOAEs) or cochlear microphonic (CM) in the affected ear were diagnosed with unilateral AN. Audiological and radiological records of the eight patients with unilateral AN were retrospectively reviewed. Results. Eight cases were diagnosed as having unilateral AN caused by CND. Seven had a type "A" tympanogram with normal EOAEs in both ears. The other patient had a unilateral type "B" tympanogram and absent OAEs but a recordable CM, consistent with middle ear effusion in the affected ear. For all the ears involved in the study, auditory brainstem responses (ABRs) were either absent or elicited only at the maximum output, and the cochlear nerve could not be identified on oblique sagittal MRI of the internal auditory canal. Conclusion. Cochlear nerve deficiency can be identified with electrophysiological evidence and may be a significant cause of unilateral AN. Inclined sagittal MRI of the internal auditory canal is recommended for the diagnosis of this disorder.

  3. The auditory startle response in post-traumatic stress disorder

    NARCIS (Netherlands)

    Siegelaar, S. E.; Olff, M.; Bour, L. J.; Veelo, D.; Zwinderman, A. H.; van Bruggen, G.; de Vries, G. J.; Raabe, S.; Cupido, C.; Koelman, J. H. T. M.; Tijssen, M. A. J.

    2006-01-01

    Post-traumatic stress disorder (PTSD) patients are considered to have excessive EMG responses in the orbicularis oculi (OO) muscle and excessive autonomic responses to startling stimuli. The aim of the present study was to gain more insight into the pattern of the generalized auditory startle reflex

  4. Music Genre Classification using an Auditory Memory Model

    DEFF Research Database (Denmark)

    2011-01-01

    Audio feature estimation is potentially improved by including higher- level models. One such model is the Auditory Short Term Memory (STM) model. A new paradigm of audio feature estimation is obtained by adding the influence of notes in the STM. These notes are identified when the perceptual...

  5. Central Auditory Nervous System Dysfunction in Echolalic Autistic Individuals.

    Science.gov (United States)

    Wetherby, Amy Miller; And Others

    1981-01-01

    The results showed that all the Ss had normal hearing on the monaural speech tests; however, there was indication of central auditory nervous system dysfunction in the language dominant hemisphere, inferred from the dichotic tests, for those Ss displaying echolalia. (Author)

  6. Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.

    Science.gov (United States)

    Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P

    2005-05-01

    The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1, the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus-specific plasticity and indicate that background conditions can strongly influence cortical plasticity.

  7. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Full Text Available Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.

  8. Synchronization and phonological skills: precise auditory timing hypothesis (PATH

    Directory of Open Access Journals (Sweden)

    Adam eTierney

    2014-11-01

    Full Text Available Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The precise auditory timing hypothesis predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  9. Persistent fluctuations in stride intervals under fractal auditory stimulation

    NARCIS (Netherlands)

    Marmelat, V.C.M.; Torre, K.; Beek, P.J.; Daffertshofer, A.

    2014-01-01

    Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence

  10. Active stream segregation specifically involves the left human auditory cortex.

    Science.gov (United States)

    Deike, Susann; Scheich, Henning; Brechmann, André

    2010-06-14

    An important aspect of auditory scene analysis is the sequential grouping of similar sounds into one "auditory stream" while keeping competing streams separate. In the present low-noise fMRI study we presented sequences of alternating high-pitch (A) and low-pitch (B) complex harmonic tones using acoustic parameters that allow the perception of either two separate streams or one alternating stream. However, the subjects were instructed to actively and continuously segregate the A from the B stream. This was controlled by the additional instruction to listen for rare level deviants only in the low-pitch stream. Compared to the control condition in which only one non-separable stream was presented the active segregation of the A from the B stream led to a selective increase of activation in the left auditory cortex (AC). Together with a similar finding from a previous study using a different acoustic cue for streaming, namely timbre, this suggests that the left auditory cortex plays a dominant role in active sequential stream segregation. However, we found cue differences within the left AC: Whereas in the posterior areas, including the planum temporale, activation increased for both acoustic cues, the anterior areas, including Heschl's gyrus, are only involved in stream segregation based on pitch.

  11. Biological Impact of Music and Software-Based Auditory Training

    Science.gov (United States)

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals--both young and old--encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in…

  12. Are Auditory and Visual Processing Deficits Related to Developmental Dyslexia?

    Science.gov (United States)

    Georgiou, George K.; Papadopoulos, Timothy C.; Zarouna, Elena; Parrila, Rauno

    2012-01-01

    The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological…

  13. Thalamic and parietal brain morphology predicts auditory category learning.

    Science.gov (United States)

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties.

  14. Changes in otoacoustic emissions during selective auditory and visual attention.

    Science.gov (United States)

    Walsh, Kyle P; Pasanen, Edward G; McFadden, Dennis

    2015-05-01

    Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing: the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2-3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater.

  15. Sprint starts and the minimum auditory reaction time.

    Science.gov (United States)

    Pain, Matthew T G; Hibbs, Angela

    2007-01-01

    The simple auditory reaction time is one of the fastest reaction times and is thought to be rarely less than 100 ms. The current false start criterion in a sprint used by the International Association of Athletics Federations is based on this assumed auditory reaction time of 100 ms. However, there is evidence, both anecdotal and from reflex research, that simple auditory reaction times of less than 100 ms can be achieved. Reaction time in nine athletes performing sprint starts in four conditions was measured using starting blocks instrumented with piezoelectric force transducers in each footplate that were synchronized with the starting signal. Only three conditions were used to calculate reaction times. The pre-motor and pseudo-motor time for two athletes were also measured across 13 muscles using surface electromyography (EMG) synchronized with the rest of the system. Five of the athletes had mean reaction times of less than 100 ms in at least one condition and 20% of all starts in the first two conditions had a reaction time of less than 100 ms. The results demonstrate that the neuromuscular-physiological component of simple auditory reaction times can be under 85 ms and that EMG latencies can be under 60 ms.
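
    In such instrumented-block setups, reaction time is essentially the interval between the start signal and the first sample at which footplate force rises above a baseline-derived criterion. The sketch below illustrates that computation on a synthetic force trace; the sampling rate, onset criterion, and trace are assumptions for illustration, not the study's instrumentation.

        # Illustrative onset detection: reaction time as the delay between the start
        # signal and the first force sample exceeding a baseline-based threshold.
        import numpy as np

        FS = 1000                        # assumed sampling rate (Hz)
        start_idx = 200                  # start signal 200 ms into the record

        rng = np.random.default_rng(1)
        force = rng.normal(0.0, 0.5, 1500)           # baseline noise (N)
        force[290:] += np.linspace(0, 400, 1210)     # force ramps up ~90 ms after the signal

        baseline = force[:start_idx]
        threshold = baseline.mean() + 5 * baseline.std()   # assumed onset criterion

        onset_idx = start_idx + np.argmax(force[start_idx:] > threshold)
        print(f"Estimated reaction time: {(onset_idx - start_idx) / FS * 1000:.0f} ms")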

  16. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.

    2016-01-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…

  17. Influence of Syllable Structure on L2 Auditory Word Learning

    Science.gov (United States)

    Hamada, Megumi; Goya, Hideki

    2015-01-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…

  18. Characteristics of Auditory Processing Disorders : A Systematic Review

    NARCIS (Netherlands)

    de Wit, Ellen; Visser-Bochane, Margot I; Steenbergen, Bert; van Dijk, Pim; van der Schans, Cees P; Luinge, Margreet R

    2016-01-01

    Purpose: The purpose of this review article is to describe characteristics of auditory processing disorders (APD) by evaluating the literature in which children with suspected or diagnosed APD were compared with typically developing children and to determine whether APD must be regarded as a deficit

  19. Characteristics of auditory processing disorders: A systematic review

    NARCIS (Netherlands)

    Wit, E. de; Visser-Bochane, M.I.; Steenbergen, B.; Dijk, P. van; Schans, C.P. van der; Luinge, M.R.

    2016-01-01

    Purpose: The purpose of this review article is to describe characteristics of auditory processing disorders (APD) by evaluating the literature in which children with suspected or diagnosed APD were compared with typically developing children and to determine whether APD must be regarded as a deficit

  20. Vestibular receptors contribute to cortical auditory evoked potentials.

    Science.gov (United States)

    Todd, Neil P M; Paillard, Aurore C; Kluk, Karolina; Whittle, Elizabeth; Colebatch, James G

    2014-03-01

    Acoustic sensitivity of the vestibular apparatus is well-established, but the contribution of vestibular receptors to the late auditory evoked potentials of cortical origin is unknown. Evoked potentials from 500 Hz tone pips were recorded using 70-channel EEG at several intensities below and above the vestibular acoustic threshold, as determined by vestibular evoked myogenic potentials (VEMPs). In healthy subjects, both mid- and long-latency auditory evoked potentials (AEPs), consisting of Na, Pa, N1 and P2 waves, were observed in the sub-threshold conditions. However, in passing through the vestibular threshold, systematic changes were observed in the morphology of the potentials and in the intensity dependence of their amplitude and latency. These changes were absent in a patient without functioning vestibular receptors. In particular, for the healthy subjects there was a fronto-central negativity, which appeared at about 42 ms, referred to as an N42, prior to the AEP N1. Source analysis of both the N42 and N1 indicated involvement of cingulate cortex, as well as bilateral superior temporal cortex. Our findings are best explained by vestibular receptors contributing to what were hitherto considered as purely auditory evoked potentials and in addition tentatively identify a new component that appears to be primarily of vestibular origin.

  1. Implications of blast exposure for central auditory function: A review

    Directory of Open Access Journals (Sweden)

    Frederick J. Gallun, PhD

    2012-10-01

    Full Text Available Auditory system functions, from peripheral sensitivity to central processing capacities, are all at risk from a blast event. Accurate encoding of auditory patterns in time, frequency, and space is required for a clear understanding of speech and accurate localization of sound sources in environments with background noise, multiple sound sources, and/or reverberation. Further work is needed to refine the battery of clinical tests sensitive to the sorts of central auditory dysfunction observed in individuals with blast exposure. Treatment options include low-gain hearing aids, remote-microphone technology, and auditory-training regimens, but clinical evidence does not yet exist for recommending one or more of these options. As this population ages, the natural aging process and other potential brain injuries (such as stroke and blunt trauma) may combine with blast-related brain changes to produce a population for which the current clinical diagnostic and treatment tools may prove inadequate. It is important to maintain an updated understanding of the scope of the issues present in this population and to continue to identify those solutions that can provide measurable improvements in the lives of Veterans who have been exposed to high-intensity blasts during the course of their military service.

  2. Listener Agreement for Auditory-Perceptual Ratings of Dysarthria

    Science.gov (United States)

    Bunton, Kate; Kent, Raymond D.; Duffy, Joseph R.; Rosenbek, John C.; Kent, Jane F.

    2007-01-01

    Purpose: Darley, Aronson, and Brown (1969a, 1969b) detailed methods and results of auditory-perceptual assessment for speakers with dysarthrias of varying etiology. They reported adequate listener reliability for use of the rating system as a tool for differential diagnosis, but several more recent studies have raised concerns about listener…

  3. Auditory and visual capture during focused visual attention

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.W.; Theeuwes, J.

    2009-01-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person’s visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies ha

  4. The Auditory Verbal Learning Test (Rey AVLT): An Arabic Version

    Science.gov (United States)

    Sharoni, Varda; Natur, Nazeh

    2014-01-01

    The goals of this study were to adapt the Rey Auditory Verbal Learning Test (AVLT) into Arabic, to compare recall functioning among age groups (6:0 to 17:11), and to compare gender differences on various memory dimensions (immediate and delayed recall, learning rate, recognition, proactive interferences, and retroactive interferences). This…

  5. Abnormal connectivity between attentional, language and auditory networks in schizophrenia

    NARCIS (Netherlands)

    Liemburg, Edith J.; Vercammen, Ans; Ter Horst, Gert J.; Curcic-Blake, Branislava; Knegtering, Henderikus; Aleman, Andre

    2012-01-01

    Brain circuits involved in language processing have been suggested to be compromised in patients with schizophrenia. This does not only include regions subserving language production and perception, but also auditory processing and attention. We investigated resting state network connectivity of aud

  6. MR and genetics in schizophrenia: Focus on auditory hallucinations

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Eduardo Jesus [Psychiatric Service, Clinic University Hospital, Avda. Blasco Ibanez 17, 46010 Valencia (Spain)], E-mail: eduardoj.aguilar@gmail.com; Sanjuan, Julio [Psychiatric Unit, Faculty of Medicine, Valencia University, Avda. Blasco Ibanez 17, 46010 Valencia (Spain); Garcia-Marti, Gracian [Department of Radiology, Hospital Quiron, Avda. Blasco Ibanez 14, 46010 Valencia (Spain); Lull, Juan Jose; Robles, Montserrat [ITACA Institute, Polytechnic University of Valencia, Camino de Vera s/n, 46022 Valencia (Spain)

    2008-09-15

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significantly decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented.

  7. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Full Text Available Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  8. Stuttering Inhibition via Altered Auditory Feedback during Scripted Telephone Conversations

    Science.gov (United States)

    Hudock, Daniel; Kalinowski, Joseph

    2014-01-01

    Background: Overt stuttering is inhibited by approximately 80% when people who stutter read aloud as they hear an altered form of their speech feedback to them. However, levels of stuttering inhibition vary from 60% to 100% depending on speaking situation and signal presentation. For example, binaural presentations of delayed auditory feedback…

  9. Central projections of auditory receptor neurons of crickets.

    Science.gov (United States)

    Imaizumi, Kazuo; Pollack, Gerald S

    2005-12-19

    We describe the central projections of physiologically characterized auditory receptor neurons of crickets as revealed by confocal microscopy. Receptors tuned to ultrasonic frequencies (similar to those produced by echolocating, insectivorous bats), to a mid-range of frequencies, and a subset of those tuned to low, cricket-like frequencies have similar projections, terminating medially within the auditory neuropile. Quantitative analysis shows that despite the general similarity of these projections they are tonotopic, with receptors tuned to lower frequencies terminating more medially. Another subset of cricket-song-tuned receptors projects more laterally and posteriorly than the other types. Double-fills of receptors and identified interneurons show that the three medially projecting receptor types are anatomically well positioned to provide monosynaptic input to interneurons that relay auditory information to the brain and to interneurons that modify this ascending information. The more laterally and posteriorly branching receptor type may not interact directly with this ascending pathway, but is well positioned to provide direct input to an interneuron that carries auditory information to more posterior ganglia. These results suggest that information about cricket song is segregated into functionally different pathways as early as the level of receptor neurons. Ultrasound-tuned and mid-frequency tuned receptors have approximately twice as many varicosities, which are sites of transmitter release, per receptor as either anatomical type of cricket-song-tuned receptor. This may compensate in part for the numerical under-representation of these receptor types.

  10. Startle auditory stimuli enhance the performance of fast dynamic contractions.

    Science.gov (United States)

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M

    2014-01-01

    Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration, and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in RFD and peak velocity and a reduction in movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training.

  11. Auditory intensity processing: Effect of MRI background noise.

    Science.gov (United States)

    Angenstein, Nicole; Stadler, Jörg; Brechmann, André

    2016-03-01

    Studies on active auditory intensity discrimination in humans have shown equivocal results regarding the lateralization of processing. Whereas experiments with moderate background noise found evidence for right-lateralized processing of intensity, functional magnetic resonance imaging (fMRI) studies with background scanner noise suggest more left-lateralized processing. With the present fMRI study, we compared the task-dependent lateralization of intensity processing between a conventional continuous echo planar imaging (EPI) sequence with loud background scanner noise and a fast low-angle shot (FLASH) sequence with soft background scanner noise. To determine the lateralization of the processing, we employed the contralateral noise procedure. Linearly frequency modulated (FM) tones were presented monaurally with and without contralateral noise. During both the EPI and the FLASH measurements, the left auditory cortex was more strongly involved than the right auditory cortex while participants categorized the intensity of FM tones. This was shown by a strong effect of the additional contralateral noise on the activity in the left auditory cortex. This means that a massive reduction in background scanner noise still leads to a significant left-lateralized effect. This suggests that the reversed lateralization in fMRI studies with loud background noise, in contrast to studies with softer background noise, cannot be fully explained by the MRI background noise.

  12. Auditory-Visual Perception of Changing Distance by Human Infants.

    Science.gov (United States)

    Walker-Andrews, Arlene S.; Lennon, Elizabeth M.

    1985-01-01

    Examines, in two experiments, 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement. One experiment presented two films with soundtracks in either a match or mismatch condition; the second showed the two films side-by-side with a single soundtrack appropriate to one. Infants demonstrated visual…

  13. Transcranial direct current stimulation as a treatment for auditory hallucinations

    NARCIS (Netherlands)

    Koops, Sanne; van den Brink, Hilde; Sommer, Iris E C

    2015-01-01

    Auditory hallucinations (AH) are a symptom of several psychiatric disorders, such as schizophrenia. In a significant minority of patients, AH are resistant to antipsychotic medication. Alternative treatment options for this medication resistant group are scarce and most of them focus on coping with

  14. The impact of severity of hypertension on auditory brainstem responses

    Directory of Open Access Journals (Sweden)

    Gurdev Lal Goyal

    2014-07-01

    Full Text Available Background: Auditory brainstem response is an objective electrophysiological method for assessing the auditory pathways from the auditory nerve to the brainstem. The aim of this study was to correlate and to assess the degree of involvement of peripheral and central regions of the brainstem auditory pathways with increasing severity of hypertension among patients with essential hypertension. Method: This study was conducted on 50 healthy age- and sex-matched controls (Group I) and 50 hypertensive patients (Group II). The latter group was further subdivided into Group IIa (Grade 1 hypertension), Group IIb (Grade 2 hypertension), and Group IIc (Grade 3 hypertension), as per WHO guidelines. These responses/potentials were recorded using electroencephalogram electrodes on a root-mean-square electromyography EP MARC II (PC-based) machine, and data were statistically compared between the various groups by one-way ANOVA. The parameters used for analysis were the absolute latencies of Waves I through V, interpeak latencies (IPLs), and the amplitude ratio of Wave V/I. Result: The absolute latency of Wave I was observed to be significantly increased in Group IIa and IIb hypertensives, while Wave V absolute latency was highly significantly prolonged in Group IIb and IIc, as compared to that of the normal control group. All the hypertensives, that is, Group IIa, IIb, and IIc patients, were found to have highly significantly prolonged III-V IPLs as compared to those of normal healthy controls. Further, intergroup comparison among hypertensive patients revealed a significant prolongation of Wave V absolute latency and III-V IPL in Group IIb and IIc patients as compared to Group IIa patients. These findings suggest a sensory deficit along with synaptic delays across the auditory pathways in all the hypertensives, the deficit more markedly affecting auditory processing time at the pons-to-midbrain (IPL III-V) region of the auditory pathways among Grade 2 and 3
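
    The interpeak latencies and the Wave V/I amplitude ratio referred to above are simple derived quantities: each IPL is the difference between two absolute wave latencies, and the ratio divides the Wave V amplitude by the Wave I amplitude. A short sketch with illustrative normal-range values (not data from this study):

        # Deriving ABR interpeak latencies (IPLs) and the Wave V/I amplitude ratio
        # from absolute wave latencies; the numbers are illustrative only.
        waves_ms = {"I": 1.6, "III": 3.7, "V": 5.6}     # absolute latencies (ms)
        amplitudes_uv = {"I": 0.25, "V": 0.45}          # peak amplitudes (microvolts)

        ipl = {
            "I-III": waves_ms["III"] - waves_ms["I"],
            "III-V": waves_ms["V"] - waves_ms["III"],   # pons-to-midbrain conduction
            "I-V":   waves_ms["V"] - waves_ms["I"],
        }
        print(ipl)
        print(f"V/I amplitude ratio: {amplitudes_uv['V'] / amplitudes_uv['I']:.2f}")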

  15. Music lessons improve auditory perceptual and cognitive performance in deaf children

    Directory of Open Access Journals (Sweden)

    Françoise eROCHETTE

    2014-07-01

    Full Text Available Despite advanced technologies in the auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development, and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music could be a relevant tool for the rehabilitation of deaf children. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5 to 4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, and auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in the auditory scene analysis, auditory working memory, and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to the development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  16. The effect of visual and auditory cues on seat preference in an opera theater.

    Science.gov (United States)

    Jeon, Jin Yong; Kim, Yong Hee; Cabrera, Densil; Bassett, John

    2008-06-01

    Opera performance conveys both visual and auditory information to an audience, and so opera theaters should be evaluated in both domains. This study investigates the effect of static visual and auditory cues on seat preference in an opera theater. Acoustical parameters were measured and visibility was analyzed for nine seats. Subjective assessments of visual-only, auditory-only, and auditory-visual preferences for these seat positions were made through paired-comparison tests. For the visual-only and auditory-only evaluations, preference judgment tests on a rating scale were also employed. Visual stimuli were based on still photographs, and auditory stimuli were based on binaural impulse responses convolved with a solo tenor recording. For the visual-only experiment, preference was well predicted by measures related to the angle of the seat from the theater midline at the center of the stage, the size of the photographed stage view, the visual obstruction, and the distance from the stage. Sound pressure level was the dominant predictor of auditory preference in the auditory-only experiment. In the cross-modal experiments, both auditory and visual preferences were shown to contribute to overall impression, but auditory cues were more influential than the static visual cues. The results show that both positive visual-only and positive auditory-only evaluations contribute positively to assessments of seat quality.
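
    Paired-comparison preference tests of the kind used in this study are typically summarized by converting pairwise win counts into a preference score per seat. The following sketch assumes a simple win-proportion summary with an invented comparison matrix; it is not the authors' analysis.

```python
# Illustrative sketch of deriving seat-preference scores from a
# paired-comparison experiment, assuming a simple win-proportion summary.
# The matrix below is invented for demonstration, not the study's data.
import numpy as np

# wins[i, j] = number of times seat i was preferred over seat j
wins = np.array([
    [0, 7, 9],
    [3, 0, 6],
    [1, 4, 0],
])
n_comparisons = wins + wins.T              # total judgments per pair
with np.errstate(invalid="ignore"):        # diagonal is 0/0; handled below
    p_win = np.where(n_comparisons > 0, wins / n_comparisons, 0.5)
np.fill_diagonal(p_win, 0.5)

preference_score = p_win.mean(axis=1)      # higher = more preferred seat
print("Seat preference scores:", np.round(preference_score, 2))
```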

  17. Auditory distraction transmitted by a cochlear implant alters allocation of attentional resources

    Directory of Open Access Journals (Sweden)

    Mareike eFinke

    2015-03-01

    Full Text Available Cochlear implants (CIs) are auditory prostheses that restore hearing via electrical stimulation of the auditory nerve. The successful adaptation of auditory cognition to the CI input depends to a substantial degree on individual factors. We pursued an electrophysiological approach towards an analysis of cortical responses that reflect perceptual processing stages and higher-level responses to CI input. Performance and event-related potentials on two cross-modal discrimination-following-distraction tasks were compared between CI users and normal-hearing (NH) individuals. The visual-auditory distraction task combined visual distraction with a following auditory discrimination task. Here, we observed similar cortical responses to visual distractors (Novelty-N2) and slowed, less accurate auditory discrimination performance in CI users compared with NH individuals. Conversely, the auditory-visual distraction task combined auditory distraction with a visual discrimination task. In this task we found attenuated cortical responses to auditory distractors (Novelty-P3), slowed visual discrimination performance, and attenuated cortical P3 responses to visual targets in CI users compared with NH individuals. These results suggest that CI users process auditory distractors differently than NH individuals and that the presence of auditory CI input has an adverse effect on the processing of visual targets and on the visual discrimination ability of implanted individuals. We propose that this attenuation of the visual modality occurs through the allocation of neural resources to the CI input.

  18. Toward a neurobiology of auditory object perception: What can we learn from the songbird forebrain?

    Institute of Scientific and Technical Information of China (English)

    Kai LU; David S. VICARIO

    2011-01-01

    In the acoustic world, no sounds occur entirely in isolation; they always reach the ears in combination with other sounds. How any given sound is discriminated and perceived as an independent auditory object is a challenging question in neuroscience. Although our knowledge of neural processing in the auditory pathway has expanded over the years, no good theory exists to explain how perception of auditory objects is achieved. A growing body of evidence suggests that the selectivity of neurons in the auditory forebrain is under dynamic modulation, and this plasticity may contribute to auditory object perception. We propose that stimulus-specific adaptation in the auditory forebrain of the songbird (and perhaps in other systems) may play an important role in modulating sensitivity in a way that aids discrimination, and thus can potentially contribute to auditory object perception [Current Zoology 57 (6): 671-683, 2011].

  19. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    Science.gov (United States)

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but the processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, it remains unknown whether the audibility of laughter is similarly enhanced by vision. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  20. Relationship between Auditory and Cognitive Abilities in Older Adults.

    Directory of Open Access Journals (Sweden)

    Stanley Sheft

    Full Text Available The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer's Disease Center, participants were a community-dwelling cohort of older adults (range 63-98 years) without a diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects, with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariate race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship with spectral-pattern discrimination performance. For a cohort of older adults without a diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed a significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability contributed significantly to a regression-model prediction of cognitive performance, demonstrating an association of central auditory ability with cognitive status using auditory metrics that avoided the confounding effect of speech materials.
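
    The association analysis described above amounts to a multiple regression of cognitive performance on an auditory measure plus demographic covariates. Below is a hedged sketch using simulated data and ordinary least squares; the variable names and coefficients are illustrative, not results from the cohort.

```python
# Hedged sketch of regressing a global cognition score on an auditory
# predictor plus demographic covariates, in the spirit of the analysis
# described above. All data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 124
age = rng.uniform(63, 98, n)
years_edu = rng.integers(8, 21, n).astype(float)
spectral_threshold = rng.normal(0.0, 1.0, n)   # z-scored discrimination threshold
cognition = 0.4 * years_edu - 0.03 * age - 0.5 * spectral_threshold + rng.normal(0, 1, n)

# Design matrix: intercept + covariates + auditory predictor
X = np.column_stack([np.ones(n), age, years_edu, spectral_threshold])
beta, *_ = np.linalg.lstsq(X, cognition, rcond=None)
for name, b in zip(["intercept", "age", "education", "spectral threshold"], beta):
    print(f"{name:>20s}: {b:+.3f}")
```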

  1. (Central) Auditory Processing: the impact of otitis media

    Directory of Open Access Journals (Sweden)

    Leticia Reis Borges

    2013-07-01

    Full Text Available OBJECTIVE: To analyze auditory processing test results in children who suffered from otitis media in their first five years of life, considering their age, and to classify central auditory processing test findings regarding the hearing skills evaluated. METHODS: A total of 109 students between 8 and 12 years old were divided into three groups. The control group consisted of 40 students from public schools without a history of otitis media. Experimental group I consisted of 39 students from public schools and experimental group II consisted of 30 students from private schools; students in both groups suffered from secretory otitis media in their first five years of life and underwent surgery for placement of bilateral ventilation tubes. The individuals underwent complete audiological evaluation and assessment by auditory processing tests. RESULTS: The left ear showed significantly worse performance than the right ear in the dichotic digits test and the pitch pattern sequence test. The students from the experimental groups showed worse performance than the control group in the dichotic digits and gaps-in-noise tests. Children from experimental group I had significantly lower results on the dichotic digits and gaps-in-noise tests than experimental group II. The hearing skills that were altered were temporal resolution and figure-ground perception. CONCLUSION: Children who suffered from secretory otitis media in their first five years of life and who underwent surgery for placement of bilateral ventilation tubes showed worse performance in auditory abilities, and children from public schools had worse results on auditory processing tests than students from private schools.

  2. Representation of speech in human auditory cortex: is it special?

    Science.gov (United States)

    Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I

    2013-11-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. This article is part of a Special Issue entitled

  3. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in the early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete the model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectrotemporal modulation transfer functions (MTFs) of the model are investigated and shown to be consistent with the salient trends in human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. The model is therefore used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noisy and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional
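
    One way to appreciate the spectrotemporal modulation analysis at the heart of such a model is to take the two-dimensional Fourier transform of a log-spectrogram, which separates temporal modulation rate (Hz) from spectral modulation density. The sketch below is a simplified stand-in for the published model, with all parameters chosen arbitrarily.

```python
# Simplified spectrotemporal modulation analysis: 2-D FFT of a log-spectrogram.
# This is a rough illustration of the idea, not the published model.
import numpy as np
from scipy import signal

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Test sound: an FM sweep with an 8 Hz amplitude modulation
x = signal.chirp(t, f0=300, f1=3000, t1=1.0) * (1 + 0.5 * np.sin(2 * np.pi * 8 * t))

f, frames, spec = signal.spectrogram(x, fs=fs, nperseg=512, noverlap=384)
log_spec = np.log(spec + 1e-10)

mod_spectrum = np.abs(np.fft.fftshift(np.fft.fft2(log_spec - log_spec.mean())))
# Temporal modulation axis in Hz; spectral axis in cycles per Hz
# (a log-frequency axis would instead give cycles per octave).
rate_axis = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[1], d=frames[1] - frames[0]))
scale_axis = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[0], d=f[1] - f[0]))
print("modulation spectrum shape:", mod_spectrum.shape)
print("temporal modulation range: +/- %.1f Hz" % rate_axis.max())
```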

  4. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in the visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for the visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short-interval condition, but shorter durations in the long-interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and these were constrained by sensory modality, the valence of the emotional events, and the target durations. Biases in time perception could be accounted for by a

  5. Behavioral and EEG evidence for auditory memory suppression

    Directory of Open Access Journals (Sweden)

    Maya Elizabeth Cano

    2016-03-01

    Full Text Available The neural basis of motivated forgetting using the Think/No-Think (TNT) paradigm is receiving increased attention, with a particular focus on the mechanisms that enable memory suppression. However, most TNT studies have been limited to the visual domain. To assess whether and to what extent direct memory suppression extends across sensory modalities, we examined behavioral and electroencephalographic (EEG) effects of auditory Think/No-Think in healthy young adults by adapting the TNT paradigm to the auditory modality. Behaviorally, suppression of memory strength was indexed by prolonged response times during the retrieval of subsequently remembered No-Think words. We examined task-related EEG activity during both attempted memory retrieval and inhibition of a previously learned target word during the presentation of its paired associate. Event-related EEG responses revealed two main findings: 1) a centralized Think > No-Think positivity during auditory word presentation (from approximately 0-500 ms), and 2) a sustained Think positivity over parietal electrodes beginning at approximately 600 ms, reflecting the memory retrieval effect, which was significantly reduced for No-Think words. In addition, word-locked theta (4-8 Hz) power was initially greater for No-Think than for Think trials during auditory word presentation over fronto-central electrodes. This was followed by a posterior theta increase indexing successful memory retrieval in the Think condition. The observed event-related potential pattern and theta power analysis are similar to those reported in visual Think/No-Think studies and support a modality-nonspecific mechanism for memory inhibition. The EEG data also provide evidence supporting differing roles and time courses of frontal and parietal regions in the flexible control of auditory memory.
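
    The theta-band analysis mentioned above is commonly implemented by band-pass filtering the EEG between 4 and 8 Hz and taking the squared Hilbert envelope as instantaneous power. The following sketch uses simulated data; the filter order and time windows are arbitrary choices, not the study's parameters.

```python
# Sketch of a theta-band (4-8 Hz) power analysis on a single simulated EEG
# channel: band-pass filter, then squared Hilbert envelope as power.
import numpy as np
from scipy import signal

fs = 500                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.default_rng(2).normal(size=t.size)

sos = signal.butter(4, [4, 8], btype="bandpass", fs=fs, output="sos")
theta = signal.sosfiltfilt(sos, eeg)       # zero-phase band-pass
theta_power = np.abs(signal.hilbert(theta)) ** 2

print("mean theta power 0-500 ms:", round(float(theta_power[: fs // 2].mean()), 3))
print("mean theta power 500-1000 ms:", round(float(theta_power[fs // 2 : fs].mean()), 3))
```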

  6. Clinical presentation and audiologic findings in pediatric auditory neuropathy

    Directory of Open Access Journals (Sweden)

    Navneet Gupta

    2014-01-01

    Full Text Available Aim: The aim of the study was to determine the audiologic findings, related etiologies, and their effects in pediatric patients with hearing deficits most likely due to a neuropathy of the eighth nerve. Study Design: Retrospective, based on a neonatal hearing screening programme. Subjects and Methods: Subjects included 30 children aged 0 to 12 years, tested with pure-tone audiometry, behavioral observation audiometry, free-field audiometry, speech audiometry, auditory brainstem response, and click-evoked otoacoustic emissions. Results: Pure-tone and free-field testing revealed 40 ears (66.67%, n = 60) with sloping sensorineural hearing loss, while 20 ears (33.3%, n = 60) had a flat configuration. Of these, 18 subjects (60%, n = 30) showed a bilaterally similar configuration (either bilateral sloping or bilateral flat audiograms), and the remaining 12 subjects (40%, n = 30) showed bilaterally different patterns. Ten children (33.3%, n = 30) demonstrated fair to poor word discrimination scores and another 2 (6.67%, n = 30) had fair to good word discrimination. For the remaining 18 children (60%, n = 30), speech testing could not be performed because of age limits and poor speech and language development. Of the 30 subjects, 28 (93.3%, n = 30) showed normal distortion-product otoacoustic emissions and 2 (6.67%, n = 30) showed absent emissions. Conclusions: All thirty children demonstrated absent or markedly abnormal brainstem auditory evoked potentials, which, together with the largely normal cochlear outer hair cell function, suggests that the lesion is mostly located at the eighth nerve or beyond. Auditory neuropathy is generally associated with different etiologies, and it is difficult to diagnose with a single audiological test; a sufficient test battery is required for complete assessment and diagnosis of auditory neuropathy.

  7. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual
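
    The ambiguous stimulus described above relies on intensity changes of a nonspatial (diotic) sound to suggest approach or withdrawal. A hedged sketch of how such a looming/receding intensity profile might be generated is given below; the envelope shape, carrier sound, and durations are assumptions for illustration, not the authors' stimulus.

```python
# Illustrative generation of a diotic noise burst whose level first rises
# (suggesting approach) and then falls (suggesting withdrawal).
# All parameters are invented for demonstration purposes.
import numpy as np

fs = 44100
duration = 2.0                              # seconds
t = np.linspace(0, duration, int(fs * duration), endpoint=False)

# Triangular intensity envelope: ramp up over the first half, down over the second
envelope = 1.0 - np.abs(2 * t / duration - 1.0)
noise = np.random.default_rng(3).normal(size=t.size)
stimulus = envelope * noise

# Identical signal in both ears -> no interaural cues, so front/back stays ambiguous
stereo = np.column_stack([stimulus, stimulus])
print("stimulus shape (samples, channels):", stereo.shape)
```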

  8. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and rates in response to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single- and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), with greater suppression at 20 ms pairing intervals for single-unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience.

  9. Anatomy and Physiology of the Auditory Tracts

    OpenAIRE

    Mohammad hosein Hekmat Ara

    1999-01-01

    Hearing is one of the most important senses of the human being. Sound waves travel through the medium of air, enter the ear canal, and strike the tympanic membrane. The middle ear transfers almost 60-80% of this mechanical energy to the inner ear by means of “impedance matching”. The sound energy is then converted into a traveling wave that propagates according to its specific frequency and stimulates the organ of Corti. Receptors in this organ and their synapses transform the mechanical waves into neural signals and transf...

  10. Two distinct auditory-motor circuits for monitoring speech production as revealed by content-specific suppression of auditory cortex.

    Science.gov (United States)

    Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet

    2015-06-01

    Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech.

  11. Quantitative map of multiple auditory cortical regions with a stereotaxic fine-scale atlas of the mouse brain

    OpenAIRE

    Hiroaki Tsukano; Masao Horie; Ryuichi Hishida; Kuniyuki Takahashi; Hirohide Takebayashi; Katsuei Shibuki

    2016-01-01

    Optical imaging studies have recently revealed the presence of multiple auditory cortical regions in the mouse brain. We have previously demonstrated, using flavoprotein fluorescence imaging, at least six regions in the mouse auditory cortex, including the anterior auditory field (AAF), primary auditory cortex (AI), the secondary auditory field (AII), dorsoanterior field (DA), dorsomedial field (DM), and dorsoposterior field (DP). While multiple regions in the visual cortex and somatosensory ...

  12. Hwanggunchungyitang prevents cadmium-induced ototoxicity through suppression of the activation of caspase-9 and extracellular signal-related kinase in auditory HEI-OC1 cells.

    Science.gov (United States)

    Kim, Su-Jin; Shin, Bong-Gi; Choi, In-Young; Kim, Dong-Hyun; Kim, Min-Cheol; Myung, Noh-Yil; Moon, Phil-Dong; Lee, Jeong-Han; An, Hyo-Jin; Kim, Na-Hyung; Lee, Joo-Young; So, Hong-Seob; Park, Rae-Kil; Jeong, Hyun-Ja; Um, Jae-Young; Kim, Hyung-Min; Hong, Seung-Heon

    2009-02-01

    Hwanggunchungyitang (HGCYT) is a newly designed herbal drug formula intended for the treatment of auditory diseases. A number of heavy metals have been associated with toxic effects on the peripheral or central auditory system. Cadmium (Cd(2+)) is a heavy metal and a potent carcinogen implicated in tumor development through occupational and environmental exposure. However, the auditory effect of Cd(2+) is not well understood. The purpose of the present study was to investigate whether HGCYT prevents the ototoxic effects induced by Cd(2+) in the auditory cell line HEI-OC1. HGCYT inhibited the cell death, reactive oxygen species (ROS) generation, and activation of caspase-9 and extracellular signal-related kinase (ERK) induced by Cd(2+). In addition, we observed that cochlear hair cells in the middle turn were damaged by Cd(2+). However, HGCYT prevented the destruction of hair cell arrays in rat primary organ of Corti explants in the presence of Cd(2+). These results support the notion that ROS are involved in Cd(2+) ototoxicity and suggest the therapeutic usefulness of HGCYT against Cd(2+)-induced activation of caspase-9 and ERK.

  13. Global dynamics of selective attention and its lapses in primary auditory cortex.

    Science.gov (United States)

    Lakatos, Peter; Barczak, Annamaria; Neymotin, Samuel A; McGinnis, Tammy; Ross, Deborah; Javitt, Daniel C; O'Connell, Monica Noelle

    2016-12-01

    Previous research demonstrated that while selectively attending to relevant aspects of the external world, the brain extracts pertinent information by aligning its neuronal oscillations to key time points of stimuli or their sampling by sensory organs. This alignment mechanism is termed oscillatory entrainment. We investigated the global, long-timescale dynamics of this mechanism in the primary auditory cortex of nonhuman primates, and hypothesized that lapses of entrainment would correspond to lapses of attention. By examining electrophysiological and behavioral measures, we observed that besides the lack of entrainment by external stimuli, attentional lapses were also characterized by high-amplitude alpha oscillations, with alpha frequency structuring of neuronal ensemble and single-unit operations. Entrainment and alpha-oscillation-dominated periods were strongly anticorrelated and fluctuated rhythmically at an ultra-slow rate. Our results indicate that these two distinct brain states represent externally versus internally oriented computational resources engaged by large-scale task-positive and task-negative functional networks.

  14. Cortical connections of auditory cortex in marmoset monkeys: lateral belt and parabelt regions.

    Science.gov (United States)

    de la Mothe, Lisa A; Blumell, Suzanne; Kajikawa, Yoshinao; Hackett, Troy A

    2012-05-01

    The current working model of primate auditory cortex is constructed from a number of studies of both new and old world monkeys. It includes three levels of processing. A primary level, the core region, is surrounded both medially and laterally by a secondary belt region. A third level of processing, the parabelt region, is located lateral to the belt. The marmoset monkey (Callithrix jacchus jacchus) has become an important model system to study auditory processing, but its anatomical organization has not been fully established. In previous studies, we focused on the architecture and connections of the core and medial belt areas (de la Mothe et al., 2006a, J Comp Neurol 496:27-71; de la Mothe et al., 2006b, J Comp Neurol 496:72-96). In this study, the corticocortical connections of the lateral belt and parabelt were examined in the marmoset. Tracers were injected into both rostral and caudal portions of the lateral belt and parabelt. Both regions revealed topographic connections along the rostrocaudal axis, where caudal areas of injection had stronger connections with caudal areas, and rostral areas of injection with rostral areas. The lateral belt had strong connections with the core, belt, and parabelt, whereas the parabelt had strong connections with the belt but not the core. Label in the core from injections in the parabelt was significantly reduced or absent, consistent with the idea that the parabelt relies mainly on the belt for its cortical input. In addition, the present and previous studies indicate hierarchical principles of anatomical organization in the marmoset that are consistent with those observed in other primates.

  15. Bioacoustic Signal Classification in Cat Auditory Cortex

    Science.gov (United States)

    1994-01-01

    representation as the input (front end) to a self-organizing signal classifier and as a training pattern for the output of a dynamic neural network. In the ... potential use as a front end for a biologically based signal classifier, their use as a trainer for network models, and their ability to predict spatial ... usually contributing more spikes, at best level, than monotonic neurons. d) One region in the center of the dorsal-ventral extent of cat AI appears to have

  16. Horseradish peroxidase dye tracing and embryonic statoacoustic ganglion cell transplantation in the rat auditory nerve trunk.

    Science.gov (United States)

    Palmgren, Björn; Jin, Zhe; Jiao, Yu; Kostyszyn, Beata; Olivius, Petri

    2011-03-04

    At present, severe damage to hair cells and sensory neurons in the inner ear results in non-treatable auditory disorders. Cell implantation is a potential treatment for various neurological disorders and has already been used in clinical practice. In the inner ear, delivery of therapeutic substances, including neurotrophic factors and stem cells, provides strategies that may in the future ameliorate or restore hearing impairment. In order to describe a surgical auditory nerve trunk approach, in the present paper we injected the neuronal tracer horseradish peroxidase (HRP) into the central part of the nerve by an intracranial approach. We further evaluated the applicability of this approach by implanting statoacoustic ganglion (SAG) cells into the same location of the auditory nerve in normal-hearing rats or in animals deafened by application of β-bungarotoxin to the round window niche. The HRP results illustrate labeling in the cochlear nucleus in the brainstem as well as peripherally in the spiral ganglion neurons in the cochlea. The transplanted SAGs were observed within the auditory nerve trunk but no more peripherally than the CNS-PNS transitional zone. Interestingly, the auditory nerve injection did not impair auditory function, as evidenced by the auditory brainstem response. The present findings illustrate that an auditory nerve trunk approach may well access the entire auditory nerve and does not compromise auditory function. We suggest that such an approach might constitute a suitable route for cell transplantation into this sensory cranial nerve.

  17. Auditory-model-based Feature Extraction Method for Mechanical Faults Diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Yungong; ZHANG Jinping; DAI Li; ZHANG Zhanyi; LIU Jie

    2010-01-01

    It is well known that the human auditory system possesses remarkable capabilities for analyzing and identifying signals. It would therefore be significant to build an auditory model based on the mechanisms of the human auditory system, which may improve the effectiveness of mechanical signal analysis and enrich the methods of extracting mechanical fault features. However, existing methods are all based on explicit mathematical or physical formulations and have shortcomings in distinguishing different faults, in stability, and in suppressing disturbance noise. To improve the performance of feature extraction, an auditory model, the early auditory (EA) model, is introduced for the first time. This auditory model transforms a time-domain signal into an auditory spectrum via bandpass filtering, nonlinear compression, and lateral inhibition, simulating the principles of the human auditory system. The EA model is developed with a Gammatone filterbank as the basilar membrane. According to the characteristics of vibration signals, a method is proposed for determining the parameters of the inner-hair-cell model of the EA model. The performance of the EA model is evaluated through experiments on four rotor faults, including misalignment, rotor-to-stator rubbing, oil film whirl, and pedestal looseness. The results show that the auditory spectrum, the output of the EA model, can effectively distinguish different faults with satisfactory stability and has the ability to suppress disturbance noise. It is therefore feasible to apply the auditory model, as a new method, to feature extraction for mechanical fault diagnosis.
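
    The basilar-membrane stage of the EA model described above is built on a Gammatone filterbank. The sketch below shows one common way to realize such a filterbank from its impulse response; the filter order, bandwidth constant (1.019 x ERB), and channel spacing are conventional choices assumed here, not parameters taken from the paper.

```python
# Sketch of a Gammatone filterbank front end applied to a toy vibration signal.
# Conventional parameter choices (4th order, ERB bandwidths); illustrative only.
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Hz) at centre frequency f (Hz)."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.05, order=4):
    """Impulse response of a gammatone filter centred at fc (Hz)."""
    t = np.arange(0, duration, 1.0 / fs)
    b = 1.019 * erb(fc)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

fs = 12000
t = np.arange(0, 0.5, 1 / fs)
vibration = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 173 * t)  # toy rotor signal

centre_freqs = np.geomspace(40, 4000, 16)
# RMS output per channel as a crude "auditory spectrum"
auditory_spectrum = np.array([
    np.sqrt(np.mean(np.convolve(vibration, gammatone_ir(fc, fs), mode="same") ** 2))
    for fc in centre_freqs
])
print(np.round(auditory_spectrum, 3))
```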

  18. Tuning shifts of the auditory system by corticocortical and corticofugal projections and conditioning.

    Science.gov (United States)

    Suga, Nobuo

    2012-02-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and nonlemniscal auditory nuclei are different from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation - comparable to repetitive tonal stimulation - of the lemniscal system evokes three major types of changes in the physiological properties, such as the tuning to specific values of acoustic parameters of cortical and subcortical auditory neurons through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that is different from those evoked by the lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a "differential" gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning respectively elicit tone-specific and nonspecific plastic changes. The lemniscal, corticofugal and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews the recent progress in the research of corticocortical and corticofugal modulations of the auditory system and its plasticity elicited by conditioning and pseudo-conditioning.

  19. Spatial audition in a static virtual environment: the role of auditory-visual interaction

    Directory of Open Access Journals (Sweden)

    Isabelle Viaud-Delmon

    2009-04-01

    Full Text Available The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interaction obeys complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering in VR setups.

  20. Areas of cat auditory cortex as defined by neurofilament proteins expressing SMI-32.

    Science.gov (United States)

    Mellott, Jeffrey G; Van der Gucht, Estel; Lee, Charles C; Carrasco, Andres; Winer, Jeffery A; Lomber, Stephen G

    2010-08-01

    The monoclonal antibody SMI-32 was used to characterize and distinguish individual areas of cat auditory cortex. SMI-32 labels non-phosphorylated epitopes on the high- and medium-molecular weight subunits of neurofilament proteins in cortical pyramidal cells and dendritic trees with the most robust immunoreactivity in layers III and V. Auditory areas with unique patterns of immunoreactivity included: primary auditory cortex (AI), second auditory cortex (AII), dorsal zone (DZ), posterior auditory field (PAF), ventral posterior auditory field (VPAF), ventral auditory field (VAF), temporal cortex (T), insular cortex (IN), anterior auditory field (AAF), and the auditory field of the anterior ectosylvian sulcus (fAES). Unique patterns of labeling intensity, soma shape, soma size, layers of immunoreactivity, laminar distribution of dendritic arbors, and labeled cell density were identified. Features that were consistent in all areas included: layers I and IV neurons are immunonegative; nearly all immunoreactive cells are pyramidal; and immunoreactive neurons are always present in layer V. To quantify the results, the numbers of labeled cells and dendrites, as well as cell diameter, were collected and used as tools for identifying and differentiating areas. Quantification of the labeling patterns also established profiles for ten auditory areas/layers and their degree of immunoreactivity. Areal borders delineated by SMI-32 were highly correlated with tonotopically-defined areal boundaries. Overall, SMI-32 immunoreactivity can delineate ten areas of cat auditory cortex and demarcate topographic borders. The ability to distinguish auditory areas with SMI-32 is valuable for the identification of auditory cerebral areas in electrophysiological, anatomical, and/or behavioral investigations.